MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.
Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris
2017-05-01
Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.
Autonomous system for Web-based microarray image analysis.
Bozinov, Daniel
2003-12-01
Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input solely confined to a single microarray image and a data table as output containing measurements for all gene spots would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome inherent challenges in image processing. Herein is introduced an integrated software system with a Java-based interface on the client side that allows for decentralized access and furthermore enables the scientist to instantly employ the most updated software version at any given time. This software tool extends PixClust, as used in Extractiff, and incorporates Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to their users.
Barton, G; Abbott, J; Chiba, N; Huang, DW; Huang, Y; Krznaric, M; Mack-Smith, J; Saleem, A; Sherman, BT; Tiwari, B; Tomlinson, C; Aitman, T; Darlington, J; Game, L; Sternberg, MJE; Butcher, SA
2008-01-01
Background Microarray experimentation requires the application of complex analysis methods as well as the use of non-trivial computer technologies to manage the resultant large data sets. This, together with the proliferation of tools and techniques for microarray data analysis, makes it very challenging for a laboratory scientist to keep up-to-date with the latest developments in this field. Our aim was to develop a distributed e-support system for microarray data analysis and management. Results EMAAS (Extensible MicroArray Analysis System) is a multi-user rich internet application (RIA) providing simple, robust access to up-to-date resources for microarray data storage and analysis, combined with integrated tools to optimise real time user support and training. The system leverages the power of distributed computing to perform microarray analyses, and provides seamless access to resources located at various remote facilities. The EMAAS framework allows users to import microarray data from several sources to an underlying database, to pre-process, quality assess and analyse the data, to perform functional analyses, and to track data analysis steps, all through a single easy to use web portal. This interface offers distance support to users both in the form of video tutorials and via live screen feeds using the web conferencing tool EVO. A number of analysis packages, including R-Bioconductor and Affymetrix Power Tools have been integrated on the server side and are available programmatically through the Postgres-PLR library or on grid compute clusters. Integrated distributed resources include the functional annotation tool DAVID, GeneCards and the microarray data repositories GEO, CELSIUS and MiMiR. EMAAS currently supports analysis of Affymetrix 3' and Exon expression arrays, and the system is extensible to cater for other microarray and transcriptomic platforms. 
Conclusion EMAAS enables users to track and perform microarray data management and analysis tasks through a single easy-to-use web application. The system architecture is flexible and scalable to allow new array types, analysis algorithms and tools to be added with relative ease and to cope with large increases in data volume. PMID:19032776
ERIC Educational Resources Information Center
Rowland-Goldsmith, Melissa
2009-01-01
DNA microarray is an ordered grid containing known sequences of DNA, which represent many of the genes in a particular organism. Each DNA sequence is unique to a specific gene. This technology enables the researcher to screen many genes from cells or tissue grown in different conditions. We developed an undergraduate lecture and laboratory…
Excess Capacity in China’s Power Systems: A Regional Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jiang; Liu, Xu; Karl, Fredrich
2016-11-01
This paper examines China’s regional electricity grids using a reliability perspective, which is commonly measured in terms of a reserve margin. Our analysis shows that at the end of 2014, the average reserve margin for China as a whole was roughly 28%, almost twice as high as a typical planning reserve margin in the U.S. However, this national average masks huge variations in reserve margins across major regional power grid areas: the northeastern region has the highest reserve margin of over 60%, followed by the northwestern region at 49%, and the southern grid area at 35%. In this analysis, we also examined future reserve margins for regional electricity grids in China under two scenarios: 1) a low scenario of national annual electricity consumption growth rates of 1.5% between 2015 and 2020 and 1.0% between 2020 and 2025, and 2) a high scenario of annual average growth rates of 3.0% and 2.0%, respectively. Both scenarios suggest that the northeastern, northwestern, and southern regions have significant excess generation capacity, and that this excess capacity situation will continue over the next decade without regulatory intervention. The northern and central regions could have sufficient generation capacity to 2020, but may require additional resources in a higher growth scenario. The eastern region requires new resources by 2020 in both scenarios.
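The reserve-margin figures quoted in this abstract follow from a simple definition: the margin is installed capacity in excess of peak demand, expressed as a fraction of peak demand. The sketch below uses assumed capacity and demand numbers, not the paper's regional data.

```python
def reserve_margin(capacity_gw, peak_demand_gw):
    """Fractional reserve margin: (installed capacity - peak demand) / peak demand."""
    return (capacity_gw - peak_demand_gw) / peak_demand_gw

# Illustrative region with 160 GW installed capacity against a 100 GW peak load
# (hypothetical figures), giving the kind of 60% margin cited for the northeast.
margin = reserve_margin(160.0, 100.0)
print(f"reserve margin = {margin:.0%}")
```

At these assumed numbers the call returns 0.60, i.e. a 60% reserve margin.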
SimArray: a user-friendly and user-configurable microarray design tool
Auburn, Richard P; Russell, Roslin R; Fischer, Bettina; Meadows, Lisa A; Sevillano Matilla, Santiago; Russell, Steven
2006-01-01
Background Microarrays were first developed to assess gene expression but are now also used to map protein-binding sites and to assess allelic variation between individuals. Regardless of the intended application, efficient production and appropriate array design are key determinants of experimental success. Inefficient production can make larger-scale studies prohibitively expensive, whereas poor array design makes normalisation and data analysis problematic. Results We have developed a user-friendly tool, SimArray, which generates a randomised spot layout, computes a maximum meta-grid area, and estimates the print time, in response to user-specified design decisions. Selected parameters include: the number of probes to be printed; the microtitre plate format; the printing pin configuration, and the achievable spot density. SimArray is compatible with all current robotic spotters that employ 96-, 384- or 1536-well microtitre plates, and can be configured to reflect most production environments. Print time and maximum meta-grid area estimates facilitate evaluation of each array design for its suitability. Randomisation of the spot layout facilitates correction of systematic biases by normalisation. Conclusion SimArray is intended to help both established researchers and those new to the microarray field to develop microarray designs with randomised spot layouts that are compatible with their specific production environment. SimArray is an open-source program and is freely available. PMID:16509966
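A print-time estimate of the kind SimArray reports can be sketched from the probe count and pin configuration. The throughput model below (one spot per pin per cycle, with periodic sample reloads) is a hypothetical simplification, not SimArray's actual algorithm, and every number is an assumption.

```python
import math

def print_estimate(n_probes, pins, spots_per_pin_visit, secs_per_cycle):
    """Rough print-time estimate (seconds) for a spotting robot.

    Hypothetical model: each cycle deposits one spot per pin; after
    `spots_per_pin_visit` deposits the head must reload (wash + dip),
    which is counted as one extra cycle per reload.
    """
    spots_per_load = pins * spots_per_pin_visit
    loads = math.ceil(n_probes / spots_per_load)
    cycles = math.ceil(n_probes / pins)
    return (cycles + loads) * secs_per_cycle

# A hypothetical 23,040-probe design printed with a 48-pin head:
seconds = print_estimate(23040, pins=48, spots_per_pin_visit=100, secs_per_cycle=2)
hours = seconds / 3600
```

Such an estimate lets competing designs (different plate formats or pin counts) be compared before committing robot time, which is the evaluation SimArray automates.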
Low-Cost Peptide Microarrays for Mapping Continuous Antibody Epitopes.
McBride, Ryan; Head, Steven R; Ordoukhanian, Phillip; Law, Mansun
2016-01-01
With the increasing need for understanding antibody specificity in antibody and vaccine research, pepscan assays provide a rapid method for mapping and profiling antibody responses to continuous epitopes. We have developed a relatively low-cost method to generate peptide microarray slides for studying antibody binding. Using a setup of an IntavisAG MultiPep RS peptide synthesizer, a Digilab MicroGrid II 600 microarray printer robot, and an InnoScan 1100 AL scanner, the method allows the interrogation of up to 1536 overlapping, alanine-scanning, and mutant peptides derived from the target antigens. Each peptide is tagged with a polyethylene glycol aminooxy terminus to improve peptide solubility, orientation, and conjugation efficiency to the slide surface.
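Generating the overlapping and alanine-scanning peptides described above is mechanical once the antigen sequence, peptide length, and offset are chosen. The sketch below uses a made-up antigen sequence and arbitrary length/offset parameters for illustration.

```python
def overlapping_peptides(seq, length=15, offset=3):
    """Tile an antigen sequence into overlapping peptides for a pepscan array."""
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, offset)]

def alanine_scan(peptide):
    """All single-position alanine substitutions of one peptide
    (positions that are already alanine are skipped)."""
    return [peptide[:i] + "A" + peptide[i + 1:]
            for i in range(len(peptide)) if peptide[i] != "A"]

antigen = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # made-up 33-residue example
peps = overlapping_peptides(antigen, length=15, offset=3)
mutants = alanine_scan(peps[0])
```

The union of the tiling peptides and their alanine mutants is the kind of peptide set (up to 1536 spots) that the printing setup in the abstract interrogates.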
Liu, Jiabin; Behrens, Timothy W.; Kearney, John F.
2014-01-01
Marginal Zone (MZ) B cells play an important role in the clearance of blood-borne bacterial infections via rapid T-independent IgM responses. We have previously demonstrated that MZ B cells respond rapidly and robustly to bacterial particulates. To determine the MZ-specific genes that are expressed to allow for this response, MZ and Follicular (FO) B cells were sort-purified and analyzed via DNA microarray analysis. We identified 181 genes that were significantly different between the two B cell populations: 99 genes were more highly expressed in MZ B cells, while 82 were more highly expressed in FO B cells. To further understand the molecular mechanisms by which MZ B cells respond so rapidly to bacterial challenge, idiotype-positive and -negative MZ B cells were sort-purified before (0 hour) or after (1 hour) i.v. immunization with heat-killed Streptococcus pneumoniae, R36A, and analyzed via DNA microarray analysis. We identified genes specifically upregulated or downregulated at 1 hour following immunization in the idiotype-positive MZ B cells. These results give insight into the gene expression pattern in resting MZ vs. FO B cells and the specific regulation of gene expression in antigen-specific MZ B cells following interaction with antigen. PMID:18453586
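Finding genes that differ significantly between two sorted populations, as above, is typically done with a per-gene statistical test. The sketch below runs a per-gene t-test on a simulated expression matrix; the gene and sample counts, effect size, and significance cutoff are invented, and this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic expression matrix: 200 genes x 3 samples per population.
# The first 20 genes are simulated as truly up in population 1 (illustrative).
n_genes, shift = 200, 2.0
pop1 = rng.normal(0.0, 1.0, size=(n_genes, 3))
pop2 = rng.normal(0.0, 1.0, size=(n_genes, 3))
pop1[:20] += shift

# One two-sample t-test per gene (row), across the sample axis.
t, p = stats.ttest_ind(pop1, pop2, axis=1)
significant = np.flatnonzero(p < 0.05)
```

In practice a multiple-testing correction (e.g. FDR) would be applied to `p` before calling genes differentially expressed; the raw cutoff here is only for illustration.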
AGScan: a pluggable microarray image quantification software based on the ImageJ library.
Cathelin, R; Lopez, F; Klopp, Ch
2007-01-15
Many different programs are available to analyze microarray images. Most programs are commercial packages, some are free. In the latter group only few propose automatic grid alignment and batch mode. More often than not a program implements only one quantification algorithm. AGScan is an open source program that works on all major platforms. It is based on the ImageJ library [Rasband (1997-2006)] and offers a plug-in extension system to add new functions to manipulate images, align grid and quantify spots. It is appropriate for daily laboratory use and also as a framework for new algorithms. The program is freely distributed under X11 Licence. The install instructions can be found in the user manual. The software can be downloaded from http://mulcyber.toulouse.inra.fr/projects/agscan/. The questions and plug-ins can be sent to the contact listed below.
ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.
Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles
2018-04-19
Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. 
We developed a novel de-arraying approach for TMA analysis. By combining wavelet-based detection, active contour segmentation, and thin-plate spline interpolation, our approach is able to handle TMA images with high dynamic range, poor signal-to-noise ratio, complex background, and non-linear deformation of the TMA grid. In addition, the deformation estimation produces quantitative information to assess the manufacturing quality of TMAs.
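The grid-deformation step described above can be sketched with a thin-plate spline fit that maps design-grid positions to detected sample centres; the fitted map then predicts where a rejected or missing sample should sit. This is an illustrative reconstruction using SciPy's RBF interpolator, not ATMAD's code, and the grid size and synthetic deformation are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Design grid: 3 x 3 core positions (arbitrary units).
gx, gy = np.meshgrid(np.arange(3.0), np.arange(3.0))
design = np.column_stack([gx.ravel(), gy.ravel()])

# Detected core centres: the design warped by a small synthetic deformation.
detected = design + 0.05 * np.sin(design[:, ::-1])

# Thin-plate spline mapping design slots to detected positions.
tps = RBFInterpolator(design, detected, kernel='thin_plate_spline')

# Predict where a previously rejected core should sit, given its design slot.
pred = tps(np.array([[1.0, 1.0]]))
```

With zero smoothing the spline interpolates the matched pairs exactly, so the residual between `tps(design)` and `detected` is numerically zero; the value of the map away from matched pairs is what guides the second segmentation pass.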
Voltage collapse in complex power grids
Simpson-Porco, John W.; Dörfler, Florian; Bullo, Francesco
2016-01-01
A large-scale power grid's ability to transfer energy from producers to consumers is constrained by both the network structure and the nonlinear physics of power flow. Violations of these constraints have been observed to result in voltage collapse blackouts, where nodal voltages slowly decline before precipitously falling. However, methods to test for voltage collapse are dominantly simulation-based, offering little theoretical insight into how grid structure influences stability margins. For a simplified power flow model, here we derive a closed-form condition under which a power network is safe from voltage collapse. The condition combines the complex structure of the network with the reactive power demands of loads to produce a node-by-node measure of grid stress, a prediction of the largest nodal voltage deviation, and an estimate of the distance to collapse. We extensively test our predictions on large-scale systems, highlighting how our condition can be leveraged to increase grid stability margins. PMID:26887284
Employing image processing techniques for cancer detection using microarray images.
Dehghan Khalilabad, Nastaran; Hassanpour, Hamid
2017-02-01
Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from images. The data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, using the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, a microarray database comprising Breast cancer, Myeloid Leukemia, and Lymphoma samples from the Stanford Microarray Database is employed. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively.
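The gridding (gene-locating) step of such a pipeline can be sketched by peak-picking on row and column projection profiles of the image: spot rows and columns show up as strong peaks in the summed intensities. The toy image, spot layout, and separation threshold below are invented; this is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy microarray: a 4 x 4 block of bright spots on a dark noisy background.
img = rng.normal(10.0, 1.0, size=(80, 80))
centers = np.arange(10, 80, 20)          # true spot centres: 10, 30, 50, 70
for r in centers:
    for c in centers:
        img[r - 3:r + 4, c - 3:c + 4] += 50.0

def grid_lines(profile, n, min_sep=10):
    """Pick the n strongest, mutually separated peaks of a projection profile."""
    picked = []
    for i in np.argsort(profile)[::-1]:   # indices by descending intensity
        if all(abs(i - p) > min_sep for p in picked):
            picked.append(int(i))
        if len(picked) == n:
            break
    return sorted(picked)

rows = grid_lines(img.sum(axis=1), 4)    # horizontal grid positions
cols = grid_lines(img.sum(axis=0), 4)    # vertical grid positions
```

Intersecting the recovered row and column positions yields the 16 spot locations from which raw intensities would then be extracted.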
[Typing and subtyping avian influenza virus using DNA microarrays].
Yang, Zhongping; Wang, Xiurong; Tian, Lina; Wang, Yu; Chen, Hualan
2008-07-01
Outbreaks of highly pathogenic avian influenza (HPAI) virus have caused great economic loss to the poultry industry and resulted in human deaths in Thailand and Vietnam since 2004. Rapid typing and subtyping of viruses, especially HPAI, from clinical specimens are desirable for taking prompt control measures to prevent spreading of the disease. We describe a simultaneous approach using microarrays to detect and subtype avian influenza virus (AIV). We designed primers for probe genes and used reverse transcriptase PCR to prepare cDNAs of the AIV M gene, the H5, H7, and H9 subtype haemagglutinin genes, and the N1 and N2 subtype neuraminidase genes. They were cloned, sequenced, reamplified, and spotted to form glass-bound microarrays. We labeled samples with Cy3-dUTP by RT-PCR, then hybridized and scanned the microarrays to type and subtype AIV. The hybridization pattern agreed perfectly with the known grid location of each probe, and no cross hybridization was detected. Examination of HA subtypes 1 through 15, 30 infected samples, and 21 field samples revealed that the DNA microarray assay was more sensitive and specific than the RT-PCR test and chicken embryo inoculation. It can simultaneously detect and differentiate the main epidemic AIV subtypes. The results show that DNA microarray technology is a useful diagnostic method.
Segmentation and intensity estimation of microarray images using a gamma-t mixture model.
Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J
2007-02-15
We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. 
The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle shape and artifacts. We apply our method for gridding, segmentation and estimation to real cDNA microarray images and artificial data. Our method provides better segmentation results for various spot shapes, as well as better intensity estimation, than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data. The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and segmentation/estimation are available upon request. Supplementary material is available at Bioinformatics online.
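The core idea of segmenting pixels by a two-component mixture fitted with EM can be illustrated on a greatly simplified 1-D Gaussian analogue: one component for background intensity, one for foreground, with segmentation read off the posterior probabilities. The gamma and t components, the bivariate red/green treatment, and the kernel-smoothing step of the actual method are all omitted here, and the simulated intensities are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated pixel intensities: dim background plus a bright foreground spot.
x = np.concatenate([rng.normal(2.0, 0.5, 700),   # background pixels
                    rng.normal(6.0, 1.0, 300)])  # foreground (spot) pixels

def dens(m, s):
    """Normal density of every pixel value under N(m, s^2)."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# EM for a two-component Gaussian mixture; pi is the foreground weight.
pi, mu, sd = 0.5, np.array([1.0, 5.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior probability that each pixel is foreground.
    w_bg = (1 - pi) * dens(mu[0], sd[0])
    w_fg = pi * dens(mu[1], sd[1])
    r = w_fg / (w_bg + w_fg)
    # M-step: re-estimate mixing weight, means, and standard deviations.
    pi = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r)),
                   np.sqrt(np.average((x - mu[1]) ** 2, weights=r))])

segmented_fg = r > 0.5   # hard segmentation from posterior probabilities
```

Because segmentation comes from posterior membership rather than a fixed spot mask, irregular spot shapes are handled naturally, which is the property the paper exploits with its richer gamma-t components.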
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than those of the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
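The arithmetic-mean and harmonic-mean estimators compared above can be illustrated on a toy conjugate-Gaussian model where the true marginal likelihood is known in closed form: AME averages the likelihood over prior draws, while HME takes the harmonic mean of the likelihood over posterior draws. The model, observation, and sample sizes are invented for illustration, and the SHME and TIE estimators are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: one observation y ~ N(theta, 1) with prior theta ~ N(0, 1).
# The marginal likelihood is then N(y; 0, 2), known exactly.
y = 1.5
true_ml = np.exp(-y ** 2 / 4) / np.sqrt(4 * np.pi)

def lik(theta):
    """Likelihood of y for each value of theta."""
    return np.exp(-0.5 * (y - theta) ** 2) / np.sqrt(2 * np.pi)

# Arithmetic mean estimator: average likelihood over prior draws.
theta_prior = rng.normal(0.0, 1.0, 200_000)
ame = lik(theta_prior).mean()

# Harmonic mean estimator: posterior here is N(y/2, 1/2) in closed form.
theta_post = rng.normal(y / 2, np.sqrt(0.5), 200_000)
hme = 1.0 / np.mean(1.0 / lik(theta_post))
```

Even in this benign setting the HME is notoriously high-variance (the reciprocal likelihood has heavy tails under the posterior), which is consistent with the paper's finding that TIE is the more stable estimator.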
Andrews, Brian D.; Chaytor, Jason D.; ten Brink, Uri S.; Brothers, Daniel S.; Gardner, James V.; Lobecker, Elizabeth A.; Calder, Brian R.
2016-01-01
A bathymetric terrain model of the Atlantic margin covering almost 725,000 square kilometers of seafloor from the New England Seamounts in the north to the Blake Basin in the south is compiled from existing multibeam bathymetric data for marine geological investigations. Although other terrain models of the same area are extant, they are produced either from satellite-derived bathymetry at coarse resolution (ETOPO1) or from older bathymetric data collected using a combination of single beam and multibeam sonars (Coastal Relief Model). The new multibeam data used to produce this terrain model have been edited by using hydrographic data processing software to maximize the quality, usability, and cartographic presentation of the combined 100-meter resolution grid. The final grid provides the largest high-resolution, seamless terrain model of the Atlantic margin.
Statistical issues in signal extraction from microarrays
NASA Astrophysics Data System (ADS)
Bergemann, Tracy; Quiaoit, Filemon; Delrow, Jeffrey J.; Zhao, Lue Ping
2001-06-01
Microarray technologies are increasingly used in biomedical research to study genome-wide expression profiles in the post genomic era. Their popularity is largely due to their high throughput and economical affordability. For example, microarrays have been applied to studies of cell cycle, regulatory circuitry, cancer cell lines, tumor tissues, and drug discoveries. One obstacle facing the continued success of applying microarray technologies, however, is the random variation present on microarrays: within signal spots, between spots and among chips. In addition, signals extracted by available software packages seem to vary significantly. Despite a variety of software packages, it appears that there are two major approaches to signal extraction. One approach is to focus on the identification of signal regions and hence estimation of signal levels above background levels. The other approach is to use the distribution of intensity values as a way of identifying relevant signals. Building upon both approaches, the objective of our work is to develop a method that is statistically rigorous and also efficient and robust. Statistical issues to be considered here include: (1) how to refine grid alignment so that the overall variation is minimized, (2) how to estimate the signal levels relative to the local background levels as well as the variance of this estimate, and (3) how to integrate red and green channel signals so that the ratio of interest is stable, simultaneously relaxing distributional assumptions.
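Estimating a spot's signal above its local background and then forming the red/green ratio, as discussed in issues (2) and (3), can be sketched as follows. The pixel patch, the crude median-split segmentation, and all intensity values are invented; robust estimation of the ratio's variance, which the paper targets, is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

# One spot's pixel patch in the red and green channels (synthetic).
red = rng.normal(200.0, 10.0, size=(9, 9))
green = rng.normal(200.0, 10.0, size=(9, 9))
red[2:7, 2:7] += 800.0      # bright spot region, red channel
green[2:7, 2:7] += 400.0    # same region, dimmer in green

def spot_signal(patch):
    """Median foreground minus median local background.

    Foreground = pixels above the patch median (a crude segmentation);
    background = the remaining pixels.
    """
    fg = patch > np.median(patch)
    return np.median(patch[fg]) - np.median(patch[~fg])

# The quantity of biological interest: the log2 red/green expression ratio.
log_ratio = np.log2(spot_signal(red) / spot_signal(green))
```

Using medians rather than means makes both the foreground and background estimates resistant to the isolated bright or dark pixels that plague scanned arrays.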
Locational Marginal Pricing in the Campus Power System at the Power Distribution Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Gu, Yi; Zhang, Yingchen
2016-11-14
In the development of smart grid at distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distributional nodal prices. Both Direct Current Optimal Power Flow and Alternate Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
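In the uncongested, lossless special case, locational marginal pricing reduces to merit-order dispatch: every bus sees the same price, set by the most expensive generator needed to meet demand. The sketch below assumes made-up generator offers and ignores the network entirely, unlike the paper's OPF-based method, which is exactly what introduces bus-to-bus price differences.

```python
def system_marginal_price(offers, demand_mw):
    """Clear a one-node market by merit order; return the marginal offer price.

    `offers` is a list of (capacity_mw, price_per_mwh) pairs. With no
    congestion or losses, this single price is the LMP at every bus.
    """
    remaining = demand_mw
    for cap, price in sorted(offers, key=lambda o: o[1]):
        remaining -= cap
        if remaining <= 0:
            return price
    raise ValueError("insufficient capacity to meet demand")

offers = [(100, 20.0), (50, 35.0), (80, 50.0)]   # hypothetical generators
lmp = system_marginal_price(offers, demand_mw=130)
```

At 130 MW of demand the 20 $/MWh unit is exhausted and the 35 $/MWh unit becomes marginal, so the price is 35.0; an OPF formulation recovers the same value as the dual variable of the power-balance constraint.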
Smith, Maria W.; Herfort, Lydie; Tyrol, Kaitlin; Suciu, Dominic; Campbell, Victoria; Crump, Byron C.; Peterson, Tawnya D.; Zuber, Peter; Baptista, Antonio M.; Simon, Holly M.
2010-01-01
Through their metabolic activities, microbial populations mediate the impact of high gradient regions on ecological function and productivity of the highly dynamic Columbia River coastal margin (CRCM). A 2226-probe oligonucleotide DNA microarray was developed to investigate expression patterns for microbial genes involved in nitrogen and carbon metabolism in the CRCM. Initial experiments with the environmental microarrays were directed toward validation of the platform and yielded high reproducibility in multiple tests. Bioinformatic and experimental validation also indicated that >85% of the microarray probes were specific for their corresponding target genes and for a few homologs within the same microbial family. The validated probe set was used to query gene expression responses by microbial assemblages to environmental variability. Sixty-four samples from the river, estuary, plume, and adjacent ocean were collected in different seasons and analyzed to correlate the measured variability in chemical, physical and biological water parameters to differences in global gene expression profiles. The method produced robust seasonal profiles corresponding to pre-freshet spring (April) and late summer (August). Overall relative gene expression was high in both seasons and was consistent with high microbial abundance measured by total RNA, heterotrophic bacterial production, and chlorophyll a. Both seasonal patterns involved large numbers of genes that were highly expressed relative to background, yet each produced very different gene expression profiles. April patterns revealed high differential gene expression in the coastal margin samples (estuary, plume and adjacent ocean) relative to freshwater, while little differential gene expression was observed along the river-to-ocean transition in August. Microbial gene expression profiles appeared to relate, in part, to seasonal differences in nutrient availability and potential resource competition. 
Furthermore, our results suggest that highly-active particle-attached microbiota in the Columbia River water column may perform dissimilatory nitrate reduction (both denitrification and DNRA) within anoxic particle microniches. PMID:20967204
Gillet, Jean-Pierre; Molina, Thierry Jo; Jamart, Jacques; Gaulard, Philippe; Leroy, Karen; Briere, Josette; Theate, Ivan; Thieblemont, Catherine; Bosly, Andre; Herin, Michel; Hamels, Jacques; Remacle, Jose
2009-03-01
Lymphomas are classified according to the World Health Organisation (WHO) classification, which defines subtypes on the basis of clinical, morphological, immunophenotypic, molecular and cytogenetic criteria. Differential diagnosis of the subtypes is sometimes difficult, especially for small B-cell lymphoma (SBCL). Standardisation of molecular genetic assays using multiple-gene expression analysis by microarrays could be a useful complement to the current diagnosis. The aim of the present study was to develop a low density DNA microarray for the analysis of 107 genes associated with B-cell non-Hodgkin lymphoma and to evaluate its performance in the diagnosis of SBCL. A predictive tool based on Fisher discriminant analysis was designed using a training set of 40 patients covering four subtypes (follicular lymphoma n = 15, mantle cell lymphoma n = 7, B-cell chronic lymphocytic leukemia n = 6 and splenic marginal zone lymphoma n = 12). A short additional preliminary analysis to gauge the accuracy of this signature was then performed on an external set of nine patients. Using this model, eight of the nine samples were classified successfully. This pilot study demonstrates that such a microarray tool may be a promising diagnostic approach for small B-cell non-Hodgkin lymphoma.
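A two-class Fisher discriminant of the kind underlying such a predictive tool can be sketched on synthetic data: project samples onto the direction that best separates the class means relative to within-class scatter, then threshold. The gene counts, sample sizes, and effect sizes below are invented; this is a stand-in for, not a reproduction of, the authors' 107-gene multi-class model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic two-class "expression profiles": 10 genes, 20 samples per class.
a = rng.normal(0.0, 1.0, size=(20, 10))
b = rng.normal(0.0, 1.0, size=(20, 10))
a[:, :3] += 2.0   # the first 3 genes distinguish subtype A (illustrative)

# Two-class Fisher discriminant: w = Sw^{-1} (mu_a - mu_b).
mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)   # pooled scatter
w = np.linalg.solve(sw, mu_a - mu_b)
threshold = w @ (mu_a + mu_b) / 2   # midpoint between projected class means

def classify(x):
    """Assign each row of x to subtype A (True) or B (False)."""
    return x @ w > threshold

train_acc = (classify(a).mean() + (~classify(b)).mean()) / 2
```

With only 40 training samples against 107 genes, the real study's external validation set plays the essential role of guarding against the overfitting such a discriminant invites.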
Detection of pathogenic Vibrio spp. in shellfish by using multiplex PCR and DNA microarrays.
Panicker, Gitika; Call, Douglas R; Krug, Melissa J; Bej, Asim K
2004-12-01
This study describes the development of a gene-specific DNA microarray coupled with multiplex PCR for the comprehensive detection of pathogenic vibrios that are natural inhabitants of warm coastal waters and shellfish. Multiplex PCR with vvh and viuB for Vibrio vulnificus, with ompU, toxR, tcpI, and hlyA for V. cholerae, and with tlh, tdh, trh, and open reading frame 8 for V. parahaemolyticus helped to ensure that total and pathogenic strains, including subtypes of the three Vibrio spp., could be detected and discriminated. For DNA microarrays, oligonucleotide probes for these targeted genes were deposited onto epoxysilane-derivatized, 12-well, Teflon-masked slides by using a MicroGrid II arrayer. Amplified PCR products were hybridized to arrays at 50°C and detected by using tyramide signal amplification with Alexa Fluor 546 fluorescent dye. Slides were imaged by using an arrayWoRx scanner. The detection sensitivity for pure cultures without enrichment was 10² to 10³ CFU/ml, and the specificity was 100%. However, 5 h of sample enrichment followed by DNA extraction with Instagene matrix and multiplex PCR with microarray hybridization resulted in the detection of 1 CFU in 1 g of oyster tissue homogenate. Thus, enrichment of the bacterial pathogens permitted higher sensitivity in compliance with the Interstate Shellfish Sanitation Conference guideline. Application of the DNA microarray methodology to natural oysters revealed the presence of V. vulnificus (100%) and V. parahaemolyticus (83%). However, V. cholerae was not detected in natural oysters. An assay involving a combination of multiplex PCR and DNA microarray hybridization would help to ensure rapid and accurate detection of pathogenic vibrios in shellfish, thereby improving the microbiological safety of shellfish for consumers.
NASA Technical Reports Server (NTRS)
Aston, Graeme; Brophy, John R.
1987-01-01
Results from a series of experiments to determine the effect of accelerator grid mount geometry on the performance of the J-series ion optics assembly are described. Three mounting schemes, two flexible and one rigid, are compared for their relative ion extraction capability over a range of total accelerating voltages. The largest ion beam current, for the maximum total voltage investigated, is shown to occur using one of the flexible grid mounting geometries. However, at lower total voltages and reduced engine input power levels, the original rigid J-series ion optics accelerator grid mounts result in marginally better grid system performance at the same cold interelectrode gap.
NASA Astrophysics Data System (ADS)
Mueller, Ulf Philipp; Wienholt, Lukas; Kleinhans, David; Cussmann, Ilka; Bunke, Wolf-Dieter; Pleßmann, Guido; Wendiggensen, Jochen
2018-02-01
There are several power grid modelling approaches suitable for simulations in the field of power grid planning. The restrictive policies of grid operators, regulators and research institutes concerning their original data and models lead to an increased interest in open source approaches to grid models based on open data. By including all voltage levels between 60 kV (high voltage) and 380 kV (extra high voltage), we dissolve the common distinction between transmission and distribution grid in energy system models and utilize a single, integrated model instead. An open data set, primarily for Germany, which can be used for non-linear, linear and linear-optimal power flow methods, was developed. This data set consists of an electrically parameterised grid topology as well as allocated generation and demand characteristics for present and future scenarios at high spatial and temporal resolution. The usability of the grid model was demonstrated by performing exemplary power flow optimizations. Based on a marginal-cost-driven power plant dispatch, subject to grid restrictions, congested power lines were identified. Continuous validation of the model is necessary in order to reliably model storage and grid expansion in progressing research.
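As an illustration of the linear power flow methods such a data set supports, the DC approximation can be sketched in a few lines of numpy. The 3-bus network, reactances, and injections below are toy assumptions for illustration, not values from the German grid model described above.

```python
import numpy as np

# Branches as (from_bus, to_bus, reactance x in p.u.); bus 0 is the slack bus.
branches = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]
P = np.array([1.5, -0.5, -1.0])   # net injections in p.u.; they sum to zero

# Assemble the nodal susceptance matrix of the DC approximation.
n = 3
B = np.zeros((n, n))
for i, j, x in branches:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# Fix the slack angle to zero and solve the reduced system for the rest.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow from angle differences over reactances.
flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in branches}
```

The slack bus absorbs the balancing power; the same assembly scales directly to larger branch lists read from an open topology data set.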
Unc, Adrian; Zurek, Ludek; Peterson, Greg; Narayanan, Sanjeev; Springthorpe, Susan V; Sattar, Syed A
2012-01-01
Potential risks associated with impaired surface water quality have commonly been evaluated by indirect description of potential sources using various fecal microbial indicators and derived source-tracking methods. These approaches are valuable for assessing and monitoring the impacts of land-use changes and changes in management practices at the source of contamination. A more detailed evaluation of putative etiologically significant genetic determinants can add value to these assessments. We evaluated the utility of using a microarray that integrates virulence genes with antibiotic and heavy metal resistance genes to describe and discriminate among spatially and seasonally distinct water samples from an agricultural watershed creek in Eastern Ontario. Because microarray signals may be analyzed as binomial distributions, the significance of ambiguous signals can be easily evaluated by using available off-the-shelf software. The FAMD software was used to evaluate uncertainties in the signal data. Analysis of multilocus fingerprinting data sets containing missing data has shown that, for the tested system, any variability in microarray signals had a marginal effect on data interpretation. For the tested watershed, results suggest that in general the wet fall season increased the downstream detection of virulence and resistance genes. Thus, the tested microarray technique has the potential to rapidly describe the quality of surface waters and thus to provide a qualitative tool to augment quantitative microbial risk assessments.
Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X
2010-05-01
Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
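The contrast between RFE's weight-magnitude criterion and an explicit margin-based criterion can be sketched for the linear-kernel case. The toy data set and the single-step subgradient trainer below are illustrative assumptions, not the authors' MFE implementation.

```python
import numpy as np

# Toy separable data: the label depends only on features 0 and 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

# Train a linear SVM by subgradient descent on the regularized hinge loss.
w, b, lam, lr = np.zeros(6), 0.0, 0.01, 0.1
for _ in range(2000):
    viol = y * (X @ w + b) < 1                      # margin violators
    w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X))
    b -= lr * (-y[viol].sum() / len(X))

# RFE criterion: drop the feature with the smallest squared weight.
rfe_drop = int(np.argmin(w ** 2))

# Margin-based criterion: drop the feature whose elimination leaves the
# largest worst-case geometric margin over the training set.
def min_margin_without(j):
    w_j = w.copy()
    w_j[j] = 0.0                                    # eliminate feature j
    return np.min(y * (X @ w_j + b)) / np.linalg.norm(w_j)

mfe_drop = int(np.argmax([min_margin_without(j) for j in range(6)]))
```

For a linear kernel the two criteria often agree; the abstract's point is that they diverge for nonlinear kernels, where the squared weight-vector norm need not decrease as features are removed.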
Procedure for locating 10 km UTM grid on Alabama County general highway maps
NASA Technical Reports Server (NTRS)
Paludan, C. T. N.
1975-01-01
Each county highway map has a geographic grid of degrees and tens of minutes in both longitude and latitude in the margins and within the map as intersection crosses. These will be used to locate the Universal Transverse Mercator (UTM) grid at 10 km intervals. Since the maps used may have stretched or shrunk in height and/or width, interpolation should be done between the 10 min intersections when possible. A table of UTM coordinates of 10 min intersections is required and included. In Alabama, all eastings are referred to a false easting of 500,000 m at 87 deg W longitude (central meridian, CM).
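The interpolation step described above reduces to simple linear interpolation between graticule intersections whose UTM coordinates are tabulated. The sheet positions and easting values below are illustrative assumptions, not entries from the report's table.

```python
def interp_position(utm_target, utm_a, pos_a, utm_b, pos_b):
    """Linearly interpolate the sheet position (e.g. in mm from the map
    margin) of a UTM coordinate lying between two 10-minute graticule
    intersections with known sheet positions. Per-interval interpolation
    compensates for local stretching or shrinking of the paper map."""
    t = (utm_target - utm_a) / (utm_b - utm_a)
    return pos_a + t * (pos_b - pos_a)

# Hypothetical example: adjacent intersections at eastings 480,000 m and
# 497,500 m fall at 102.0 mm and 161.3 mm from the sheet's left edge;
# place the 490,000 m UTM grid line between them.
x_mm = interp_position(490_000, 480_000, 102.0, 497_500, 161.3)
```

The same routine applies along the northing axis, so each 10 km grid tick is located independently within its own 10-minute cell.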
Towards Effective Clustering Techniques for the Analysis of Electric Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh
2013-11-30
Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data with implications for several domains including power systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, N.; Kintner-Meyer, M.; Wu, D.
The 2016 SECURE Water Act report’s natural water availability benchmark, combined with the 2010 level of water demand from an integrated assessment model, is used as input to drive a large-scale water management model. The regulated flow at hydropower plants and thermoelectric plants in the Western U.S. electricity grid (WECC) is translated into potential hydropower generation and generation capacity constraints. The impact on reliability (unserved energy, reserve margin) and cost (production cost, carbon emissions) of water constraints on 2010-level WECC power system operations is assessed using an electricity production cost model (PCM). Use of the PCM reveals the changes in generation dispatch that reflect the inter-regional interdependencies in water-constrained generation and the ability to use other generation resources to meet all electricity loads in the WECC. August grid operational benchmarks show a range of sensitivity in production cost (-8 to +11%) and carbon emissions (-7 to 11%). The reference reserve margin threshold of 15% above peak load is maintained in the scenarios analyzed, but in 5 out of 55 years unserved energy is observed when normal operations are maintained. There is 1 chance in 10 that a year will demonstrate unserved energy in August, which defines the system’s historical performance threshold to support impact, vulnerability, and adaptation analysis. For seasonal and longer term planning, i.e., multi-year drought, we demonstrate how the Water Scarcity Grid Impact Factor and climate oscillations (ENSO, PDO) can be used to plan for joint water-electricity management to maintain grid reliability.
NASA Astrophysics Data System (ADS)
Sohnen, Julia Meagher
This thesis explores the implications of the increased adoption of plug-in electric vehicles in California through its effect on the operation of the state's electric grid. The well-to-wheels emissions associated with driving an electric vehicle depend on the resource mix of the electricity grid used to charge the battery. We present a new least-cost dispatch model, EDGE-NET, for the California electricity grid, consisting of interconnected sub-regions that encompass the six largest state utilities, which can be used to evaluate the impact of growing electric vehicle demand on existing power grid infrastructure and energy resources. This model considers the spatial and temporal dynamics of energy demand and supply when determining the regional impacts of additional charging profiles on the current electricity network. Model simulation runs for one year show generation and transmission congestion to be reasonably similar to historical data. Model simulation results show that average emissions and system costs associated with electricity generation vary significantly by time of day, season, and location. Marginal cost and emissions also exhibit seasonal and diurnal differences, but show less spatial variation. Sensitivity analysis of demand shows that the relative changes to average emissions and system costs respond asymmetrically to increases and decreases in electricity demand. These results depend on the grid mix at the time and the marginal power plant type. In minimizing total system cost, the model will choose to dispatch the lowest-cost resource to meet additional vehicle demand, regardless of location, as long as transmission capacity is available. Location of electric vehicle charging has a small effect on the marginal greenhouse gas emissions associated with additional generation, due to electricity losses in the transmission grid.
We use a geographically explicit charging assessment model for California to develop and compare the effects of two charging profiles. Comparison of these two basic scenarios points to greenhouse gas emissions and operational cost savings from delayed charging of electric vehicles. Vehicle charging simulations confirm that plug-in electric vehicles alone are unlikely to require additional generation or transmission infrastructure. EDGE-NET was successfully benchmarked against historical data for the present grid, but additional work is required to expand the model for future scenario evaluation. We discuss how the model might be adapted for high penetrations of variable renewable energy resources and the use of grid storage. Renewable resources such as wind and solar in California vary significantly by time of day, season, and location. However, the combination of multiple resources from different geographic regions through transmission grid interconnection is expected to help mitigate the impacts of variability. EDGE-NET can evaluate the interaction of supply and demand through the existing transmission infrastructure and can identify critical network bottlenecks or areas for expansion. For this reason, EDGE-NET will be an important tool for evaluating energy policy scenarios.
NASA Astrophysics Data System (ADS)
Wibig, Joanna; Kotlarski, Sven; Maraun, Douglas; Soares, Pedro; Jaczewski, Adam; Czernecki, Bartosz; Gutierrez, Jose; Pongracz, Rita; Bartholy, Judit
2016-04-01
The aim of the paper is to compare the bias of selected ERA-Interim-driven RCM projections when evaluated against gridded observation data (regridded to the same resolution as the considered RCM output) with those evaluated against station data, in order to isolate the representativeness issue from the downscaling performance. The comparison was carried out within the experiments of the COST Action VALUE, so the same data period (1979-2008) and the same set of 85 stations were used. As gridded observations, the E-OBS data from the grid points closest to the selected stations were used. The comparison was made for daily precipitation totals as well as daily minimum, maximum and mean temperature. Numerous indices were analysed to assess representativeness issues for marginal and temporal aspects. Relevant marginal aspects are described by average and extreme value distributions, whereas temporal aspects are represented by seasonality and the length of extreme spells. The set of indices used in VALUE experiment 1 was calculated for each dataset (stations, E-OBS, selected RCM outputs), and biases of the RCM outputs against station and E-OBS data were obtained and compared. Those with the most significant changes are analysed in detail.
Chan, Alvin Y; Kharrat, Sohayla; Lundeen, Kelly; Mnatsakanyan, Lilit; Sazgar, Mona; Sen-Gupta, Indranil; Lin, Jack J; Hsu, Frank P K; Vadera, Sumeet
2017-06-01
Lowering the length of stay (LOS) is thought to potentially decrease hospital costs and is a metric commonly used to manage capacity. Patients with epilepsy undergoing intracranial electrode monitoring may have longer LOS because the time to seizure is difficult to predict or control. This study investigates the economic implications of increased LOS in patients undergoing invasive electrode monitoring for epilepsy. We retrospectively collected and analyzed data for 76 patients who underwent invasive monitoring with either subdural grid (SDG) implantation or stereoelectroencephalography (SEEG) over 2 years at our institution. Data points collected included invasive electrode type, LOS, profit margin, contribution margin, insurance type, and complication rates. LOS correlated positively with both profit and contribution margins, meaning that as LOS increased, both the profit and contribution margins rose, and there was a low rate of complications in this patient group. This relationship was seen across a variety of insurance providers. These data suggest that LOS may not be the best metric to assess invasive monitoring patients (i.e., SEEG or SDG), and increased LOS does not necessarily equate with lower or negative institutional financial gain. Further research into LOS should focus on specific specialties, as each may differ in terms of financial implications.
NASA Astrophysics Data System (ADS)
Olivares, M. A.; Gonzalez Cabrera, J. M., Sr.; Moreno, R.
2016-12-01
Operation of hydropower reservoirs in Chile is prescribed by an Independent Power System Operator. This study proposes a methodology that integrates power grid operations planning with basin-scale multi-use reservoir operations planning. The aim is to efficiently manage a multi-purpose reservoir in which hydroelectric generation competes with other water uses, most notably irrigation. Hydropower and irrigation are competing water uses due to a seasonality mismatch. Currently, the operation of multi-purpose reservoirs with substantial power capacity is prescribed as the result of a grid-wide cost-minimization model which takes irrigation requirements as constraints. We propose advancing the economic co-optimization of reservoir water use for irrigation and hydropower at the basin level by explicitly introducing the economic value of water for irrigation, represented by a demand function for irrigation water. The proposed methodology uses the solution of a long-term grid-wide operations planning model, a stochastic dual dynamic program (SDDP), to obtain the marginal benefit function for water use in hydropower. This marginal benefit corresponds to the energy price in the power grid as a function of the water availability in the reservoir and the hydrologic scenarios. This function captures technical and economic aspects of the operation of the hydropower reservoir in the power grid and is generated from the dual variable of the power-balance constraint, the optimal reservoir operation and the hydrologic scenarios used in the SDDP. The economic values of water for irrigation and hydropower are then integrated into a basin-scale stochastic dynamic program, from which stored-water value functions are derived. These value functions are then used to re-optimize reservoir operations under several inflow scenarios.
Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection
NASA Astrophysics Data System (ADS)
Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd
2015-02-01
Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, and thus it cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator of the covariance matrix into an improved, biased one. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
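A minimal numpy sketch of the idea described above, assuming a fixed shrinkage weight and a 10% trimming fraction; these are illustrative choices, and the paper's actual estimator and parameters may differ.

```python
import numpy as np

def trimmed_mean(X, frac=0.1):
    """Per-variable mean after trimming frac of the values at each tail
    (robust centering that damps the influence of outliers)."""
    Xs = np.sort(X, axis=0)
    k = int(frac * X.shape[0])
    return Xs[k:X.shape[0] - k].mean(axis=0)

def shrinkage_cov(X, lam=0.2, frac=0.1):
    """Shrink the (singular, when p > n) sample covariance toward a scaled
    identity target so the estimate is invertible for a Hotelling's
    T^2-type statistic. lam is an assumed fixed shrinkage weight."""
    n, p = X.shape
    Xc = X - trimmed_mean(X, frac)           # robust centering
    S = Xc.T @ Xc / (n - 1)                  # rank-deficient when p > n
    target = np.trace(S) / p * np.eye(p)     # scaled identity target
    return (1 - lam) * S + lam * target      # positive definite for lam > 0

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 50))                # p >> n, as in microarray data
S_shrunk = shrinkage_cov(X)
```

Every eigenvalue of the shrunken matrix is at least `lam * trace(S) / p`, so the inverse needed by the T² statistic always exists even though the raw sample covariance is singular.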
Characterization of Slosh Damping for Ortho-Grid and Iso-Grid Internal Tank Structures
NASA Technical Reports Server (NTRS)
Westra, Douglas G.; Sansone, Marco D.; Eberhart, Chad J.; West, Jeffrey S.
2016-01-01
Grid-stiffened tank structures such as Ortho-Grid and Iso-Grid are widely used in cryogenic tanks for providing stiffening to the tank while reducing mass, compared to tank walls of constant cross-section. If the structure is internal to the tank, it will positively affect the fluid dynamic behavior of the liquid propellant in regard to fluid slosh damping. As NASA and commercial companies endeavor to explore the solar system, vehicles will by necessity become more mass efficient, and design margin will be reduced where possible. Therefore, if the damping characteristics of the Ortho-Grid and Iso-Grid structures are understood, their positive damping effect can be taken into account in the systems design process. Historically, damping by internal structures has been characterized by rules of thumb, and for Ortho-Grid, empirical design tools intended for slosh baffles of much larger cross-section have been used. There is little or no information available to characterize the slosh behavior of Iso-Grid internal structure. Therefore, to take advantage of these structures for their positive damping effects, there is much need for additional data and tools to characterize them. Recently, the NASA Marshall Space Flight Center conducted both sub-scale testing and computational fluid dynamics (CFD) simulations of slosh damping for cylindrical Ortho-Grid and Iso-Grid tanks containing water. Enhanced grid meshing techniques were applied to the geometrically detailed and complex Ortho-Grid and Iso-Grid structures. The Loci-STREAM CFD program with the Volume of Fluid Method module for tracking and locating the water-air fluid interface was used to conduct the simulations. The CFD simulations were validated with the test data, and new empirical models for predicting damping and frequency of Ortho-Grid and Iso-Grid structures were generated.
Correlations and clustering in wholesale electricity markets
Cui, Tianyu; Caravelli, Francesco; Ududec, Cozmin
2017-11-24
We study the structure of locational marginal prices in day-ahead and real-time wholesale electricity markets. In particular, we consider the case of two North American markets and show that the price correlations contain information on the locational structure of the grid. We study various clustering methods and introduce a type of correlation function based on event synchronization for spiky time series, and another based on string correlations of location names provided by the markets. As a result, this allows us to reconstruct aspects of the locational structure of the grid.
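The core idea, that nodal price series which co-move reveal the locational structure of the grid, can be sketched with synthetic data. The two-region price model and the 0.5 correlation-distance threshold below are assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
base = rng.normal(size=(2, T))               # one common price factor per region
# Nodes 0-4 follow region A, nodes 5-9 follow region B, plus local noise.
prices = np.vstack([base[i // 5] + 0.3 * rng.normal(size=T) for i in range(10)])

corr = np.corrcoef(prices)                   # nodal price correlation matrix
dist = 1.0 - corr                            # correlation distance

def components(adj):
    """Connected components of a boolean adjacency matrix (simple DFS)."""
    seen, comps = set(), []
    for s in range(len(adj)):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(len(adj)) if adj[u][v] and v not in comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

# Cluster by linking nodes whose correlation distance falls below a threshold.
clusters = components(dist < 0.5)
```

With strongly correlated prices inside each region and near-zero correlation across regions, the threshold graph splits into the two regional clusters, mirroring how price correlations carry locational information.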
NOVEL MEMBRANE PROCESS TO UTILIZE DILUTE METHANE STREAMS - PHASE II
Reactive Power Pricing Model Considering the Randomness of Wind Power Output
NASA Astrophysics Data System (ADS)
Dai, Zhong; Wu, Zhou
2018-01-01
With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is gradually highlighted. Meanwhile, power market reform puts forward higher requirements for the reasonable pricing of reactive power service. On this basis, the article combines an optimal power flow model that considers wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of the present cost allocation method and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with minimal integrated cost under wind power integration, under the premise of guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and the stochastic simulation of wind power outputs, the article compares the results of the model pricing and marginal cost pricing, showing that the model is accurate and effective.
caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts
2011-01-01
Background: In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike-in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu. Results: We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that happens when any damaged chips are processed without caCORRECT; (2) when used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; (3) biomarkers selected from artifactual microarray data which have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike-in and PCR validation experiments.
Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with potential clinical utility, PRKAB1 and NNMT. Conclusions: caCORRECT is shown to improve the accuracy of gene expression and the reproducibility of experimental results in clinical application. This study suggests that caCORRECT will be useful to clean up possible artifacts in new as well as archived microarray data. PMID:21957981
Wang, Lizhu; Riseng, Catherine M.; Mason, Lacey; Werhrly, Kevin; Rutherford, Edward; McKenna, James E.; Castiglione, Chris; Johnson, Lucinda B.; Infante, Dana M.; Sowa, Scott P.; Robertson, Mike; Schaeffer, Jeff; Khoury, Mary; Gaiot, John; Hollenhurst, Tom; Brooks, Colin N.; Coscarelli, Mark
2015-01-01
Managing the world's largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a spatial classification framework and database — Great Lakes Aquatic Habitat Framework (GLAHF). GLAHF consists of catchments, coastal terrestrial, coastal margin, nearshore, and offshore zones that encompass the entire Great Lakes Basin. The catchments captured in the database as river pour points or coastline segments are attributed with data known to influence physicochemical and biological characteristics of the lakes from the catchments. The coastal terrestrial zone consists of 30-m grid cells attributed with data from the terrestrial region that has direct connection with the lakes. The coastal margin and nearshore zones consist of 30-m grid cells attributed with data describing the coastline conditions, coastal human disturbances, and moderately to highly variable physicochemical and biological characteristics. The offshore zone consists of 1.8-km grid cells attributed with data that are spatially less variable compared with the other aquatic zones. These spatial classification zones and their associated data are nested within lake sub-basins and political boundaries and allow the synthesis of information from grid cells to classification zones, within and among political boundaries, lake sub-basins, Great Lakes, or within the entire Great Lakes Basin. This spatially structured database could help the development of basin-wide management plans, prioritize locations for funding and specific management actions, track protection and restoration progress, and conduct research for science-based decision making.
CLOSING THE BIODIESEL LOOP: SELF SUSTAINING COMMUNITY BASED BIODIESEL PRODUCTION
NASA Astrophysics Data System (ADS)
Meyer, B.; Chulliat, A.; Saltus, R.
2017-12-01
The Earth Magnetic Anomaly Grid at 2 arc min resolution version 3, EMAG2v3, combines marine and airborne trackline observations, satellite data, and magnetic observatory data to map the location, intensity, and extent of lithospheric magnetic anomalies. EMAG2v3 includes over 50 million new data points added to NCEI's Geophysical Database System (GEODAS) in recent years. The new grid relies only on observed data, and does not utilize a priori geologic structure or ocean-age information. Comparing this grid to other global magnetic anomaly compilations (e.g., EMAG2 and WDMAM), we can see that the inclusion of a priori ocean-age patterns forces an artificial linear pattern to the grid; the data-only approach allows for greater complexity in representing the evolution along oceanic spreading ridges and continental margins. EMAG2v3 also makes use of the satellite-derived lithospheric field model MF7 in order to accurately represent anomalies with wavelengths greater than 300 km and to create smooth grid merging boundaries. The heterogeneous distribution of errors in the observations used in compiling the EMAG2v3 was explored, and is reported in the final distributed grid. This grid is delivered at both 4 km continuous altitude above WGS84, as well as at sea level for all oceanic and coastal regions.
The method of a joint intraday security check system based on cloud computing
NASA Astrophysics Data System (ADS)
Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng
2017-01-01
The intraday security check is the core application in the dispatching control system. The existing security check calculation uses only the dispatch center's local model and data as its functional margin. This paper introduces the design and implementation of an all-grid intraday joint security check system based on cloud computing. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment with the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.
Small Technology--Big Impact. Practical Options for Development
ERIC Educational Resources Information Center
Academy for Educational Development, 2009
2009-01-01
Technology has dramatically changed the world--now almost anyone can "move" at Internet-speed; people who were marginalized are able to find information on acquiring micro-loans to start businesses, and villages previously unconnected to the telecommunications grid now have affordable cell phone access. As technology becomes easier to…
Rebholz-Schuhman, Dietrich; Cameron, Graham; Clark, Dominic; van Mulligen, Erik; Coatrieux, Jean-Louis; Del Hoyo Barbolla, Eva; Martin-Sanchez, Fernando; Milanesi, Luciano; Porro, Ivan; Beltrame, Francesco; Tollis, Ioannis; Van der Lei, Johan
2007-03-08
The SYMBIOmatics Specific Support Action (SSA) is "an information gathering and dissemination activity" that seeks "to identify synergies between the bioinformatics and the medical informatics" domain to improve collaborative progress between both domains (ref. to http://www.symbiomatics.org). As part of the project, experts in both research fields will be identified and approached through a survey. To provide input to the survey, the scientific literature was analysed to extract topics relevant to both medical informatics and bioinformatics. This paper presents results of a systematic analysis of the scientific literature from medical informatics research and bioinformatics research. In the analysis, pairs of words (bigrams) from the leading bioinformatics and medical informatics journals were used as indications of existing and emerging technologies and topics over the periods 2000-2005 ("recent") and 1990-1990 ("past"). We identified emerging topics that were equally important to bioinformatics and medical informatics in recent years, such as microarray experiments, ontologies, open source, text mining and support vector machines. Emerging topics that evolved only in bioinformatics were systems biology, protein interaction networks and statistical methods for microarray analyses, whereas emerging topics in medical informatics were grid technology and tissue microarrays. We conclude that although both fields have their own specific domains of interest, they share common technological developments that tend to be initiated by new developments in biotechnology and computer science.
Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.
Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M
2006-01-01
The vast amount and complexity of data generated in genomic research imply that new, dedicated and powerful computational tools need to be developed to meet their analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-sized genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data mining are somewhat restricted by the computational power behind each particular installation. In order to offer enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up the Blast searches to obtain fast results for large datasets. Many efforts have been made in the literature concerning the speeding up of Blast searches, but few of them deal with the use of large heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through mpiBLAST. The software architecture is based on the WSRF standard.
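The dispatch idea described in the abstract above, sizing each sub-job by the number of sequences before farming out the BLAST runs, reduces in its simplest form to chunking the dataset across workers. A hedged sketch (the function name and the even-split policy are assumptions for illustration, not the B2G Grid service's actual interface):

```python
def split_requests(sequences, n_workers):
    """Partition a sequence dataset into near-equal chunks, one per worker."""
    k, r = divmod(len(sequences), n_workers)
    chunks, start = [], 0
    for i in range(n_workers):
        size = k + (1 if i < r else 0)   # spread the remainder over the first workers
        chunks.append(sequences[start:start + size])
        start += size
    return chunks

# 10 sequences over 3 workers: sizes differ by at most one,
# and every sequence lands in exactly one chunk.
jobs = split_requests([f"seq{i}" for i in range(10)], 3)
```

In a real deployment each chunk would become one mpiBLAST sub-job; balanced chunk sizes are what give the load balancing the abstract mentions.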
Performance Evaluation of the Prototype Model NEXT Ion Thruster
NASA Technical Reports Server (NTRS)
Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.
2008-01-01
The performance testing results of the first prototype model NEXT ion engine, PM1, are presented. The NEXT program has developed the next generation ion propulsion system to enhance and enable Discovery, New Frontiers, and Flagship-type NASA missions. The PM1 thruster exhibits operational behavior consistent with its predecessors, the engineering model thrusters, with substantial mass savings, enhanced thermal margins, and design improvements for environmental testing compliance. The dry mass of PM1 is 12.7 kg. Modifications made in the thruster design have resulted in improved performance and operating margins, as anticipated. PM1 beginning-of-life performance satisfies all of the electric propulsion thruster mission-derived technical requirements. It demonstrates a wide throttling range by processing input power levels from 0.5 to 6.9 kW. At 6.9 kW, the PM1 thruster demonstrates a specific impulse of 4190 s, 237 mN of thrust, and a thrust efficiency of 0.71. The flat beam profile (flatness parameters vary from 0.66 at low power to 0.88 at full power) and advanced ion optics reduce localized accelerator grid erosion and increase the margins for electron backstreaming, impingement-limited voltage, and screen grid ion transparency. The thruster throughput capability is predicted to exceed 750 kg of xenon, equivalent to 36,500 hr of continuous operation at the full-power operating condition.
Vigneron, Adrien; Cruaud, Perrine; Roussel, Erwan G.; Pignet, Patricia; Caprais, Jean-Claude; Callac, Nolwenn; Ciobanu, Maria-Cristina; Godfroy, Anne; Cragg, Barry A.; Parkes, John R.; Van Nostrand, Joy D.; He, Zhili; Zhou, Jizhong; Toffin, Laurent
2014-01-01
Subsurface sediments of the Sonora Margin (Guaymas Basin), located in proximity to active cold seep sites, were explored. The taxonomic and functional diversity of bacterial and archaeal communities was investigated from 1 to 10 meters below the seafloor. Microbial community structure and the abundance and distribution of dominant populations were assessed using complementary molecular approaches (Ribosomal Intergenic Spacer Analysis, 16S rRNA libraries and quantitative PCR with an extensive primer set) and correlated to comprehensive geochemical data. Moreover, the metabolic potentials and functional traits of the microbial community were identified using the GeoChip functional gene microarray and metabolic rates. The active microbial community structure in the Sonora Margin sediments was related to deep subsurface ecosystems (Marine Benthic Groups B and D, Miscellaneous Crenarchaeotal Group, Chloroflexi and Candidate divisions) and remained relatively similar throughout the sediment section, despite defined biogeochemical gradients. However, relative abundances of dominant bacterial and archaeal lineages were significantly correlated with organic carbon quantity and origin. Consistently, metabolic pathways for the degradation and assimilation of this organic carbon, as well as genetic potentials for the transformation of detrital organic matter, hydrocarbons and recalcitrant substrates, were detected, suggesting that chemoorganotrophic microorganisms may dominate the microbial community of the Sonora Margin subsurface sediments. PMID:25099369
Control system and method for a universal power conditioning system
Lai, Jih-Sheng; Park, Sung Yeul; Chen, Chien-Liang
2014-09-02
A new current loop control method is proposed for a single-phase grid-tie power conditioning system that can be used in either a standalone or a grid-tie mode. This type of inverter utilizes an inductor-capacitor-inductor (LCL) filter as the interface between the inverter and the utility grid. The first inductor-capacitor (LC) set can be used in the standalone mode, and the complete LCL can be used for the grid-tie mode. A new admittance compensation technique is proposed for the controller design to avoid a low stability margin while maintaining sufficient gain at the fundamental frequency. The proposed current loop controller and admittance compensation technique have been simulated and tested. Simulation results indicate that without admittance path compensation, the current loop controller output duty cycle is largely offset by an undesired admittance path. At the initial simulation cycle, power flow may be erratically fed back to the inverter, causing catastrophic failure. With admittance path compensation, the output power shows a steady-state offset that matches the design value. Experimental results show that the inverter is capable of both standalone and grid-tie connection modes using the LCL filter configuration.
Weniger, Markus; Engelmann, Julia C; Schultz, Jörg
2007-01-01
Background Regulation of gene expression is relevant to many areas of biology and medicine, including the study of treatments, diseases, and developmental stages. Microarrays can be used to measure the expression levels of thousands of mRNAs at the same time, allowing insight into, or comparison of, different cellular conditions. The data derived from microarray experiments are high-dimensional and often noisy, and interpretation of the results can become intricate. Although programs for the statistical analysis of microarray data exist, most of them lack an integration of analysis results and biological interpretation. Results We have developed GEPAT, Genome Expression Pathway Analysis Tool, offering analysis of gene expression data in a genomic, proteomic and metabolic context. We provide an integration of statistical methods for data import and data analysis together with a biological interpretation for subsets of probes or single probes on the chip. GEPAT imports various types of oligonucleotide and cDNA array data formats. Different normalization methods can be applied to the data; afterwards, data annotation is performed. After import, GEPAT offers various statistical data analysis methods, such as hierarchical, k-means and PCA clustering, a linear-model-based t-test, and chromosomal profile comparison. The results of the analysis can be interpreted by enrichment of biological terms, pathway analysis or interaction networks. Different biological databases are included, providing various kinds of information for each probe on the chip. GEPAT does not impose a linear workflow, but allows the usage of any subset of probes and samples as a start for a new data analysis. GEPAT relies on established data analysis packages, offers a modular approach for easy extension, and can be run on a computer grid to serve a large number of users. It is freely available under the LGPL open source license for academic and commercial users at .
Conclusion GEPAT is a modular, scalable and professional-grade software tool integrating analysis and interpretation of microarray gene expression data. An installation available for academic users can be found at . PMID:17543125
An Air-Ocean Coupled Nowcast/Forecast System for the East Asian Marginal Seas
2000-09-12
external factors affecting the regional oceanography. We use a rectilinear grid with horizontal spacing of 0.25° by 0.25° and 23 nonuniform vertical levels. The model uses realistic bathymetry data from the Naval Oceanographic Office Digital Bathymetry Data Base with 5 minute resolution (DBDB5).
Aerodynamic simulation on massively parallel systems
NASA Technical Reports Server (NTRS)
Haeuser, Jochem; Simon, Horst D.
1992-01-01
This paper briefly addresses the computational requirements for the analysis of complete configurations of aircraft and spacecraft currently under design for advanced transportation in commercial applications as well as in space flight. The discussion clearly shows that massively parallel systems are the only alternative that is both cost effective and able to provide the TeraFlops needed to satisfy the narrow design margins of modern vehicles. It is assumed that the solution of the governing physical equations, i.e., the Navier-Stokes equations, which may be complemented by chemistry and turbulence models, is done on multiblock grids. This technique is situated between the fully structured approach of classical boundary-fitted grids and the fully unstructured tetrahedral grids. A fully structured grid best represents the flow physics, while an unstructured grid gives the best geometrical flexibility. The multiblock grid employed is structured within a block, but completely unstructured on the block level. While a completely unstructured grid is not straightforward to parallelize, the above-mentioned multiblock grid is inherently parallel, in particular for multiple instruction multiple datastream (MIMD) machines. In this paper, guidelines are provided for setting up or modifying an existing sequential code so that a direct parallelization on a massively parallel system is possible. Results are presented for three parallel systems, namely the Intel hypercube, the Ncube hypercube, and the FPS 500 system. Some preliminary results for an 8K CM2 machine are also mentioned. The code run is the two-dimensional grid generation module of Grid, which is a general two- and three-dimensional grid generation code for complex geometries. A system of nonlinear Poisson equations is solved. This code is also a good test case for complex fluid dynamics codes, since the same data structures are used.
All systems provided good speedups, but message-passing MIMD systems seem to be best suited for large multiblock applications.
NASA Astrophysics Data System (ADS)
Eccles, Jennifer D.; White, Robert S.; Christie, Philip A. F.
2011-07-01
Imaging challenges caused by highly attenuative flood basalt sequences have resulted in the understanding of volcanic rifted continental margins lagging behind that of non-volcanic rifted and convergent margins. Massive volcanism occurred during break-up at 70% of the passive margins bordering the Atlantic Ocean, the causes and dynamics of which are still debated. This paper shows results from traveltime tomography of compressional and converted shear wave arrivals recorded on 170 four-component ocean bottom seismometers along two North Atlantic continental margin profiles. This traveltime tomography was performed using two different approaches. The first, a flexible layer-based parameterisation, enables the quality control of traveltime picks and investigation of the crustal structure. The second, with a regularised grid-based parameterisation, requires correction of converted shear wave traveltimes to effective symmetric raypaths and allows exploration of the model space via Monte Carlo analyses. The velocity models indicate high lower-crustal velocities and sharp transitions in both velocity and Vp/Vs ratios across the continent-ocean transition. The velocities are consistent with established mixing trends between felsic continental crust and high magnesium mafic rock on both margins. Interpretation of the high quality seismic reflection profile on the Faroes margin confirms that this mixing is through crustal intrusion. Converted shear wave data also provide constraints on the sub-basalt lithology on the Faroes margin, which is interpreted as a pre-break-up Mesozoic to Paleocene sedimentary system intruded by sills.
Online Analysis of Wind and Solar Part II: Transmission Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Etingov, Pavel V.; Ma, Jian
2012-01-31
To facilitate wider penetration of renewable resources without compromising system reliability, given the concerns arising from the limited predictability of intermittent renewable resources, a tool for use by California Independent System Operator (CAISO) power grid operators was developed by Pacific Northwest National Laboratory (PNNL) in conjunction with CAISO, with funding from the California Energy Commission. The tool analyzes and displays the impacts of uncertainties in forecasts of loads and renewable generation on: (1) congestion, (2) voltage and transient stability margins, and (3) voltage reductions and reactive power margins. The impacts are analyzed in the base case and under user-specified contingencies. A prototype of the tool has been developed and implemented in software.
Image microarrays (IMA): Digital pathology's missing tool
Hipp, Jason; Cheng, Jerome; Pantanowitz, Liron; Hewitt, Stephen; Yagi, Yukako; Monaco, James; Madabhushi, Anant; Rodriguez-canales, Jaime; Hanson, Jeffrey; Roy-Chowdhuri, Sinchita; Filie, Armando C.; Feldman, Michael D.; Tomaszewski, John E.; Shih, Natalie NC.; Brodsky, Victor; Giaccone, Giuseppe; Emmert-Buck, Michael R.; Balis, Ulysses J.
2011-01-01
Introduction: The increasing availability of whole slide imaging (WSI) data sets (digital slides) from glass slides offers new opportunities for the development of computer-aided diagnostic (CAD) algorithms. With the all-digital pathology workflow that these data sets will enable in the near future, literally millions of digital slides will be generated and stored. Consequently, the field in general, and pathologists specifically, will need tools to help extract actionable information from this new and vast collective repository. Methods: To address this limitation, we designed and implemented a tool (dCORE) to enable the systematic capture of image tiles with constrained size and resolution that contain desired histopathologic features. Results: In this communication, we describe a user-friendly tool that will enable pathologists to mine digital slide archives to create image microarrays (IMAs). IMAs are to digital slides as tissue microarrays (TMAs) are to cell blocks. Thus, a single digital slide could be transformed into an array of hundreds to thousands of high quality digital images, each containing key diagnostic morphologies and appropriate controls. Current manual digital image cut-and-paste methods that allow for the creation of a grid of images (such as an IMA) of matching resolutions are tedious. Conclusion: The ability to create IMAs representing hundreds to thousands of vetted morphologic features has numerous applications in education, proficiency testing, consensus case review, and research. Lastly, in a manner analogous to the way conventional TMA technology has significantly accelerated in situ studies of tissue specimens, the use of IMAs has similar potential to significantly accelerate CAD algorithm development. PMID:22200030
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jialin Frank; Martínez, Maria Gabriela; Anderson, C Lindsay
This work presents a preliminary analysis of the impact of a grid-connected microgrid on the transmission network of the power system. The locational marginal prices of the power system are used to strategically place the microgrid to avoid congestion problems. In addition, a Monte Carlo simulation approach is implemented to confirm that network congestion can be attenuated if appropriate price-based signals are set to define the import and export dynamics between the two systems.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
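The interpolation-uncertainty idea in the abstract above can be sketched with ordinary Gaussian-process regression: the posterior variance is near zero at the base grid points and grows between them. The following is a minimal NumPy illustration of that behavior, not the authors' implementation; the squared-exponential kernel, length scale, and sample values are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_grid, y_grid, x_query, length_scale=1.0, noise=1e-6):
    """Posterior mean and variance of a GP conditioned on grid samples.

    The posterior variance quantifies interpolation uncertainty: it is
    near zero at the grid points and grows between them.
    """
    K = rbf_kernel(x_grid, x_grid, length_scale) + noise * np.eye(len(x_grid))
    Ks = rbf_kernel(x_query, x_grid, length_scale)
    Kss = rbf_kernel(x_query, x_query, length_scale)
    alpha = np.linalg.solve(K, y_grid)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x_grid = np.arange(0.0, 5.0, 1.0)    # base grid
y_grid = np.sin(x_grid)              # sampled intensities
x_query = np.array([1.0, 1.5])       # an on-grid and an off-grid point
_, var = gp_posterior(x_grid, y_grid, x_query)
```

Here `var[1]` (midway between grid points) exceeds `var[0]` (at a grid point); it is this spatially varying uncertainty that the registration model marginalizes over.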
Economic evaluation of distribution system smart grid investments
Onen, Ahmet; Cheng, Danling; Broadwater, Robert P.; ...
2014-12-31
This paper investigates the economic benefits of smart grid automation investments. A system consisting of 7 substations and 14 feeders is used in the evaluation. Benefits that can be quantified in terms of dollar savings, termed "hard dollar" benefits, are considered. The Smart Grid investment evaluations include investments in improved efficiency, more cost-effective use of existing system capacity with automated switches, and coordinated control of capacitor banks and voltage regulators. These Smart Grid evaluations are sequentially ordered, resulting in a series of incremental hard dollar benefits. Hard dollar benefits come from improved efficiency, delaying large capital equipment investments, shortened storm restoration times, and reduced customer energy use. Analyses used in the evaluation involve hourly power flow analysis over multiple years and Monte Carlo simulations of switching operations during storms using a reconfiguration-for-restoration algorithm. The economic analysis uses the time-varying value of the Locational Marginal Price. Algorithms used include reconfiguration for restoration involving either manual or automated switches and coordinated control involving two modes of control. Field validations of phase balancing and capacitor design results are presented. The evaluation shows that investments in automation can improve performance while at the same time lowering costs.
NASA Astrophysics Data System (ADS)
Souche, A.; Medvedev, S.; Hartz, E. H.
2009-04-01
The sub-ice topography of Greenland is characterized by a central depression below sea level and by elevated (in some places significantly) margins. Whereas the central depression may be explained by the significant load of the Greenland ice sheet, the origin of the peripheral relief remains unclear. We analyze the influence of the formation of the ice sheet and of carving by glacial erosion on the evolution of topography along the margins of Greenland. Our analysis shows that: (1) The heavy ice loading in the central part of Greenland and the consequent peripheral bulging have a negligible effect on the amplitude of the uplifted Greenland margins. (2) First-order estimates of uplift due to isostatic readjustment caused by glacial erosion and unloading in the fjord systems are up to 1.1 km. (3) Increasing the accuracy of the topographic data (comparing several data sets with grid sizes from 5 km to 50 m) increases the isostatic response in the model. (4) The analysis of mass redistribution during the erosion-sedimentation process, together with data on the age of offshore sediments, allows us to estimate the timing of erosion along the margins of Greenland. This ongoing analysis, however, requires careful accounting of the link between sources (localized glacial erosion) and sinks (offshore sedimentary basins around Greenland).
Upper mantle P velocity structure beneath the Baikal Rift from modeling regional seismic data
NASA Astrophysics Data System (ADS)
Brazier, Richard A.; Nyblade, Andrew A.
2003-02-01
Uppermost mantle P wave velocity structure beneath the Baikal rift and southern margin of the Siberian Platform has been investigated by using a grid search method to model Pnl waveforms from two moderate earthquakes recorded by station TLY at the southwestern end of Lake Baikal. The results yielded a limited number of successful models which indicate the presence of upper mantle P wave velocities beneath the rift axis and the margin of the platform that are 2-5% lower than expected. The magnitude of the velocity anomalies and their location support the presence of a thermal anomaly that extends laterally beyond the rift proper, possibly created by small-scale convection or a plume-like, thermal upwelling.
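The grid-search strategy described above, forward-modeling traveltimes for a suite of candidate velocity models and keeping those that fit the observations, can be illustrated in miniature. The sketch below fits a single uppermost-mantle Pn velocity to synthetic travel times; the distances, delay term, and velocity range are invented for illustration and are not the study's data or method in detail.

```python
import numpy as np

# Hypothetical Pn travel-time data: epicentral distances (km) and arrival
# times (s). The "observed" times are synthetic, generated with an
# uppermost-mantle velocity of 7.9 km/s and a fixed 5 s crustal delay.
distances = np.array([300.0, 500.0, 800.0])
crustal_delay = 5.0
observed_t = distances / 7.9 + crustal_delay

def rms_misfit(vpn):
    """RMS difference between predicted and observed Pn travel times."""
    predicted = distances / vpn + crustal_delay
    return np.sqrt(np.mean((predicted - observed_t) ** 2))

# Exhaustive grid search over candidate uppermost-mantle velocities
candidates = np.arange(7.5, 8.5, 0.01)
misfits = [rms_misfit(v) for v in candidates]
best_v = candidates[int(np.argmin(misfits))]
```

The search recovers the velocity used to generate the synthetic times; in the actual study the forward model is a full Pnl waveform computation rather than a straight-line traveltime, but the accept/reject logic over a model grid is the same.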
Federated ontology-based queries over cancer data
2012-01-01
Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline makes the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. 
A graphical user interface has been developed, supporting ontology-based queries over caGrid data sources. An extensive evaluation of the query reformulation technique is included. Conclusions To support personalised medicine in oncology, it is crucial to retrieve and integrate molecular, pathology, radiology and clinical data in an efficient manner. The semantic heterogeneity of the data makes this a challenging task. Ontologies provide a formal framework to support querying and integration. This paper provides an ontology-based solution for querying distributed databases over service-oriented, model-driven infrastructures. PMID:22373043
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular to irregular, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes with and without clipping and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of the node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilateral and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors.
On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for the NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
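The LSQ gradient reconstruction that several of the schemes above rely on solves a small overdetermined system per cell: value differences to the stencil neighbors against their displacement vectors. A minimal unweighted-LSQ sketch in NumPy (the stencil coordinates are made up for the example, and real solvers work in 3-D with weighting options):

```python
import numpy as np

def lsq_gradient(xc, uc, xn, un):
    """Unweighted least-squares gradient reconstruction at a cell.

    xc: cell-centre coordinates (2,), uc: value there;
    xn: neighbour coordinates (k, 2), un: neighbour values (k,).
    Solves min ||dX g - du|| for the gradient g.
    """
    dX = xn - xc          # displacement of each stencil member
    du = un - uc          # value difference to each stencil member
    g, *_ = np.linalg.lstsq(dX, du, rcond=None)
    return g

# A linear field u = 2x + 3y is reproduced exactly by the reconstruction.
xc = np.array([0.0, 0.0])
xn = np.array([[1.0, 0.1], [-0.3, 1.0], [0.5, -0.8], [-1.0, -0.2]])
u = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
grad = lsq_gradient(xc, u(xc), xn, u(xn))
```

Exactness for linear fields is what makes such reconstructions nominally second-order accurate; the differences among the schemes above come from which neighbors enter `xn` and how the residuals are weighted.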
NASA Astrophysics Data System (ADS)
Greacen, Christopher Edmund
This study analyzes forces that constrain sustainable deployment of cost-effective renewable energy in a developing country. By many economic and social measures, community micro-hydro is a superior electrification option for remote mountainous communities in Thailand. Yet despite a 20-year government program, only 59 projects were built, and of these fewer than half remain operating. By comparison, the national grid has been extended to over 69,000 villages. Based on microeconomic, engineering, social-barrier, common pool resource, and political economy theories, this study investigates, first, why so few micro-hydro projects were built and, second, why so few remain operating. Drawing on historical information, site visits, interviews, surveys, and data logging, this study shows that the marginal status of micro-hydro arises from multiple linked factors spanning from village experiences to geopolitical concerns. The dominance of the parastatal rural electrification utility, the PEA, and its singular focus on grid extension are crucial in explaining why so few projects were built. Buffered from financial consequences by domestic and international subsidies, grid expansion proceeded without consideration of alternatives. High costs borne by villagers for micro-hydro discouraged village choice. The PEA remains central in explaining why few systems remain operating: grid expansion plans favor villages with existing loads, and most villages abandon micro-hydro generators when the grid arrives. Village experiences are fundamental: most projects suffer blackouts, brownouts, and equipment failures due to poor equipment and collective over-consumption. Over-consumption is linked to a mismatch between tariffs and generator technical characteristics. Opportunities to resolve problems languished as limited state support focused on building projects and immediate repairs rather than fundamentals. Despite frustrations, many remain proud of "their power plant".
Interconnecting and selling electricity to PEA offers a mutually beneficial opportunity for the Thai public and for villagers, but one thus far thwarted by bureaucratic challenges. Explanations of renewable energy dissemination in countries with strong state involvement in rural electrification should borrow approaches from political economy concerning the ways in which politics and constellations of other factors eclipse rational economic behavior. At the village level, common pool resource theory reveals causal linkages between appliance use, equipment limitations, power quality, and equipment failures.
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
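One ingredient of such an analysis, the Laplacian eigenvalues of the nominal nearest-neighbour topology, can be computed directly; the ring network and sizes below are illustrative, and this sketch does not reproduce the paper's stochastic synchronization condition:

```python
import numpy as np

def ring_laplacian(n, k):
    """Graph Laplacian of a ring of n nodes, each linked to its k
    nearest neighbours on either side (a common 'nearest-neighbour'
    topology in synchronization studies)."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(L):
    """Second-smallest Laplacian eigenvalue; loosely, larger values
    indicate a nominal network that is easier to synchronize."""
    return np.sort(np.linalg.eigvalsh(L))[1]
```

Sweeping k in such a toy model is one way to see why an optimal number of neighbours (here, more neighbours strengthening connectivity, against whatever costs the full model imposes) can emerge.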
NASA Astrophysics Data System (ADS)
Tscherning, Carl Christian; Arabelos, Dimitrios; Reguzzoni, Mirko
2013-04-01
The GOCE satellite measures gravity gradients, which are filtered and transformed into gradients in an Earth-referenced frame by the GOCE High Level Processing Facility. More than 80,000,000 observations with 6 components each are available from the period 2009-2011. IAG Arctic gravity data were used north of 83 deg., while data over the Antarctic were not used due to bureaucratic restrictions imposed by the data-holders. Subsets of the data have been used to produce gridded values, at 10 km altitude, of gravity anomalies and vertical gravity gradients in 20 deg. x 20 deg. blocks with 10' spacing. Various combinations and densities of data were used to obtain values in areas with known gravity anomalies. The (marginally) best choice was vertical gravity gradients selected with an approximately 0.125 deg. spacing. Using Least-Squares Collocation, error estimates were computed and compared to the differences between the GOCE grids and grids derived from EGM2008 to degree 512. In general, good agreement was found, though with some inconsistencies in certain areas. The computation time on a typical server with 24 processors was about 100 minutes for a block with generally 40,000 GOCE vertical gradients as input. The computations will be updated with new Wiener-filtered data in the near future.
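Least-Squares Collocation prediction with error estimates follows a standard pattern; the sketch below uses a one-dimensional Gaussian covariance with invented parameters, not the GOCE covariance model:

```python
import numpy as np

def lsc_predict(x_obs, y_obs, x_grid, corr_len=1.0, signal_var=1.0,
                noise_var=0.01):
    """Least-Squares Collocation with an assumed Gaussian covariance
    C(r) = signal_var * exp(-(r/corr_len)**2). All parameters here
    are illustrative, not the GOCE processing values."""
    def cov(a, b):
        r = np.abs(a[:, None] - b[None, :])
        return signal_var * np.exp(-(r / corr_len) ** 2)
    Czz = cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))  # data covariance
    Csz = cov(x_grid, x_obs)                                  # signal-data covariance
    pred = Csz @ np.linalg.solve(Czz, y_obs)
    # Collocation error estimate: signal variance minus explained variance.
    err_var = signal_var - np.einsum('ij,ji->i', Csz,
                                     np.linalg.solve(Czz, Csz.T))
    return pred, err_var
```

The same error estimate is what would be compared against GOCE-minus-EGM2008 grid differences, as the abstract describes.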
NASA Astrophysics Data System (ADS)
Hochmuth, K.; Gohl, K.; Leitchenkov, G. L.; Sauermilch, I.; Whittaker, J. M.; De Santis, L.; Olivo, E.; Uenzelmann-Neben, G.; Davy, B. W.
2017-12-01
Although the Southern Ocean plays a fundamental role in the global climate and ocean current system, paleo-ocean circulation models of the Southern Ocean suffer from missing boundary conditions. A more accurate representation of the geometry of the seafloor and its dynamics over long time-scales is key to enabling more precise reconstructions of the development of paleo-currents, the paleo-environment, and the Antarctic ice sheets. The accurate parameterisation of these models controls the meaning and implications of regional and global paleo-climate models. The dynamics of ocean currents in proximity to the continental margins is also controlled by the development of the regional seafloor morphology of the conjugate continental shelves, slopes, and rises. The reassessment of all available reflection seismic and borehole data from Antarctica as well as its conjugate margins of Australia, New Zealand, South Africa, and South America allows us to create paleobathymetric grids for various time slices during the Cenozoic. These grids inform us about sediment distribution and volume as well as local sedimentation rates. The earliest targeted time slice, the Eocene/Oligocene boundary, marks a significant turning point towards an icehouse climate. From the latest Eocene to the earliest Oligocene, the Southern Ocean changed fundamentally from a post-greenhouse to an icehouse environment with the establishment of a vast continental ice sheet on the Antarctic continent. With the calculated sediment distribution maps, we can evaluate the dynamics of the sedimentary cover as well as the development of structural obstacles such as oceanic plateaus and ridges. The ultimate aim of this project is, as a community-based effort, to create paleobathymetric grids at various time slices such as the Mid-Miocene Climatic Optimum and the Pliocene/Pleistocene, and eventually to mimic the time steps used within the modelling community.
The observation of sediment distribution and local sediment volumes opens the door towards more sophisticated paleo-topography studies of the Antarctic continent and more detailed studies of the paleo-circulation. Local paleo-water depths at the oceanic gateways and the positions of paleo-shelf edges strongly influence the regional circulation patterns, supporting more elaborate climate models.
Overgeneration from Solar Energy in California. A Field Guide to the Duck Chart
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul; O'Connell, Matthew; Brinkman, Gregory
In 2013, the California Independent System Operator published the "duck chart," which shows a significant drop in mid-day net load on a spring day as solar photovoltaics (PV) are added to the system. The chart raises concerns that the conventional power system will be unable to accommodate the ramp rate and range needed to fully utilize solar energy, particularly on days characterized by the duck shape. This could result in "overgeneration" and curtailed renewable energy, increasing its costs and reducing its environmental benefits. This paper explores the duck chart in detail, examining how much PV might need to be curtailed if additional grid flexibility measures are not taken, and how curtailment rates can be decreased by changing grid operational practices. It finds that under "business-as-usual" types of assumptions and corresponding levels of grid flexibility in California, solar penetrations as low as 20% of annual energy could lead to marginal curtailment rates that exceed 30%. However, by allowing (or requiring) distributed PV and storage (including new installations that are part of the California storage mandate) to provide grid services, system flexibility could be greatly enhanced. Doing so could significantly reduce curtailment and allow much greater penetration of variable generation resources. Overall, the work described in this paper points to the need to fully integrate distributed resources into grid system planning and operations to allow maximum use of the solar resource.
Overgeneration from Solar Energy in California - A Field Guide to the Duck Chart
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul; Brinkman, Gregory; Jorgenson, Jennie
In 2013, the California Independent System Operator published the "duck chart," which shows a significant drop in mid-day net load on a spring day as solar photovoltaics (PV) are added to the system. The chart raises concerns that the conventional power system will be unable to accommodate the ramp rate and range needed to fully utilize solar energy, particularly on days characterized by the duck shape. This could result in "overgeneration" and curtailed renewable energy, increasing its costs and reducing its environmental benefits. This paper explores the duck chart in detail, examining how much PV might need to be curtailed if additional grid flexibility measures are not taken, and how curtailment rates can be decreased by changing grid operational practices. It finds that under business-as-usual types of assumptions and corresponding levels of grid flexibility in California, solar penetrations as low as 20 percent of annual energy could lead to marginal curtailment rates that exceed 30 percent. However, by allowing (or requiring) distributed PV and storage (including new installations that are part of the California storage mandate) to provide grid services, system flexibility could be greatly enhanced. Doing so could significantly reduce curtailment and allow much greater penetration of variable generation resources in achieving a 50 percent renewable portfolio standard. Overall, the work described in this paper points to the need to fully integrate distributed resources into grid system planning and operations to allow maximum use of the solar resource.
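The curtailment mechanics behind the duck chart can be sketched with a toy hourly profile; the load, PV shape, and minimum-generation level below are invented round numbers, not CAISO data:

```python
def curtailment(load, pv, min_gen):
    """Energy curtailed when net load (load - PV) would drop below the
    fleet's minimum generation level; min_gen stands in for must-run
    thermal plus reserve constraints (illustrative numbers only)."""
    return sum(max(0.0, min_gen - (l - p)) for l, p in zip(load, pv))

def marginal_curtailment_rate(load, pv, min_gen, eps=0.05):
    """Share of the *next* increment of PV energy that is curtailed."""
    pv_hi = [p * (1 + eps) for p in pv]
    dc = curtailment(load, pv_hi, min_gen) - curtailment(load, pv, min_gen)
    return dc / (sum(pv_hi) - sum(pv))

# Toy spring day (MW): flattish load, midday solar hump.
load = [20, 19, 18, 18, 19, 21, 23, 24, 24, 23, 22, 22,
        22, 22, 23, 24, 26, 28, 29, 28, 26, 24, 22, 21]
pv   = [0, 0, 0, 0, 0, 1, 3, 6, 9, 11, 12, 13,
        13, 12, 11, 9, 6, 3, 1, 0, 0, 0, 0, 0]
```

Because curtailment concentrates in a few midday hours, the marginal rate (the share of the next PV increment that is wasted) far exceeds the average rate, which is the distinction the paper's 20-percent/30-percent finding rests on.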
Margin and sensitivity methods for security analysis of electric power systems
NASA Astrophysics Data System (ADS)
Greene, Scott L.
Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. 
A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
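The use of margin sensitivities can be illustrated on the textbook two-bus voltage-collapse problem (a lossless line feeding a unity-power-factor load, with nose point Pmax = E^2/(2X)); this toy stands in for, and is far simpler than, the thesis's large-network computations:

```python
def loading_margin(E, X, P0):
    """Margin to the PV-curve 'nose' of a lossless two-bus system
    serving a unity-power-factor load: Pmax = E**2 / (2*X)."""
    return E**2 / (2 * X) - P0

def margin_sensitivity_dX(E, X):
    """Analytic sensitivity dM/dX of that margin to line reactance."""
    return -E**2 / (2 * X**2)

# First-order estimate of the margin after a reactance change, the way
# margin sensitivities avoid re-running a full continuation study.
M0 = loading_margin(1.0, 0.50, 0.6)
est = M0 + margin_sensitivity_dX(1.0, 0.50) * 0.05
```

The closeness of `est` to an exact recomputation at X = 0.55 shows how a single sensitivity evaluation substitutes for repeating the margin computation, which is what makes fast contingency ranking feasible.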
3D inversion based on multi-grid approach of magnetotelluric data from Northern Scandinavia
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Smirnov, M.; Korja, T. J.; Egbert, G. D.
2012-12-01
In this work we investigate the geoelectrical structure of the cratonic margin of the Fennoscandian Shield by means of magnetotelluric (MT) measurements carried out in Northern Norway and Sweden during the summers of 2011-2012. The project Magnetotellurics in the Scandes (MaSca) focuses on the investigation of the crust, upper mantle, and lithospheric structure in the transition zone from a stable Precambrian cratonic interior to a passive continental margin beneath the Caledonian Orogen and the Scandes Mountains in western Fennoscandia. Recent MT profiles in the central and southern Scandes indicated a large contrast in resistivity between the Caledonides and the Precambrian basement. The alum shales, highly conductive layers between the resistive Precambrian basement and the overlying Caledonian nappes, are revealed by these profiles. Additional measurements in the Northern Scandes were required. Altogether, data from 60 synchronous long-period (LMT) and about 200 broad-band (BMT) sites were acquired. The array stretches from Lofoten and Bodo (Norway) in the west to Kiruna and Skeleftea (Sweden) in the east, covering an area of 500 x 500 km. LMT sites were occupied for about two months, while most of the BMT sites were measured during one day. We have used a new multi-grid approach for 3D electromagnetic (EM) inversion and modelling. Our approach is based on the OcTree discretization, in which the spatial domain is represented by rectangular cells, each of which may be subdivided (recursively) into eight sub-cells. In this simplified implementation the grid is refined only in the horizontal direction, uniformly within each vertical layer. Using the multi-grid we manage to have high grid resolution near the surface (for instance, to tackle galvanic distortions) and lower resolution at greater depth, as the EM fields decay in the Earth according to the diffusion equation. We also benefit in computational cost, as the number of unknowns decreases.
The multi-grid forward solver is implemented within the framework of the modular system for EM inversion (ModEM, by G. Egbert, A. Kelbert, and N. Meqbel), using the ModEM 3D finite-difference staggered-grid forward solver (second-order PDE in the electric field, with divergence correction) as a starting point for our development. The first 3D inversion model for the crust and upper mantle shows highly conducting bodies in the crust, which can be interpreted as alum shales. The eastern and central parts are represented by resistive Precambrian rocks of the Svecofennian and Archaean domains. The upper mantle is resistive and relates to the Baltica basement. We also compare the 3D inversion model with the results of 2D inversion along several profiles. We are able to explain some of the features in the data (out-of-quadrant phases) with the 3D model, thus providing more reliable results compared to the routine 2D approach.
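The computational saving from horizontal-only coarsening with depth can be illustrated with a simple cell count; the grid sizes and coarsening schedule below are invented, not the survey's actual discretization:

```python
def cell_count(nx, ny, layers, coarsen_every):
    """Cells in a layered grid whose horizontal resolution halves
    every 'coarsen_every' layers (fine near the surface, coarse at
    depth), as in a horizontal-only OcTree-style coarsening."""
    total = 0
    for k in range(layers):
        f = 2 ** (k // coarsen_every)      # coarsening factor at depth k
        total += (nx // f) * (ny // f)
    return total
```

Compared with a uniform grid of `nx * ny * layers` cells, the coarsened grid carries fewer unknowns while keeping full resolution in the shallow layers where galvanic distortion lives.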
Cost-Effectiveness of Old and New Technologies for Aneuploidy Screening.
Sinkey, Rachel G; Odibo, Anthony O
2016-06-01
Cost-effectiveness analyses allow assessment of whether marginal gains from new technology are worth increased costs. Several studies have examined the cost-effectiveness of Down syndrome (DS) screening and found it to be cost-effective. Noninvasive prenatal screening also appears to be cost-effective among high-risk women with respect to DS screening, but not for the general population. Chromosomal microarray (CMA) is a genetic sequencing method superior to but more expensive than karyotype. In light of CMA's greater ability to detect genetic abnormalities, it is cost-effective when used for prenatal diagnosis of an anomalous fetus. This article covers methodology and salient issues of cost-effectiveness. Copyright © 2016 Elsevier Inc. All rights reserved.
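The core quantity in such analyses is the incremental cost-effectiveness ratio (ICER); the figures in the check below are invented, not numbers from the screening literature:

```python
def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g., per additional aneuploidy case detected)."""
    return (cost_new - cost_old) / (eff_new - eff_old)
```

A new test is typically called cost-effective when its ICER falls below a chosen willingness-to-pay threshold.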
NASA Astrophysics Data System (ADS)
Dar, Zamiyad
Prices in the electricity market change every five minutes, and prices in peak demand hours can be four or five times higher than prices in normal off-peak hours. Renewable energy such as wind power has zero marginal cost, and a large percentage of wind energy in a power grid can reduce prices significantly. The variability of wind power, however, prevents it from being constantly available in peak hours. The price differentials between off-peak and on-peak hours due to wind power variations provide an opportunity for a storage device owner to buy energy at a low price and sell it in high-price hours. In a large and complex power grid, there are many possible locations for installation of a storage device. Storage device owners prefer to install their device at locations that allow them to maximize profit. Market participants do not possess much information about the system operator's dispatch, the power grid, competing generators, or the transmission system. The publicly available data from the system operator usually consist of Locational Marginal Prices (LMPs), load, reserve prices, and regulation prices. In this thesis, we develop a method to find the optimum location of a storage device without using grid, transmission, or generator data. We formulate and solve an optimization problem to find the most profitable location for a storage device using only publicly available market pricing data such as LMPs and reserve prices. We incorporate constraints arising from storage device operating limitations in our objective function. We use binary optimization and the branch-and-bound method to optimize the operation of a storage device at a given location to earn maximum profit. We use two different versions of our method and optimize the profitability of a storage unit at each location in a 36-bus model of the northeastern United States and southeastern Canada for four representative days representing the four seasons of the year.
Finally, we compare the results from the two versions of our method with a multi-period stochastically optimized economic dispatch of the same power system, with storage devices at the locations proposed by our method. We observe a small gap in profit values arising from the effect of the storage device on market prices. However, the ranking of different locations in terms of profitability remains almost unchanged. This leads us to conclude that our method can successfully predict the optimum locations for installation of storage units in a complex grid using only publicly available electricity market data.
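The price-arbitrage core of the location problem can be sketched from public LMPs alone. The brute-force single-cycle search below is a deliberately simplified stand-in (the thesis uses binary optimization with branch-and-bound and fuller device constraints), and the prices and round-trip efficiency are invented:

```python
from itertools import combinations

def best_arbitrage(lmp, eff=0.85):
    """Best single buy-hour/sell-hour pair for a 1 MWh, 1 MW storage
    unit given hourly LMPs: brute force over chronologically ordered
    hour pairs. Returns (profit, buy_hour, sell_hour)."""
    best = (0.0, None, None)
    for b, s in combinations(range(len(lmp)), 2):
        profit = eff * lmp[s] - lmp[b]   # buy at b, sell (with losses) at s
        if profit > best[0]:
            best = (profit, b, s)
    return best

def rank_locations(lmp_by_bus, eff=0.85):
    """Rank candidate buses by achievable arbitrage profit, using
    only public pricing data, in the spirit of the thesis."""
    return sorted(lmp_by_bus,
                  key=lambda bus: best_arbitrage(lmp_by_bus[bus], eff)[0],
                  reverse=True)
```

Note that this sketch, like the thesis's first-pass method, treats the device as a price-taker; the observed profit gap comes precisely from a real device's effect on prices.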
Technical, economic and legal aspects of wind energy utilization
NASA Astrophysics Data System (ADS)
Obermair, G. M.; Jarass, L.
Potentially problematic areas in the implementation of wind turbines for electricity production in West Germany are identified and briefly discussed. Variations in wind generator output due to source variability may cause power regulation difficulties in the grid and also raise uncertainties in utility capacity planning for new construction. Catastrophic machine component failures, such as a thrown blade, are hazardous to life and property, while lulls in the resource strain power regulation capabilities once grid penetration reaches significant levels. Economically, the lack of actual data from large-scale wind projects is cited as a barrier to accurate cost comparisons of wind-derived power relative to other generating sources, although the breakeven cost for wind power has been found to be $2000/kW of installed capacity, i.e., a marginal cost of $0.10/kWh.
Hemkens, Lars G; Hilden, Kristian M; Hartschen, Stephan; Kaiser, Thomas; Didjurgeit, Ulrike; Hansen, Roland; Bender, Ralf; Sawicki, Peter T
2008-08-01
In addition to the metrological quality of international normalized ratio (INR) monitoring devices used in patients' self-management of long-term anticoagulation, the effectiveness of self-monitoring with such devices has to be evaluated under real-life conditions, with a focus on clinical implications. One approach to evaluating the clinical significance of inaccuracies is error-grid analysis, as already established in self-monitoring of blood glucose. Two anticoagulation monitors were compared in a real-life setting and a novel error-grid instrument for oral anticoagulation was evaluated. In a randomized crossover study, 16 patients performed self-management of anticoagulation using the INRatio and the CoaguChek S systems. Main outcome measures were clinically relevant INR differences according to established criteria and to the error-grid approach. A lower rate of clinically relevant disagreements according to Anderson's criteria was found with CoaguChek S than with INRatio, without statistical significance (10.77% vs. 12.90%; P = 0.787). Using the error grid we found broadly consistent results: more measurement pairs with discrepancies of no or low clinical relevance were found with CoaguChek S, whereas with INRatio we found more differences of moderate clinical relevance. A high rate of patient satisfaction with both point-of-care devices was found, with only marginal differences. The investigated point-of-care devices are shown to be, in principle, appropriate for monitoring the INR. The error grid is useful for comparing monitoring methods with a focus on clinical relevance under real-life conditions, beyond assessing pure metrological quality, but we emphasize that additional trials using this instrument with larger patient populations are needed to detect differences in clinically relevant disagreements.
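The zone logic of an error grid can be sketched as follows; the therapeutic window and the three-zone rules here are illustrative stand-ins, not the clinically derived boundaries of the published instrument:

```python
def error_grid_zone(inr_ref, inr_test, low=2.0, high=3.5):
    """Toy error-grid classification for a paired INR reading:
    zones depend on whether the two readings imply the same dosing
    action, not merely on their numerical difference."""
    def action(v):
        if v < low:
            return 'raise dose'
        if v > high:
            return 'lower dose'
        return 'no change'
    if action(inr_ref) == action(inr_test):
        return 'A'  # same clinical decision: no consequence
    if 'no change' in (action(inr_ref), action(inr_test)):
        return 'B'  # unnecessary or missed minor adjustment
    return 'C'      # opposite dosing decisions: clinically relevant
```

This captures the key idea separating error grids from pure metrological criteria: a large numerical error inside the therapeutic range can matter less than a small one that straddles a dosing boundary.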
A new measure for gene expression biclustering based on non-parametric correlation.
Flores, Jose L; Inza, Iñaki; Larrañaga, Pedro; Calvo, Borja
2013-12-01
One of the emerging techniques for the analysis of DNA microarray data, known as biclustering, is the search for subsets of genes and conditions that are coherently expressed. These subgroups provide clues about the main biological processes. Until now, different approaches to this problem have been proposed. Most of them use the mean squared residue as a quality measure, but relevant and interesting patterns, such as shifting or scaling patterns, cannot be detected. Furthermore, recent papers show that there exist new coherence patterns involved in different kinds of cancer and tumors, such as inverse relationships between genes, which cannot be captured. The proposed measure, called the Spearman's biclustering measure (SBM), estimates the quality of a bicluster based on the non-linear correlation among genes and conditions simultaneously. The search for biclusters is performed using an evolutionary technique called estimation of distribution algorithms, which uses the SBM measure as its fitness function. This approach has been examined from different points of view using artificial and real microarrays. The assessment process involved the use of quality indexes, a set of reference bicluster patterns including new patterns, and a set of statistical tests. The performance has also been examined using real microarrays and compared to different algorithmic approaches such as Bimax, CC, OPSM, Plaid, and xMotifs. SBM shows several advantages, such as the ability to recognize more complex coherence patterns (shifting, scaling, and inversion) and the capability to selectively marginalize genes and conditions depending on their statistical significance. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
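A rank-based bicluster score in the spirit of SBM can be sketched with plain NumPy; this is a simplified stand-in (gene rows only, no simultaneous condition correlation, no tie handling), with invented data:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation for tie-free vectors:
    Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def bicluster_score(X):
    """Mean absolute pairwise Spearman correlation among the rows
    (genes) of a bicluster submatrix. Rank-based, so shifting and
    scaling patterns score highly; |rho| treats inverted genes as
    coherent rather than penalizing them."""
    n = X.shape[0]
    cors = [abs(spearman(X[i], X[j]))
            for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(cors))
```

Because ranks are invariant to shifting and scaling, and the absolute value admits inversion, this toy score rewards exactly the pattern classes that the mean squared residue misses.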
Tamayao, Mili-Ann M; Michalek, Jeremy J; Hendrickson, Chris; Azevedo, Inês M L
2015-07-21
We characterize regionally specific life cycle CO2 emissions per mile traveled for plug-in hybrid electric vehicles (PHEVs) and battery electric vehicles (BEVs) across the United States under alternative assumptions for regional electricity emission factors, regional boundaries, and charging schemes. We find that estimates based on marginal vs. average grid emission factors differ by as much as 50% (using North American Electric Reliability Corporation (NERC) regional boundaries). Use of state boundaries versus NERC region boundaries results in estimates that differ by as much as 120% for the same location (using average emission factors). We argue that consumption-based marginal emission factors are conceptually appropriate for evaluating the emissions implications of policies that increase electric vehicle sales or use in a region. We also examine generation-based marginal emission factors to assess robustness. Using these two estimates of NERC region marginal emission factors, we find the following: (1) delayed charging (i.e., starting at midnight) leads to higher emissions in most cases, due largely to increased coal in the marginal generation mix at night; (2) the Chevrolet Volt has higher expected life cycle emissions than the Toyota Prius hybrid electric vehicle (the most efficient U.S. gasoline vehicle) across the U.S. in nearly all scenarios; (3) the Nissan Leaf BEV has lower life cycle emissions than the Prius in the western U.S. and in Texas, but the Prius has lower emissions in the northern Midwest regardless of assumed charging scheme and marginal emissions estimation method; (4) in other regions the lowest-emitting vehicle depends on charge timing and emission factor estimation assumptions.
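The arithmetic behind findings (2)-(4) reduces to multiplying an emission factor by electricity use per mile; the charging efficiency, emission factors, and hybrid figure below are round illustrative numbers, not the paper's regional estimates:

```python
def ev_gco2_per_mile(kwh_per_mile, ef_g_per_kwh, charging_eff=0.90):
    """Use-phase CO2 per mile for an EV from an electricity emission
    factor (marginal or average). charging_eff and the figures below
    are assumed round numbers for illustration."""
    return kwh_per_mile / charging_eff * ef_g_per_kwh

hybrid_g_per_mile = 8887.0 / 50.0            # ~50 mpg hybrid, 8887 gCO2/gallon
ev_marginal = ev_gco2_per_mile(0.30, 700.0)  # coal-heavy marginal mix
ev_average  = ev_gco2_per_mile(0.30, 450.0)  # milder average mix
```

With these assumed factors the same EV lands on opposite sides of the hybrid depending on whether a marginal or an average factor is used, which is the paper's point about emission-factor choice flipping the comparison.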
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hathaway, M.D.; Wood, J.R.
1997-10-01
CFD codes capable of utilizing multi-block grids provide the capability to analyze the complete geometry of centrifugal compressors. Attendant with this increased capability is potentially increased grid setup time and more computational overhead, with a resultant increase in wall clock time to obtain a solution. If the increased difficulty of obtaining a solution significantly improves the solution over that obtained by modeling the features of the tip clearance flow or the typical bluntness of a centrifugal compressor's trailing edge, then the additional burden is worthwhile. However, if the additional information obtained is of marginal use, then modeling of certain features of the geometry may provide reasonable solutions for designers to make comparative choices when pursuing a new design. In this spirit, a sequence of grids was generated to study the relative importance of modeling versus detailed gridding of the tip gap and blunt trailing edge regions of the NASA large low-speed centrifugal compressor, for which considerable detailed internal laser anemometry data are available for comparison. The results indicate: (1) There is no significant difference in predicted tip clearance mass flow rate whether the tip gap is gridded or modeled. (2) Gridding rather than modeling the trailing edge results in better predictions of some flow details downstream of the impeller, but otherwise appears to offer no great benefits. (3) The pitchwise variation of absolute flow angle decreases rapidly up to 8% impeller radius ratio and much more slowly thereafter. Although some improvements in prediction of flow field details are realized as a result of analyzing the actual geometry, there is no clear consensus that any of the grids investigated produced superior results in every case when compared to the measurements.
However, if a multi-block code is available, it should be used, as it has the propensity for enabling better predictions than a single-block code.
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Long, D. G.
2017-12-01
Since 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Until recently, however, the available global gridded passive microwave data sets had not been produced consistently: various projections (equal-area, polar stereographic) and a number of different gridding techniques were used, along with varying temporal sampling and a mix of Level 2 source data versions. In addition, not all data from all sensors had been processed completely, nor in any one consistent way. Furthermore, the original gridding techniques were relatively primitive and produced 25 km grids using the original EASE-Grid definition, which is not easily accommodated in modern software packages. As part of NASA MEaSUREs, we have re-processed all data from the SMMR, SSM/I-SSMIS, and AMSR-E instruments, using the most mature Level 2 data. The Calibrated, Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) gridded data are now available from the NSIDC DAAC. The data are distributed as netCDF files that comply with the CF-1.6 and ACDD-1.3 conventions. The data have been produced on EASE-Grid 2.0 projections at a smoothed 25 km resolution and at spatially enhanced resolutions, up to 3.125 km depending on channel frequency, using the radiometer version of the Scatterometer Image Reconstruction (rSIR) method. We expect this newly produced data set to enable scientists to better analyze trends in coastal regions, marginal ice zones and mountainous terrain that were not possible with the previous gridded passive microwave data.
The use of the EASE-Grid 2.0 definition and netCDF-CF formatting allows users to extract compliant geotiff images and provides for easy importing and correct reprojection interoperability in many standard packages. As a consistently-processed, high-quality satellite passive microwave ESDR, we expect this data set to replace earlier gridded passive microwave data sets, and to pave the way for new insights from higher-resolution derived geophysical products.
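Because EASE-Grid 2.0 resolutions nest by powers of two (25, 12.5, 6.25, 3.125 km), fine and coarse grids align exactly, which is part of what makes reprojection and aggregation straightforward; a minimal index mapping, with full grid-definition details omitted:

```python
def parent_cell(row, col, fine_km=3.125, coarse_km=25.0):
    """Map a fine-resolution EASE-Grid 2.0 cell index to the coarse
    cell that contains it. Because the resolutions nest by powers of
    two, the mapping is exact integer division."""
    factor = round(coarse_km / fine_km)   # 8 for 3.125 km inside 25 km
    return row // factor, col // factor
```

Each 25 km cell thus contains an exact 8 x 8 block of 3.125 km cells, with no partial overlaps to interpolate across.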
Severino, Patricia; Alvares, Adriana M; Michaluart, Pedro; Okamoto, Oswaldo K; Nunes, Fabio D; Moreira-Filho, Carlos A; Tajara, Eloiza H
2008-01-01
Background: Oral squamous cell carcinoma (OSCC) is a frequent neoplasm, which is usually aggressive and has unpredictable biological behavior and unfavorable prognosis. The comprehension of the molecular basis of this variability should lead to the development of targeted therapies as well as to improvements in specificity and sensitivity of diagnosis. Results: Samples of primary OSCCs and their corresponding surgical margins were obtained from male patients during surgery and their gene expression profiles were screened using whole-genome microarray technology. Hierarchical clustering and Principal Components Analysis were used for data visualization and One-way Analysis of Variance was used to identify differentially expressed genes. Samples clustered mostly according to disease subsite, suggesting molecular heterogeneity within tumor stages. In order to corroborate our results, two publicly available datasets of microarray experiments were assessed. We found significant molecular differences between OSCC anatomic subsites concerning groups of genes presently or potentially important for drug development, including mRNA processing, cytoskeleton organization and biogenesis, metabolic process, cell cycle and apoptosis. Conclusion: Our results corroborate literature data on molecular heterogeneity of OSCCs. Differences between disease subsites and among samples belonging to the same TNM class highlight the importance of gene expression-based classification and challenge the development of targeted therapies. PMID:19014556
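The per-gene screening step described above can be illustrated with a small sketch. This is not the authors' pipeline; it is a minimal pure-Python one-way ANOVA F statistic of the kind used to flag differentially expressed genes across sample groups, with invented expression values.

```python
def one_way_f(groups):
    """One-way ANOVA F statistic for one gene's expression values,
    grouped e.g. by anatomic subsite. groups is a list of lists of
    expression measurements, one inner list per group."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total sample count
    grand = sum(sum(g) for g in groups) / n       # grand mean
    # between-group sum of squares: how far group means sit from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: scatter of samples around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A gene whose mean expression differs strongly across groups yields a large F (compared against an F distribution for a p-value); groups with identical means give F = 0.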
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on quantifying modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model carries a weight determined by its prior weight and marginal likelihood. Thus, estimating a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling, but M-H is not an efficient sampler for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, we incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling step. Comparison results demonstrate that the improved NSE raises the efficiency of marginal likelihood estimation significantly; however, both the improved and original NSEs suffer from considerable instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse-grid surrogates.
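As a concrete sketch of the estimator discussed above: the toy nested sampler below (illustrative only, not the authors' code) estimates the evidence of a standard-normal likelihood under a uniform prior on [-4, 4], where the analytic answer is 1/8. The constrained local step here is plain rejection sampling from the prior, standing in for the M-H or DREAMzs update.

```python
import math
import random

def log_likelihood(theta):
    # Toy problem: standard-normal likelihood; with a uniform prior on
    # [-4, 4] the analytic evidence is ~ (1/8) = 0.125.
    return -0.5 * theta * theta - 0.5 * math.log(2.0 * math.pi)

def nested_sampling(n_live=50, lo=-4.0, hi=4.0, seed=1):
    """Minimal nested-sampling evidence estimate.

    The evidence accumulates as Z = sum_i L_i * w_i with shell weights
    w_i = X_{i-1} - X_i, where X_i = exp(-i / n_live) is the remaining
    prior volume after i replacements.
    """
    rng = random.Random(seed)
    live = [rng.uniform(lo, hi) for _ in range(n_live)]
    logl = [log_likelihood(t) for t in live]
    z, x_prev = 0.0, 1.0
    for i in range(1, 5000):
        worst = min(range(n_live), key=lambda j: logl[j])
        l_min = logl[worst]
        x_i = math.exp(-i / n_live)
        z += math.exp(l_min) * (x_prev - x_i)   # shell contribution
        x_prev = x_i
        # Replace the worst point by a prior draw satisfying L > L_min
        # (rejection sampling; a real NSE would use an MCMC step here).
        while True:
            cand = rng.uniform(lo, hi)
            if log_likelihood(cand) > l_min:
                break
        live[worst], logl[worst] = cand, log_likelihood(cand)
        # Stop once the live points can no longer change Z appreciably.
        if math.exp(max(logl)) * x_i < 1e-4 * z:
            break
    # Fold in the final contribution of the surviving live points.
    z += x_prev * sum(math.exp(l) for l in logl) / n_live
    return z
```

The rejection step's acceptance rate collapses as the likelihood constraint tightens, which is exactly why efficient local samplers such as DREAMzs matter in practice.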
Mallen, C A
1983-01-01
The sex-role stereotypes held by heterosexual and homosexual men were examined by comparing their Repertory Grid scores. It was found that homosexual men held less rigid sex-role stereotypes than heterosexuals. Degree of opposite-sex identification was marginally greater in homosexuals, but neither group showed strong masculine or feminine stereotypic identification. Homosexual men perceived themselves as psychologically more distant from their fathers than did their heterosexual counterparts; this was probably an effect of homosexuality rather than a cause.
Viray, Hollis; Bradley, William R; Schalper, Kurt A; Rimm, David L; Gould Rothberg, Bonnie E
2013-08-01
The distribution of the standard melanoma antibodies S100, HMB-45, and Melan-A has been extensively studied. Yet the overlap in their expression is less well characterized. Our aim was to determine the joint distributions of the classic melanoma markers and whether classification according to joint antigen expression has prognostic relevance. S100, HMB-45, and Melan-A were assayed by immunofluorescence-based immunohistochemistry on a large tissue microarray of 212 cutaneous melanoma primary tumors and 341 metastases. Positive expression for each antigen required display of immunoreactivity for at least 25% of melanoma cells. Marginal and joint distributions were determined across all markers. Bivariate associations with established clinicopathologic covariates and melanoma-specific survival analyses were conducted. Of 322 assayable melanomas, 295 (91.6%), 203 (63.0%), and 236 (73.3%) stained with S100, HMB-45, and Melan-A, respectively. Twenty-seven melanomas, representing a diverse set of histopathologic profiles, were S100 negative. Coexpression of all 3 antibodies was observed in 160 melanomas (49.7%). Intensity of endogenous melanin pigment did not confound immunolabeling. Among primary tumors, associations with clinicopathologic parameters revealed a significant relationship only between HMB-45 and microsatellitosis (P = .02). No significant differences among clinicopathologic criteria were observed across the HMB-45/Melan-A joint distribution categories. Neither marginal HMB-45 (P = .56) nor Melan-A (P = .81), nor their joint distributions (P = .88), was associated with melanoma-specific survival. Comprehensive characterization of the marginal and joint distributions for S100, HMB-45, and Melan-A across a large series of cutaneous melanomas revealed diversity of expression across this group of antigens. However, these immunohistochemically defined subclasses of melanomas do not significantly differ according to clinicopathologic correlates or outcome.
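A marginal/joint tabulation of the kind reported above can be sketched as follows. The marker names and boolean staining calls in the usage example are invented, with True standing for immunoreactivity in at least 25% of cells, per the abstract's positivity threshold.

```python
from itertools import product

def marker_distributions(calls):
    """Tabulate marginal and joint positivity fractions for a marker panel.

    calls maps marker name -> list of per-tumor booleans (True = positive).
    Returns (marginal, joint): marginal positivity per marker, and the
    fraction of tumors in every positive/negative combination of markers.
    """
    names = list(calls)
    n = len(calls[names[0]])
    # marginal distribution: fraction positive for each marker alone
    marginal = {m: sum(calls[m]) / n for m in names}
    # joint distribution: fraction of tumors in each combination of calls
    joint = {}
    for combo in product([True, False], repeat=len(names)):
        key = tuple(zip(names, combo))
        joint[key] = sum(
            all(calls[m][i] == v for m, v in key) for i in range(n)
        ) / n
    return marginal, joint
```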
Study on reasonable curtailment rate of large scale renewable energy
NASA Astrophysics Data System (ADS)
Li, Nan; Yuan, Bo; Zhang, Fuqiang
2018-02-01
The energy curtailment rate of renewable generation is an important indicator of renewable energy consumption, and it is also an important parameter for determining the arrangement of other power sources and grids at the planning stage. In general, consuming the spike power of renewable energy, which is only a small proportion of total output, requires dispatching a large number of peaking resources, which reduces the safety and stability of the system. From a planning perspective, if a certain amount of renewable energy is allowed to be curtailed, the overall peaking demand of the system is reduced and the construction of peak power supply can be deferred, avoiding the expensive marginal cost of absorption. In this paper, we introduce a reasonable energy curtailment rate into power system planning and, using the GESP power planning software, conclude that the reasonable energy curtailment rate for regional grids in China is 3%-10% in 2020.
Life Cycle Assessment of Solar Photovoltaic Microgrid Systems in Off-Grid Communities.
Bilich, Andrew; Langham, Kevin; Geyer, Roland; Goyal, Love; Hansen, James; Krishnan, Anjana; Bergesen, Joseph; Sinha, Parikhit
2017-01-17
Access to a reliable source of electricity creates significant benefits for developing communities. Smaller versions of electricity grids, known as microgrids, have been developed as a solution to energy access problems. Using attributional life cycle assessment, this project evaluates the environmental and energy impacts of three photovoltaic (PV) microgrids compared to other energy options for a model village in Kenya. When normalized per kilowatt hour of electricity consumed, PV microgrids, particularly PV-battery systems, have lower impacts than other energy access solutions in climate change, particulate matter, photochemical oxidants, and terrestrial acidification. When compared to small-scale diesel generators, PV-battery systems save 94-99% in the above categories. When compared to the marginal electricity grid in Kenya, PV-battery systems save 80-88%. Contribution analysis suggests that electricity and primary metal use during component, particularly battery, manufacturing are the largest contributors to overall PV-battery microgrid impacts. Accordingly, additional savings could be seen from changing battery manufacturing location and ensuring end-of-life recycling. Overall, this project highlights the potential for PV microgrids to be feasible, adaptable, long-term energy access solutions, with health and environmental advantages compared to traditional electrification options.
Erosional dynamics, flexural isostasy, and long-lived escarpments: A numerical modeling study
NASA Technical Reports Server (NTRS)
Tucker, Gregory E.; Slingerland, Rudy L.
1994-01-01
Erosional escarpments are common features of high-elevation rifted continents. Fission track data suggest that these escarpments form by base-level lowering and/or marginal uplift during rifting, followed by lateral retreat of an erosion front across tens to hundreds of kilometers. Previous modeling studies have shown that this characteristic pattern of denudation can have a profound impact upon marginal isostatic uplift and the evolution of offshore sedimentary basins. Yet at present there is only a rudimentary understanding of the geomorphic mechanisms capable of driving such prolonged escarpment retreat. In this study we present a nonlinear, two-dimensional landscape evolution model that is used to assess the necessary and sufficient conditions for long-term retreat of a rift-generated escarpment. The model represents topography as a grid of cells, with drainage networks evolving as water flows across the grid in the direction of steepest descent. The model accounts for sediment production by weathering, fluvial sediment transport, bedrock channel erosion, and hillslope sediment transport by diffusive mechanisms and by mass failure. Numerical experiments presented explore the effects of different combinations of erosion processes and of dynamic coupling between denudation and flexural isostatic uplift. Model results suggest that the necessary and sufficient conditions for long-term escarpment retreat are (1) incising bedrock channels in which the erosion rate increases with increasing drainage area, so that the channels steepen and propagate headward; (2) a low rate of sediment production relative to sediment transport efficiency, which promotes relief-generating processes over diffusive ones; (3) high continental elevation, which allows greater freedom for fluvial dissection; and (4) any process, including flexural isostatic uplift, that helps to maintain a drainage divide near an escarpment crest.
Flexural isostatic uplift also facilitates escarpment retreat, increasing channel gradients and accelerating erosion, which in turn generates additional isostatic uplift. Of all the above conditions, high continental elevation is common to most rift-margin escarpments and may ultimately be the most important factor.
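The steepest-descent drainage routing described above can be sketched as a D8-style scheme: each cell drains to whichever of its eight neighbors offers the steepest downhill gradient. This is an illustrative reimplementation of that one ingredient, not the authors' model code.

```python
import math

# Offsets to the 8 neighbors of a grid cell (row, col)
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_receivers(z):
    """Steepest-descent flow routing on a grid of elevations.

    z is a list of rows of elevation values. Returns, for each cell,
    the (row, col) of the neighbor it drains to; cells with no lower
    neighbor (pits or local lows) drain to themselves.
    """
    rows, cols = len(z), len(z[0])
    recv = [[(r, c) for c in range(cols)] for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best_slope = 0.0
            for dr, dc in OFFSETS:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    # slope along the link; diagonal links are sqrt(2) longer
                    slope = (z[r][c] - z[rr][cc]) / math.hypot(dr, dc)
                    if slope > best_slope:
                        best_slope, recv[r][c] = slope, (rr, cc)
    return recv
```

Chaining each cell to its receiver reconstructs the drainage network along which fluvial transport and bedrock incision would then be computed.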
Integrated geophysical study of the northeastern margin of Tibetan Plateau
NASA Astrophysics Data System (ADS)
Shi, L.; Meng, X.; Guo, L.
2011-12-01
The Tibetan Plateau, the so-called "Roof of the World", is a direct consequence of the collision of the Indian plate with the Eurasian plate starting in early Cenozoic time; the continent-continent collision is still ongoing. The northeastern margin of the Tibetan Plateau is the frontal part of the plateau where it extends toward the mainland, and it is a favorable area for studying the uplift and deformation of the plateau. In the past decades, a variety of geophysical methods have been applied to study the geodynamics and geological tectonics of this region. We assembled satellite-derived free-air gravity anomalies with a resolution of one arc-minute from the Scripps Institution of Oceanography and reduced them to obtain complete Bouguer gravity anomalies. We then gridded the complete Bouguer gravity anomalies on a regular grid and processed them with the preferential continuation method to attenuate high-frequency noise and to analyze regional and residual anomalies. We also calculated the tilt-angle derivative of the complete Bouguer gravity anomalies to reveal geological structures more clearly and in more detail. We then calculated the depth distribution of the Moho discontinuity in this area by 3D density-interface inversion. From these preliminary results, we analyzed the main deep faults and geological tectonics of the region. We extracted complete Bouguer gravity anomaly data along seven important profiles and performed forward modeling and inversion on each profile, constrained by geological information and other geophysical data. In the future, we will perform 3D constrained inversion of the complete Bouguer gravity anomalies in this region to better understand the deep structure and tectonics of the northeastern margin of the Tibetan Plateau.
Acknowledgment: We acknowledge the financial support of the SinoProbe project (201011039), the Fundamental Research Funds for the Central Universities (2010ZY26, 2011PY0184), and the National Natural Science Foundation of China (40904033).
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; Wang, D.
2017-12-01
A multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampler for high-dimensional or complex likelihood functions. To improve the performance of NSE, the more efficient and elaborate DREAMzs sampling algorithm is integrated into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse-grid stochastic collocation method is used to build surrogates for the original groundwater model.
NASA Astrophysics Data System (ADS)
Shahraki, Meysam; Schmeling, Harro; Haas, Peter
2018-01-01
Isostatic equilibrium is a good approximation for passive continental margins. In these regions, geoid anomalies are proportional to the local dipole moment of density-depth distributions, which can be used to constrain the amount of oceanic to continental lithospheric thickening (lithospheric jumps). We consider a five- or three-layer 1D model for the oceanic and continental lithosphere, respectively, composed of water, a sediment layer (both for the oceanic case), the crust, the mantle lithosphere and the asthenosphere. The mantle lithosphere is defined by a mantle density, which is a function of temperature and composition, due to melt depletion. In addition, a depth-dependent sediment density associated with compaction and ocean floor variation is adopted. We analyzed satellite-derived geoid data and, after filtering, extracted typical averaged profiles across the Western and Eastern passive margins of the South Atlantic. They show geoid jumps of 8.1 m and 7.0 m for the Argentinian and African sides, respectively. Together with topography data and an averaged crustal density at the conjugate margins, these jumps are interpreted as isostatic geoid anomalies and yield best-fitting crustal and lithospheric thicknesses. In a grid search approach five parameters are systematically varied, namely the thicknesses of the sediment layer, the oceanic and continental crusts and the oceanic and the continental mantle lithosphere. The set of successful models reveals a clear asymmetry of 15 km between the South African and Argentine lithospheres. Preferred models predict a sediment layer at the Argentine margin of 3-6 km and at the South African margin of 1-2.5 km. Moreover, we derived a linear relationship between oceanic lithosphere thickness, sediment thickness and lithospheric jumps at the South Atlantic margins. It suggests that the continental lithospheres on the western and eastern South Atlantic are thicker by 45-70 and 60-80 km than the oceanic lithospheres, respectively.
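The five-parameter search described above follows a generic grid-search pattern, sketched below. The `forward` callable is a placeholder for the isostatic geoid-anomaly calculation, which is not reproduced here; any combination whose predicted geoid jump matches the observation within tolerance is kept as a "successful model".

```python
from itertools import product

def grid_search(param_grids, forward, observed, tol):
    """Exhaustive grid search over candidate parameter values.

    param_grids: one list of candidate values per parameter (e.g. sediment,
    oceanic/continental crust and mantle-lithosphere thicknesses).
    forward: maps a parameter combination to a predicted observable
    (a stand-in for the geoid-jump forward model).
    Returns all combinations matching `observed` within `tol`.
    """
    keep = []
    for combo in product(*param_grids):
        if abs(forward(combo) - observed) <= tol:
            keep.append(combo)
    return keep
```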
Integrating Renewable Generation into Grid Operations: Four International Experiences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weimar, Mark R.; Mylrea, Michael E.; Levin, Todd
International experiences with power sector restructuring and the resultant impacts on bulk power grid operations and planning may provide insight into policy questions for the evolving United States power grid as resource mixes are changing in response to fuel prices, an aging generation fleet and climate goals. Australia, Germany, Japan and the UK were selected to represent a range in the level and attributes of electricity industry liberalization in order to draw comparisons across a variety of regions in the United States such as California, ERCOT, the Southwest Power Pool and the Southeast Reliability Region. The study draws conclusions through a literature review of the four case study countries with regard to the changing resource mix and the electricity industry sector structure and their impact on grid operations and planning. This paper derives lessons learned and synthesizes implications for the United States based on answers to the above questions and the challenges faced by the four selected countries. Each country was examined to determine the challenges to its bulk power sector based on its changing resource mix, market structure, policies driving the changing resource mix, and policies driving restructuring. Each country's approach to solving those challenges was examined, as well as how each country's market structure either exacerbated or mitigated the approaches to solving the challenges to its bulk power grid operations and planning. All countries' policies encourage renewable energy generation. One significant finding was the low to zero marginal cost of intermittent renewables and its potential negative impact on long-term resource adequacy. No dominant solution has emerged, although a capacity market was introduced in the UK and is being contemplated in Japan. Germany has proposed the Energy Market 2.0 to encourage flexible generation investment.
The grid operator in Australia has proposed several approaches to maintaining synchronous generation. Interconnections to other regions provide added opportunities for balancing that would not otherwise be available and, at this point, have allowed for the integration of renewables.
Integrating scientific data for drug discovery and development using the Life Sciences Grid.
Dow, Ernst R; Hughes, James B; Stephens, Susie M; Narayan, Vaibhav A; Bishop, Richard W
2009-06-01
There are many daunting challenges for companies who wish to bring novel drugs to market. The information complexity around potential drug targets has increased greatly with the introduction of microarrays, high-throughput screening and other technological advances over the past decade, but has not yet fundamentally increased our understanding of how to modify a disease with pharmaceuticals. Further, the bar has been raised in getting a successful drug to market as just being new is no longer enough: the drug must demonstrate improved performance compared with the ever increasing generic pharmacopeia to gain support from payers and government authorities. In addition, partly as a consequence of a climate of concern regarding the safety of drugs, regulatory authorities have approved fewer new molecular entities compared to historical norms over the past few years. To overcome these challenges, the pharmaceutical industry must fully embrace information technology to bring better understood compounds to market. An important first step in addressing an unmet medical need is in understanding the disease and identifying the physiological target(s) to be modulated by the drug. Deciding which targets to pursue for a given disease requires a multidisciplinary effort that integrates heterogeneous data from many sources, including genetic variations of populations, changes in gene expression and biochemical assays. The Life Science Grid was developed to provide a flexible framework to integrate such diverse biological, chemical and disease information to help scientists make better-informed decisions. The Life Science Grid has been used to rapidly and effectively integrate scientific information in the pharmaceutical industry and has been placed in the open source community to foster collaboration in the life sciences community.
NASA Astrophysics Data System (ADS)
Voisin, N.; Macknick, J.; Fu, T.; O'Connell, M.; Zhou, T.; Brinkman, G.
2017-12-01
Water resources provide multiple critical services to the electrical grid through hydropower technologies, from generation to regulation of the electric grid (frequency, capacity reserve). Water resources can also represent vulnerabilities to the electric grid, as hydropower and thermo-electric facilities require water for operations. In the Western U.S., hydropower and thermo-electric plants that rely on fresh surface water represent 67% of the generating capacity. Prior studies have looked at the impact of change in water availability under future climate conditions on expected generating capacity in the Western U.S., but have not evaluated operational risks or changes resulting from climate. In this study, we systematically assess the impact of change in water availability and air temperatures on power operations, i.e. we take into account the different grid services that water resources can provide to the electric grid (generation, regulation) in the system-level context of inter-regional coordination through the electric transmission network. We leverage the Coupled Model Intercomparison Project Phase 5 (CMIP5) hydrology simulations under historical and future climate conditions, and use them to force the large-scale river-routing and water-management model MOSART-WM along with 2010-level sectoral water demands. Changes in monthly hydropower potential generation (including generation and reserves), as well as monthly generation capacity of thermo-electric plants, are derived for each power plant in the Western U.S. electric grid. We then utilize the PLEXOS electricity production cost model to optimize power system dispatch and cost decisions for the 2010 infrastructure under 100 years of historical and future (2050 horizon) hydroclimate conditions.
We use economic metrics as well as operational metrics such as generation portfolio, emissions, and reserve margins to assess the changes in power system operations between historical and future normal and extreme water availability conditions. We provide insight on how this information can be used to support resource adequacy and grid expansion studies over the Western U.S. in the context of inter-annual variability and climate change.
NASA Astrophysics Data System (ADS)
Riverman, K. L.; Anandakrishnan, S.; Alley, R. B.; Peters, L. E.; Christianson, K. A.; Muto, A.
2013-12-01
Northeast Greenland Ice Stream (NEGIS) is the largest ice stream in Greenland, draining approximately 8.4% of the ice sheet's area. The flow pattern and stability mechanism of this ice stream are distinct from those of other ice streams in Greenland and Antarctica, and merit further study to ascertain the sensitivity of this ice stream to future climate change. Geophysical methods are valuable tools for this application, but their results are sensitive to the structure of the firn and any spatial variations in firn properties across a given study region. Here we present firn data from a 40-km-long seismic profile across the upper reaches of NEGIS, collected in the summer of 2012 as part of an integrated ground-based geophysical survey. We find considerable variations in firn thickness that are coincident with the ice stream shear margins, where a thinner firn layer is present within the margins, and a thicker, more uniform firn layer is present elsewhere in our study region. Higher accumulation rates in the marginal surface troughs due to drift-snow trapping can account for some of this increased densification; however, our seismic results also highlight enhanced anisotropy within the firn and upper ice column that is confined to narrow bands within the shear margins. We thus interpret these large firn thickness variations and abrupt changes in anisotropy as indicators of firn densification dependent on the effective stress state as well as the overburden pressure, suggesting that the strain rate increases nonlinearly with stress across the shear margins. A GPS strain grid maintained for three weeks across both margins observed strong side shearing, with rapid stretching and then compression along particle paths, indicating large deviatoric stresses in the margins.
This work demonstrates the importance of developing a high-resolution firn densification model when conducting geophysical field work in regions possessing a complex ice flow history; it also motivates the need for a more detailed firn densification study along NEGIS to better understand the evolution of these abrupt structural variations within the firn.
NASA Astrophysics Data System (ADS)
Dossing, A.; Olesen, A. V.; Forsberg, R.
2010-12-01
Results of an 800 x 800 km aero-gravity and aeromagnetic survey (LOMGRAV) of the southern Lomonosov Ridge and surrounding area are presented. The survey was acquired by the Danish National Space Center, DTU in cooperation with National Resources Canada in spring 2009 as a net of ~NE-SW flight lines spaced 8-10 km apart. Nominal flight level was 2000 ft. We have compiled a detailed 2.5x2.5 km gravity anomaly grid based on the LOMGRAV data and existing data from the southern Arctic Ocean (NRL98/99) and the North Greenland continental margin (KMS98/99). The gravity grid reveals detailed, elongated high-low anomaly patterns over the Lomonosov Ridge which is interpreted as the presence of narrow ridges and subbasins. Distinct local topography is also interpreted over the southernmost part of the Lomonosov Ridge where existing bathymetry compilations suggest a smooth topography due to the lack of data. A new bathymetry model is presented for the region predicted by formalized inversion of the available gravity data. Finally, a detailed magnetic anomaly grid has been compiled from the LOMGRAV data and existing NRL98/99 and PMAP data. New tectonic features are revealed, particularly in the Amerasia Basin, compared with existing magnetic anomaly data from the region.
Analysis of Gene Regulatory Networks of Maize in Response to Nitrogen.
Jiang, Lu; Ball, Graham; Hodgman, Charlie; Coules, Anne; Zhao, Han; Lu, Chungui
2018-03-08
Nitrogen (N) fertilizer has a major influence on crop yield and quality. Understanding and optimising the response of crop plants to nitrogen fertilizer usage is of central importance in enhancing food security and agricultural sustainability. In this study, the analysis of gene regulatory networks reveals multiple genes and biological processes involved in the response to N. Two microarray studies have been used to infer components of the nitrogen-response network. Since they used different array technologies, a map linking the two probe sets to the maize B73 reference genome was generated to allow comparison. Putative Arabidopsis homologues of maize genes were used to query the Biological General Repository for Interaction Datasets (BioGRID) network, which yielded the potential involvement of three transcription factors (TFs) (GLK5, MADS64 and bZIP108) and a Calcium-dependent protein kinase. An Artificial Neural Network was used to identify influential genes and retrieved bZIP108 and WRKY36 as significant TFs in both microarray studies, along with genes for Asparagine Synthetase, a dual-specific protein kinase and a protein phosphatase. The output from one study also suggested roles for microRNA (miRNA) 399b and Nin-like Protein 15 (NLP15). Co-expression-network analysis of TFs with closely related profiles to known Nitrate-responsive genes identified GLK5, GLK8 and NLP15 as candidate regulators of genes repressed under low Nitrogen conditions, while bZIP108 might play a role in gene activation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
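A counter-based dynamic load-balancing scheme of the kind evaluated above can be sketched in a few lines: a shared counter hands out contingency indices so that idle workers always pull the next unprocessed case, letting fast workers take more cases than slow ones. The code is an illustrative stand-in (with `analyze` replacing a real contingency power-flow solve), not the paper's implementation.

```python
import threading

def run_contingencies(n_cases, n_workers, analyze):
    """Process n_cases contingency cases with n_workers threads using a
    counter-based dynamic load-balancing scheme."""
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                      # atomic fetch-and-increment
                case = counter["next"]
                if case >= n_cases:
                    return                  # all cases handed out
                counter["next"] += 1
            results[case] = analyze(case)   # heavy work done outside the lock

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because assignment is demand-driven rather than pre-partitioned, the scheme tolerates contingency cases with very uneven solve times, which is the motivation for dynamic balancing in massive contingency analysis.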
Magnetic Field Would Reduce Electron Backstreaming in Ion Thrusters
NASA Technical Reports Server (NTRS)
Foster, John E.
2003-01-01
The imposition of a magnetic field has been proposed as a means of reducing the electron backstreaming problem in ion thrusters. Electron backstreaming refers to the backflow of electrons into the ion thruster. Backstreaming electrons are accelerated by the large potential difference that exists between the ion-thruster acceleration electrodes, which otherwise accelerates positive ions out of the engine to develop thrust. The energetic beam formed by the backstreaming electrons can damage the discharge cathode, as well as other discharge surfaces upstream of the acceleration electrodes. The electron-backstreaming condition occurs when the center potential of the ion accelerator grid is no longer sufficiently negative to prevent electron diffusion back into the ion thruster. This typically occurs over extended periods of operation as accelerator-grid apertures enlarge due to erosion. As a result, ion thrusters are required to operate at increasingly negative accelerator-grid voltages in order to prevent electron backstreaming. These larger negative voltages give rise to higher accelerator-grid erosion rates, which in turn accelerate aperture enlargement. Electron backstreaming due to accelerator-grid hole enlargement has been identified as a failure mechanism that will limit ion-thruster service lifetime. The proposed method would make it possible to not only reduce the electron backstreaming current at and below the backstreaming voltage limit, but also reduce the backstreaming voltage limit itself. This reduction in the voltage at which electron backstreaming occurs provides operating margin and thereby reduces the magnitude of negative voltage that must be placed on the accelerator grid. Such a reduction lowers accelerator-grid erosion rates. The basic idea behind the proposed method is to impose a spatially uniform magnetic field downstream of the accelerator electrode that is oriented transverse to the thruster axis.
The magnetic field must be sufficiently strong to impede backstreaming electrons, but not so strong as to significantly perturb ion trajectories. An electromagnet or permanent magnetic circuit can be used to impose the transverse magnetic field downstream of the accelerator-grid electrode. For example, in the case of an accelerator grid containing straight, parallel rows of apertures, one can apply nearly uniform magnetic fields across all the apertures by the use of permanent magnets of alternating polarity connected to pole pieces laid out parallel to the rows, as shown in the left part of the figure. For low-temperature operation, the pole pieces can be replaced with bar magnets of alternating polarity. Alternatively, for the same accelerator grid, one could use an electromagnet in the form of current-carrying rods laid out parallel to the rows.
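The trade-off stated above (strong enough to impede electrons, weak enough to leave ions nearly unperturbed) can be illustrated by comparing gyroradii, r = mv/(qB). This is a sketch only: the 0.01 T field, the 30 eV electron energy, and the 1 keV xenon-ion energy are illustrative assumptions, not values from the record.

```python
import math

def larmor_radius_m(mass_kg, charge_c, energy_ev, b_tesla):
    """Gyroradius r = m*v / (|q|*B) for a particle of the given kinetic energy."""
    v = math.sqrt(2.0 * energy_ev * 1.602e-19 / mass_kg)  # speed from KE
    return mass_kg * v / (charge_c * b_tesla)

M_E = 9.109e-31   # electron mass, kg
M_XE = 2.18e-25   # xenon ion mass, kg (a common ion-thruster propellant)
Q = 1.602e-19     # elementary charge, C

B = 0.01  # assumed transverse field strength, T (illustrative)
r_e = larmor_radius_m(M_E, Q, 30.0, B)      # ~30 eV backstreaming electron (assumed)
r_i = larmor_radius_m(M_XE, Q, 1000.0, B)   # ~1 keV beam ion (assumed)

# The electron gyroradius (~mm) is far smaller than the ion gyroradius (~m):
# electrons are magnetized and turned back, ion trajectories barely bend.
print(r_e, r_i, r_i / r_e)
```

With these numbers the electron gyroradius is on the order of millimeters while the ion gyroradius is meters, which is the regime the proposed method relies on.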
Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves
NASA Astrophysics Data System (ADS)
Wakeford, H. R.; Sing, D. K.; Evans, T.; Deming, D.; Mandell, A.
2016-03-01
Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 μm probe primarily the H2O absorption band at 1.4 μm, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts δλ(λ) best describe the associated systematic in the spectroscopic light curves for most targets while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
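The weighting scheme described above (AIC as an approximation to the log-evidence, evidence-based weights combining the parameter estimates from every systematic model) can be sketched as follows. The function name and the toy AIC and Rp/R* values are illustrative; this is not the authors' code.

```python
import numpy as np

def marginalize(aic, params, param_err):
    """Marginalize a transit parameter over a grid of systematic models.

    Approximates the log-evidence of model q as E_q = -AIC_q / 2, forms
    normalized weights W_q, and combines each model's parameter estimate.
    The marginalized variance includes both fit error and model scatter.
    """
    aic = np.asarray(aic, float)
    log_evidence = -0.5 * aic
    w = np.exp(log_evidence - log_evidence.max())  # subtract max for stability
    w /= w.sum()
    p = np.asarray(params, float)
    e = np.asarray(param_err, float)
    p_marg = np.sum(w * p)
    var = np.sum(w * (e**2 + (p - p_marg) ** 2))
    return p_marg, np.sqrt(var), w

# Toy grid of three systematic models fit to the same light curve (values assumed)
p, s, w = marginalize(aic=[102.3, 100.1, 110.7],
                      params=[0.1210, 0.1205, 0.1232],
                      param_err=[0.0004, 0.0003, 0.0006])
```

The lowest-AIC model dominates the weights, but every tested model still contributes, which is the point of marginalizing rather than picking a single "best" systematic model.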
Importing MAGE-ML format microarray data into BioConductor.
Durinck, Steffen; Allemeersch, Joke; Carey, Vincent J; Moreau, Yves; De Moor, Bart
2004-12-12
The microarray gene expression markup language (MAGE-ML) is a widely used XML (eXtensible Markup Language) standard for describing and exchanging information about microarray experiments. It can describe microarray designs, microarray experiment designs, gene expression data and data analysis results. We describe RMAGEML, a new Bioconductor package that provides a link between cDNA microarray data stored in MAGE-ML format and the Bioconductor framework for preprocessing, visualization and analysis of microarray experiments. http://www.bioconductor.org. Open Source.
NASA Astrophysics Data System (ADS)
Hitaj, Claudia
In this dissertation, I analyze the drivers of wind power development in the United States as well as the relationship between renewable power plant location and transmission congestion and emissions levels. I first examine the role of government renewable energy incentives and access to the electricity grid on investment in wind power plants across counties from 1998-2007. The results indicate that the federal production tax credit, state-level sales tax credit and production incentives play an important role in promoting wind power. In addition, higher wind power penetration levels can be achieved by bringing more parts of the electricity transmission grid under independent system operator regulation. I conclude that state and federal government policies play a significant role in wind power development both by providing financial support and by improving physical and procedural access to the electricity grid. Second, I examine the effect of renewable power plant location on electricity transmission congestion levels and system-wide emissions levels in a theoretical model and a simulation study. A new renewable plant takes the effect of congestion on its own output into account, but ignores the effect of its marginal contribution to congestion on output from existing plants, which results in curtailment of renewable power. Though pricing congestion removes the externality and reduces curtailment, I find that in the absence of a price on emissions, pricing congestion may in some cases actually increase system-wide emissions. The final part of my dissertation deals with an econometric issue that emerged from the empirical analysis of the drivers of wind power. I study the effect of the degree of censoring on random-effects Tobit estimates in finite sample with a particular focus on severe censoring, when the percentage of uncensored observations reaches 1 to 5 percent. 
The results show that the Tobit model performs well even at 5 percent uncensored observations with the bias in the Tobit estimates remaining at or below 5 percent. Under severe censoring (1 percent uncensored observations), large biases appear in the estimated standard errors and marginal effects. These are generally reduced as the sample size increases in both N and T.
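The left-censored model behind these simulations can be sketched with a minimal Tobit log-likelihood. This pooled version omits the random effects of the actual study; the simulated data (true beta = 1, sigma = 1) are illustrative only.

```python
import math
import random

def tobit_loglik(beta, sigma, xs, ys):
    """Log-likelihood of a Tobit model left-censored at zero:
    y = max(0, beta*x + e), with e ~ N(0, sigma^2)."""
    ll = 0.0
    for x, y in zip(xs, ys):
        mu = beta * x
        if y > 0.0:
            # uncensored observation: Gaussian density of the latent variable
            z = (y - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
        else:
            # censored observation: probability mass at or below zero,
            # P(y* <= 0) = Phi(-mu/sigma) = 0.5 * erfc(mu / (sigma*sqrt(2)))
            ll += math.log(0.5 * math.erfc(mu / (sigma * math.sqrt(2.0))))
    return ll

# Simulate a censored sample
random.seed(0)
xs = [random.uniform(-2.0, 2.0) for _ in range(300)]
ys = [max(0.0, x + random.gauss(0.0, 1.0)) for x in xs]
frac_uncensored = sum(y > 0.0 for y in ys) / len(ys)
```

Varying the intercept of the latent equation drives `frac_uncensored` toward the 1-5 percent regime of severe censoring studied in the dissertation.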
Dai, Yilin; Guo, Ling; Li, Meng; Chen, Yi-Bu
2012-06-08
Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of the R language. Among the few existing software programs that offer a graphical user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issues of microarray data analysis arising from the well-known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn the R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Coupled with a well-designed user interface, Microarray Я US leverages cutting-edge Bioconductor packages for researchers with no knowledge of the R language. It also enables a more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.
A digital repository with an extensible data model for biobanking and genomic analysis management.
Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi
2014-01-01
Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations with increasing data sharing among institutions. A single standard is not feasible, so it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing.
Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can substantially improve interdisciplinary research collaboration and allows tracking of patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid.
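The two-entity model (processes grouping sequential events, each event carrying user-defined metadata and optional files) can be illustrated with a hypothetical JSON instance. The field names and values below are invented for illustration and are not the repository's actual schema.

```python
import json

# Hypothetical instance: one "process" (a study) groups two "events"
# (analysis steps); metadata keys are user-defined, files are optional.
process = {
    "type": "process",
    "id": "study-001",
    "description": "example gene expression study",
    "events": [
        {
            "type": "event",
            "id": "evt-1",
            "operation": "sample-collection",
            "metadata": {"tissue": "bone marrow", "patient": "P-0042"},
            "files": [],
        },
        {
            "type": "event",
            "id": "evt-2",
            "operation": "microarray-analysis",
            "metadata": {"platform": "expression microarray", "replicates": 2},
            "files": ["grid://biobank/arrays/evt-2.cel"],
        },
    ],
}

# Serialize for storage and parse back, as a repository backend would.
doc = json.dumps(process, indent=2)
roundtrip = json.loads(doc)
```

Because the metadata block is an open dictionary, operators can introduce new fields per event without a schema migration, which is the flexibility the abstract emphasizes.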
NASA Astrophysics Data System (ADS)
Kennedy, Stephanie; Caldwell, Matthew; Bydlon, Torre; Mulvey, Christine; Mueller, Jenna; Wilke, Lee; Barry, William; Ramanujam, Nimmi; Geradts, Joseph
2016-06-01
Optical spectroscopy is sensitive to morphological composition and has potential applications in intraoperative margin assessment. Here, we evaluate ex vivo breast tissue and corresponding quantified hematoxylin & eosin images to correlate optical scattering signatures to tissue composition stratified by patient characteristics. Adipose sites (213) were characterized by their cell area and density. All other benign and malignant sites (181) were quantified using a grid method to determine composition. The relationships between the mean reduced scattering coefficient ⟨μs′⟩ and % adipose, % collagen, % glands, adipocyte cell area, and adipocyte density were investigated. These relationships were further stratified by age, menopausal status, body mass index (BMI), and breast density. We identified a positive correlation between ⟨μs′⟩ and % collagen and a negative correlation between ⟨μs′⟩ and age and BMI. Increased collagen corresponded to increased ⟨μs′⟩ variability. In postmenopausal women, ⟨μs′⟩ was similar regardless of fibroglandular content. Contributions from collagen and glands to ⟨μs′⟩ were independent and equivalent in benign sites; glands showed a stronger positive correlation than collagen to ⟨μs′⟩ in malignant sites. Our data suggest that scattering could differentiate highly scattering malignant from benign tissues in postmenopausal women. The relationship between scattering and tissue composition will support improved scattering models and technologies to enhance intraoperative optical margin assessment.
Experimental validation of the van Herk margin formula for lung radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecclestone, Gillian; Heath, Emily; Bissonnette, Jean-Pierre
2013-11-15
Purpose: To validate the van Herk margin formula for lung radiation therapy using realistic dose calculation algorithms and respiratory motion modeling. The robustness of the margin formula against variations in lesion size, peak-to-peak motion amplitude, tissue density, treatment technique, and plan conformity was assessed, along with the margin formula assumption of a homogeneous dose distribution with perfect plan conformity. Methods: 3DCRT and IMRT lung treatment plans were generated within the ORBIT treatment planning platform (RaySearch Laboratories, Sweden) on 4DCT datasets of virtual phantoms. Random and systematic respiratory motion induced errors were simulated using deformable registration and dose accumulation tools available within ORBIT for simulated cases of varying lesion sizes, peak-to-peak motion amplitudes, tissue densities, and plan conformities. A detailed comparison between the margin formula dose profile model, the planned dose profiles, and penumbra widths was also conducted to test the assumptions of the margin formula. Finally, a correction to account for imperfect plan conformity was tested as well as a novel application of the margin formula that accounts for the patient-specific motion trajectory. Results: The van Herk margin formula ensured full clinical target volume coverage for all 3DCRT and IMRT plans of all conformities with the exception of small lesions in soft tissue. No dosimetric trends with respect to plan technique or lesion size were observed for the systematic and random error simulations. However, accumulated plans showed that plan conformity decreased with increasing tumor motion amplitude. When comparing dose profiles assumed in the margin formula model to the treatment plans, discrepancies in the low dose regions were observed for the random and systematic error simulations.
However, the margin formula respected, in all experiments, the 95% dose coverage required for planning target volume (PTV) margin derivation, as defined by the ICRU; thus, suitable PTV margins were estimated. The penumbra widths calculated in lung tissue for each plan were found to be very similar to the 6.4 mm value assumed by the margin formula model. The plan conformity correction yielded inconsistent results which were largely affected by image and dose grid resolution, while the trajectory modified PTV plans yielded a dosimetric benefit over the standard internal target volumes approach with up to a 5% decrease in the V20 value. Conclusions: The margin formula showed to be robust against variations in tumor size and motion, treatment technique, plan conformity, as well as low tissue density. This was validated by maintaining coverage of all of the derived PTVs by the 95% dose level, as required by the formal definition of the PTV. However, the assumption of perfect plan conformity in the margin formula derivation yields conservative margin estimation. Future modifications to the margin formula will require a correction for plan conformity. Plan conformity can also be improved by using the proposed trajectory modified PTV planning approach. This proves especially beneficial for tumors with a large anterior-posterior component of respiratory motion.
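The margin recipe being validated is commonly written M = 2.5Σ + 1.64(√(σ² + σp²) − σp), where Σ is the standard deviation of the systematic errors, σ that of the random errors, and σp a penumbra parameter. A minimal sketch follows, assuming the commonly cited σp = 3.2 mm; the example error values are illustrative, not taken from the study.

```python
import math

def van_herk_margin_mm(sigma_sys_mm, sigma_rand_mm, sigma_penumbra_mm=3.2):
    """CTV-to-PTV margin (mm) for a minimum CTV dose of 95% in ~90% of
    patients: M = 2.5*Sigma + 1.64*(sqrt(sigma^2 + sigma_p^2) - sigma_p).
    Random errors and penumbra add in quadrature because both blur the
    dose; systematic errors shift it, hence the larger 2.5 factor.
    """
    blurred = math.sqrt(sigma_rand_mm**2 + sigma_penumbra_mm**2)
    return 2.5 * sigma_sys_mm + 1.64 * (blurred - sigma_penumbra_mm)

# Example: 2 mm systematic and 3 mm random error SDs (illustrative)
m = van_herk_margin_mm(2.0, 3.0)
```

Note that the random-error term grows slowly when σ is small compared to σp, which is why systematic errors dominate margin size in practice.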
Huerta, Mario; Munyi, Marc; Expósito, David; Querol, Enric; Cedano, Juan
2014-06-15
The number of microarrays performed by scientific teams grows exponentially. These microarray data could be useful for researchers around the world, but unfortunately they are underused. To fully exploit these data, it is necessary (i) to extract these data from a repository of high-throughput gene expression data like Gene Expression Omnibus (GEO) and (ii) to make the data from different microarrays comparable with tools that are easy for scientists to use. We have developed these two solutions in our server, implementing a database of microarray marker genes (Marker Genes Data Base). This database contains the marker genes of all GEO microarray datasets and it is updated monthly with the new microarrays from GEO. Thus, researchers can see whether the marker genes of their microarray are marker genes in other microarrays in the database, expanding the analysis of their microarray to the rest of the public microarrays. This solution helps not only to corroborate the conclusions regarding a researcher's microarray but also to identify the phenotype of different subsets of individuals under investigation, to frame the results with microarray experiments from other species, pathologies or tissues, to search for drugs that promote the transition between the studied phenotypes, to detect undesirable side effects of the treatment applied, etc. Thus, the researcher can quickly add relevant information to his/her studies from all of the previous analyses performed in other studies as long as they have been deposited in public repositories. Marker-gene database tool: http://ibb.uab.es/mgdb © The Author 2014. Published by Oxford University Press.
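The cross-dataset comparison described above can be sketched as a simple set overlap between marker-gene lists. The function and the gene symbols are illustrative; this is not the server's actual API.

```python
def marker_overlap(markers_a, markers_b):
    """Compare the marker-gene sets of two microarray datasets.

    Returns the shared genes and the Jaccard index |A ∩ B| / |A ∪ B|,
    the kind of comparison that lets a researcher check whether their
    dataset's markers recur in other public datasets.
    """
    a, b = set(markers_a), set(markers_b)
    shared = a & b
    jaccard = len(shared) / len(a | b) if (a or b) else 0.0
    return sorted(shared), jaccard

# Hypothetical marker lists from two datasets (gene symbols assumed)
shared, j = marker_overlap(["TP53", "MYC", "EGFR", "KRAS"],
                           ["MYC", "EGFR", "BRCA1"])
```

Run against a monthly-updated database of marker genes per dataset, repeating this comparison over all datasets ranks the public experiments most similar to the researcher's own.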
2008 Microarray Research Group (MARG Survey): Sensing the State of Microarray Technology
Over the past several years, the field of microarrays has grown and evolved drastically. In its continued efforts to track this evolution and transformation, the ABRF-MARG has once again conducted a survey of international microarray facilities and individual microarray users. Th...
THE ABRF-MARG MICROARRAY SURVEY 2004: TAKING THE PULSE OF THE MICROARRAY FIELD
Over the past several years, the field of microarrays has grown and evolved drastically. In its continued efforts to track this evolution, the ABRF-MARG has once again conducted a survey of international microarray facilities and individual microarray users. The goal of the surve...
Contributions to Statistical Problems Related to Microarray Data
ERIC Educational Resources Information Center
Hong, Feng
2009-01-01
Microarray is a high-throughput technology to measure gene expression. Analysis of microarray data brings many interesting and challenging problems. This thesis consists of three studies related to microarray data. First, we propose a Bayesian model for microarray data and use Bayes Factors to identify differentially expressed genes. Second, we…
NASA Astrophysics Data System (ADS)
Bogdanov, Valery L.; Boyce-Jacino, Michael
1999-05-01
Confined arrays of biochemical probes deposited on a solid support surface (an analytical microarray or 'chip') provide an opportunity to analyze multiple reactions simultaneously. Microarrays are increasingly used in genetics, medicine and environmental scanning as research and analytical instruments. The power of microarray technology comes from its parallelism, which grows with array miniaturization, minimization of reagent volume per reaction site and reaction multiplexing. An optical detector of microarray signals should combine high sensitivity with spatial and spectral resolution. Additionally, low cost and a high processing rate are needed to transfer microarray technology into biomedical practice. We designed an imager that provides confocal and complete-spectrum detection of an entire fluorescently labeled microarray in parallel. The imager uses a microlens array, a non-slit spectral decomposer, and a high-sensitivity detector (cooled CCD). Two imaging channels provide simultaneous detection of localization, integrated intensity and spectral intensity for each reaction site in the microarray. A dimensional matching between the microarray and the imager's optics eliminates all moving parts in the instrumentation, enabling highly informative, fast and low-cost microarray detection. We report the theory of confocal hyperspectral imaging with a microlens array and experimental data for the implementation of the developed imager to detect fluorescently labeled microarrays with a density of approximately 10³ sites per cm².
Chemiluminescence microarrays in analytical chemistry: a critical review.
Seidel, Michael; Niessner, Reinhard
2014-09-01
Multi-analyte immunoassays on microarrays and on multiplex DNA microarrays have been described for quantitative analysis of small organic molecules (e.g., antibiotics, drugs of abuse, small molecule toxins), proteins (e.g., antibodies or protein toxins), and microorganisms, viruses, and eukaryotic cells. In analytical chemistry, multi-analyte detection by use of analytical microarrays has become an innovative research topic because of the possibility of generating several sets of quantitative data for different analyte classes in a short time. Chemiluminescence (CL) microarrays are powerful tools for rapid multiplex analysis of complex matrices. A wide range of applications for CL microarrays is described in the literature dealing with analytical microarrays. The motivation for this review is to summarize the current state of CL-based analytical microarrays. Combining analysis of different compound classes on CL microarrays reduces analysis time, cost of reagents, and use of laboratory space. Applications are discussed, with examples from food safety, water safety, environmental monitoring, diagnostics, forensics, toxicology, and biosecurity. The potential and limitations of research on multiplex analysis by use of CL microarrays are discussed in this review.
A refined age grid for the Central North Atlantic
NASA Astrophysics Data System (ADS)
Luis, J. M.; Miranda, J.
2012-12-01
We present a digital model for the age of the Central North Atlantic as a geographical grid with 1 arc minute resolution. Our seafloor isochrons are identified following the 'grid procedure' described in the work of Luis and Miranda (2008). The grid itself, which was initially a locally improved version of the Verhoef et al. (1996) compilation, was improved in 2011 (Luis and Miranda, 2011) and further refined with the inclusion of Russian data north of Charlie Gibbs FZ (personal communication, S. Mercuriev). The location and geometry of the Mid-Atlantic Ridge is now very well constrained by both magnetic anomalies and swath bathymetry data down to ~10 degrees N. We identified an extensive set of chrons 0, 2A, 3, 3A, 4, 4A, 5, 6, 6C, 11-12, 13, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 32, 33r, M0, M2, M4, M10, M16, M21 and M25. The ages at each grid node are computed by linear interpolation of adjacent isochrons along the direction of the flow-lines. As a pre-processing step, each conjugate pair of isochrons was simplified by rotating one of them about the finite pole of that anomaly and using both the original picks and the rotated ones to calculate an average segment. Fracture zones are used to constrain the chrons' shapes. These procedures minimize the uncertainties in locations where one side of the basin has good identifications but the other is poorly defined, as is typical of many of the old isochrons. Care has also been taken to account for locations where significant ridge jumps were found. Ages of the ocean floor between the oldest identified magnetic anomalies and continental crust are interpolated using the oldest ages of the Muller et al. (2008) grid, which were themselves estimated from the ages of passive continental margin segments. This is a contribution to the MAREKH project (PTDC/MAR/108142/2008) funded by the Portuguese Science Foundation.
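The node-age computation described above (linear interpolation between adjacent dated isochrons along a flow line) can be sketched as follows. The distances from the ridge axis are invented for illustration; the chron ages are approximate standard values, not the study's calibration.

```python
def age_along_flowline(distance_km, isochron_dist_km, isochron_age_ma):
    """Linearly interpolate crustal age between the two dated isochrons
    that bracket a grid node along its flow line. Inputs are the
    distances (from the ridge axis) where each isochron crosses the
    flow line, with the corresponding chron ages in Ma."""
    pairs = sorted(zip(isochron_dist_km, isochron_age_ma))
    for (d0, a0), (d1, a1) in zip(pairs, pairs[1:]):
        if d0 <= distance_km <= d1:
            t = (distance_km - d0) / (d1 - d0)
            return a0 + t * (a1 - a0)
    raise ValueError("node lies outside the dated isochron interval")

# Chron 0 at the axis, chrons 2A and 5 outboard (~2.58 and ~9.74 Ma;
# crossing distances are illustrative)
age = age_along_flowline(60.0, [0.0, 40.0, 160.0], [0.0, 2.58, 9.74])
```

Nodes beyond the oldest identified isochron fall through to the error branch, which corresponds to the regions the abstract says are filled from the Muller et al. (2008) ages instead.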
Analysis of High-Throughput ELISA Microarray Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Zangar, Richard C.
Our research group develops analytical methods and software for the high-throughput analysis of quantitative enzyme-linked immunosorbent assay (ELISA) microarrays. ELISA microarrays differ from DNA microarrays in several fundamental aspects and most algorithms for analysis of DNA microarray data are not applicable to ELISA microarrays. In this review, we provide an overview of the steps involved in ELISA microarray data analysis and how the statistically sound algorithms we have developed provide an integrated software suite to address the needs of each data-processing step. The algorithms discussed are available in a set of open-source software tools (http://www.pnl.gov/statistics/ProMAT).
Modeling the near-ultraviolet band of GK stars. III. Dependence on abundance pattern
DOE Office of Scientific and Technical Information (OSTI.GOV)
Short, C. Ian; Campbell, Eamonn A., E-mail: ishort@ap.smu.ca
2013-06-01
We extend the grid of non-LTE (NLTE) models presented in Paper II to explore variations in abundance pattern in two ways: (1) the adoption of the Asplund et al. (GASS10) abundances, (2) for stars of metallicity, [M/H], of -0.5, the adoption of a non-solar enhancement of α-elements by +0.3 dex. Moreover, our grid of synthetic spectral energy distributions (SEDs) is interpolated to a finer numerical resolution in both T_eff (ΔT_eff = 25 K) and log g (Δlog g = 0.25). We compare the values of T_eff and log g inferred from fitting LTE and NLTE SEDs to observed SEDs throughout the entire visible band, and in an ad hoc 'blue' band. We compare our spectrophotometrically derived T_eff values to a variety of T_eff calibrations, including more empirical ones, drawn from the literature. For stars of solar metallicity, we find that the adoption of the GASS10 abundances lowers the inferred T_eff value by 25-50 K for late-type giants, and NLTE models computed with the GASS10 abundances give T_eff results that are marginally in better agreement with other T_eff calibrations. For stars of [M/H] = -0.5 there is marginal evidence that adoption of α-enhancement further lowers the derived T_eff value by 50 K. Stellar parameters inferred from fitting NLTE models to SEDs are more dependent than LTE models on the wavelength region being fitted, and we find that the effect depends on how heavily line blanketed the fitting region is, whether the fitting region is to the blue of the Wien peak of the star's SED, or both.
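Densifying a model grid to finer steps in T_eff and log g, as described above, amounts to interpolating model SEDs between grid nodes. A minimal bilinear sketch follows; it handles interior points only, and the tiny synthetic grid (flux linear in both parameters, single wavelength sample) is an assumption for illustration.

```python
import numpy as np

def interp_sed_grid(teff_nodes, logg_nodes, seds, teff, logg):
    """Bilinearly interpolate model SEDs to an off-node (Teff, log g).

    `seds` has shape (n_teff, n_logg, n_wavelength). The target point
    must lie strictly inside the node ranges (no boundary handling here).
    """
    i = np.searchsorted(teff_nodes, teff) - 1   # lower bracketing Teff node
    j = np.searchsorted(logg_nodes, logg) - 1   # lower bracketing log g node
    t = (teff - teff_nodes[i]) / (teff_nodes[i + 1] - teff_nodes[i])
    u = (logg - logg_nodes[j]) / (logg_nodes[j + 1] - logg_nodes[j])
    return ((1 - t) * (1 - u) * seds[i, j] + t * (1 - u) * seds[i + 1, j]
            + (1 - t) * u * seds[i, j + 1] + t * u * seds[i + 1, j + 1])

# Synthetic 2x2 grid: flux varies linearly with Teff and log g, so the
# midpoint interpolation is exact.
teff_nodes = np.array([4000.0, 4100.0])
logg_nodes = np.array([1.0, 1.5])
seds = np.array([[[1.0], [2.0]],
                 [[3.0], [4.0]]])
mid = interp_sed_grid(teff_nodes, logg_nodes, seds, 4050.0, 1.25)
```

Real model fluxes are not linear in T_eff, so a production refinement to 25 K / 0.25 dex steps would interpolate between sufficiently close nodes for the linear approximation to hold.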
NASA Astrophysics Data System (ADS)
Greene, John A.; Tominaga, Masako; Miller, Nathaniel C.; Hutchinson, Deborah R.; Karl, Matthew R.
2017-11-01
To investigate the oceanic lithosphere formation and early seafloor spreading history of the North Atlantic Ocean, we examine multiscale magnetic anomaly data from the Jurassic/Early Cretaceous age Eastern North American Margin (ENAM) between 31 and 40°N. We integrate newly acquired sea surface magnetic anomaly and seismic reflection data with publicly available aeromagnetic and composite magnetic anomaly grids, satellite-derived gravity anomaly, and satellite-derived and shipboard bathymetry data. We evaluate these data sets to (1) refine magnetic anomaly correlations throughout the ENAM and assign updated ages and chron numbers to M0-M25 and eight pre-M25 anomalies; (2) identify five correlatable magnetic anomalies between the East Coast Magnetic Anomaly (ECMA) and Blake Spur Magnetic Anomaly (BSMA), which may document the earliest Atlantic seafloor spreading or synrift magmatism; (3) suggest preexisting margin structure and rifting segmentation may have influenced the seafloor spreading regimes in the Atlantic Jurassic Quiet Zone (JQZ); (4) suggest that, if the BSMA source is oceanic crust, the BSMA may be M series magnetic anomaly M42 (~168.5 Ma); (5) examine the along and across margin variation in seafloor spreading rates and spreading center orientations from the BSMA to M25, suggesting asymmetric crustal accretion accommodated the straightening of the ridge from the bend in the ECMA to the more linear M25; and (6) observe anomalously high-amplitude magnetic anomalies near the Hudson Fan, which may be related to a short-lived propagating rift segment that could have helped accommodate the crustal alignment during the early Atlantic opening.
Ananth, D V N; Nagesh Kumar, G V
2016-05-01
With the increase in electric power demand, transmission lines are forced to operate close to their full load, and drastic changes in weather conditions push thermal limits, leaving the system with a smaller security margin. To meet the increased power demand, a doubly fed induction generator (DFIG) based wind generation system is a good alternative. A STATCOM can be adopted to improve power flow capability and increase security. Under modern grid codes, the DFIG must operate without losing synchronism during severe grid faults, a capability called low voltage ride through (LVRT). Hence, an enhanced field oriented control (EFOC) technique is adopted in the rotor side converter of the DFIG to improve power transfer and to improve dynamic and transient stability. A STATCOM is coordinated with the system to obtain better stability and enhanced operation during grid faults. In the EFOC technique, the rotor flux reference changes from synchronous speed to zero during the fault so that current is injected at the rotor slip frequency, and the DC-offset component of the flux decomposition is controlled during symmetrical and asymmetrical faults. The offset decomposition of flux is oscillatory in conventional field oriented control, whereas EFOC aims to damp it quickly. This paper mitigates voltage dips and limits surge currents to enhance the operation of the DFIG during symmetrical and asymmetrical faults. System performance was compared with and without a STATCOM at the point of common coupling for single line to ground, double line to ground and triple line to ground faults, with a very small fault resistance of 0.001 Ω. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Intra-Platform Repeatability and Inter-Platform Comparability of MicroRNA Microarray Technology
Sato, Fumiaki; Tsuchiya, Soken; Terasawa, Kazuya; Tsujimoto, Gozoh
2009-01-01
Over the last decade, DNA microarray technology has made a great contribution to the life sciences. The MicroArray Quality Control (MAQC) project demonstrated how to analyze expression microarrays. Recently, microarray technology has been utilized for comprehensive microRNA expression profiling. Currently, several platforms of microRNA microarray chips are commercially available. Thus, we compared the repeatability and comparability of five different microRNA microarray platforms (Agilent, Ambion, Exiqon, Invitrogen and Toray) using 309 microRNA probes, and the TaqMan microRNA system using 142 microRNA probes. This study demonstrated that microRNA microarrays have high intra-platform repeatability and good comparability to quantitative RT-PCR of microRNAs. Among the five platforms, the Agilent and Toray arrays showed relatively better performance than the others. However, the current lineup of commercially available microRNA microarray systems fails to show good inter-platform concordance, probably because of the lack of an adequate normalization method and severe divergence in the stringency of detection call criteria between different platforms. This study provides basic information about the performance of, and the problems specific to, current microRNA microarray systems. PMID:19436744
Living Cell Microarrays: An Overview of Concepts
Jonczyk, Rebecca; Kurth, Tracy; Lavrentieva, Antonina; Walter, Johanna-Gabriela; Scheper, Thomas; Stahl, Frank
2016-01-01
Living cell microarrays are a highly efficient cellular screening system. Due to the low number of cells required per spot, cell microarrays enable the use of primary and stem cells and provide resolution close to the single-cell level. Apart from a variety of conventional static designs, microfluidic microarray systems have also been established. An alternative format is a microarray consisting of three-dimensional cell constructs ranging from cell spheroids to cells encapsulated in hydrogel. These systems provide an in vivo-like microenvironment and are preferably used for the investigation of cellular physiology, cytotoxicity, and drug screening. Thus, many different high-tech microarray platforms are currently available. Disadvantages of many systems include their high cost, the requirement of specialized equipment for their manufacture, and the poor comparability of results between different platforms. In this article, we provide an overview of static, microfluidic, and 3D cell microarrays. In addition, we describe a simple method for the printing of living cell microarrays on modified microscope glass slides using standard DNA microarray equipment available in most laboratories. Applications in research and diagnostics are discussed, e.g., the selective and sensitive detection of biomarkers. Finally, we highlight current limitations and the future prospects of living cell microarrays. PMID:27600077
ELISA-BASE: An Integrated Bioinformatics Tool for Analyzing and Tracking ELISA Microarray Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Collett, James L.; Seurynck-Servoss, Shannon L.
ELISA-BASE is an open-source database for capturing, organizing and analyzing protein enzyme-linked immunosorbent assay (ELISA) microarray data. ELISA-BASE is an extension of the BioArray Software Environment (BASE) database system, which was developed for DNA microarrays. In order to make BASE suitable for protein microarray experiments, we developed several plugins for importing and analyzing quantitative ELISA microarray data. Most notably, our Protein Microarray Analysis Tool (ProMAT) for processing quantitative ELISA data is now available as a plugin to the database.
Thermodynamically optimal whole-genome tiling microarray design and validation.
Cho, Hyejin; Chou, Hui-Hsien
2016-06-13
Microarrays are an efficient apparatus for interrogating the whole transcriptome of a species. Microarrays can be designed according to annotated gene sets, but the resulting microarrays cannot be used to identify novel transcripts, and this design method is not applicable to unannotated species. Alternatively, a whole-genome tiling microarray can be designed using only genomic sequences without gene annotations, and it can be used to detect novel RNA transcripts as well as known genes. The difficulty with tiling microarray design lies in the tradeoff between probe specificity and coverage of the genome. Sequence comparison methods based on BLAST or similar software are commonly employed in microarray design, but they cannot precisely determine the subtle thermodynamic competition between probe targets and partially matched probe nontargets during hybridization. Using the whole-genome thermodynamic analysis software PICKY to design tiling microarrays, we can achieve the maximum whole-genome coverage allowable under the thermodynamic constraints of each target genome. The resulting tiling microarrays are thermodynamically optimal in the sense that all selected probes share the same melting temperature separation range between their targets and closest nontargets, and no additional probes can be added without violating the specificity of the microarray to the target genome. This new design method was used to create two whole-genome tiling microarrays for Escherichia coli MG1655 and Agrobacterium tumefaciens C58, and the experimental results validated the design.
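The melting-temperature separation criterion can be sketched as a filter plus a greedy tiling pass. This is a conceptual toy, not PICKY's algorithm: the probe positions, the assumed 50-mer probe length, and all Tm values below are invented for illustration.

```python
# Keep a candidate probe only if its Tm against the intended target exceeds
# its Tm against the closest nontarget by a minimum separation, then tile
# the genome greedily with non-overlapping survivors.
def select_tiling_probes(candidates, min_separation=10.0, probe_len=50):
    """candidates: list of (start_pos, target_tm, closest_nontarget_tm)."""
    selected, last_end = [], -1
    for start, target_tm, nontarget_tm in sorted(candidates):
        if target_tm - nontarget_tm >= min_separation and start > last_end:
            selected.append(start)
            last_end = start + probe_len
    return selected

# invented candidates: (position, target Tm, closest-nontarget Tm) in degrees C
probes = [(0, 75.0, 60.0), (30, 74.0, 70.0), (60, 76.0, 55.0), (120, 73.0, 62.0)]
print(select_tiling_probes(probes))  # [0, 60, 120]; the probe at 30 fails the Tm gap
```

A real design would compute the Tm values with a nearest-neighbor thermodynamic model over the whole genome, which is the hard part the abstract attributes to PICKY.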
Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-01-01
In this paper, we present a graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced model complexity. PMID:24717540
Optimal Load-Side Control for Frequency Regulation in Smart Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Changhong; Mallada, Enrique; Low, Steven
Frequency control rebalances supply and demand while maintaining the network state within operational margins. It is implemented using fast ramping reserves that are expensive and wasteful, and which are expected to become increasingly necessary with the current acceleration of renewable penetration. The most promising solution to this problem is the use of demand response, i.e., load participation in frequency control. Yet it is still unclear how to efficiently integrate load participation without introducing instabilities and violating operational constraints. In this paper, we present a comprehensive load-side frequency control mechanism that can maintain the grid within operational constraints. In particular, our controllers can rebalance supply and demand after disturbances, restore the frequency to its nominal value, and preserve interarea power flows. Furthermore, our controllers are distributed (unlike the currently implemented frequency control), can allocate load updates optimally, and can maintain line flows within thermal limits. We prove that such a distributed load-side control is globally asymptotically stable and robust to unknown load parameters. We illustrate its effectiveness through simulations.
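The rebalancing idea, loads adjusting their consumption until frequency returns to nominal, can be illustrated with a toy single-bus simulation. The swing-equation parameters and the simple integral control law below are assumptions for illustration only; they are not the distributed controllers the paper proves stable.

```python
# Toy single-bus sketch of load-side frequency control: after a loss of
# generation, controllable loads integrate the frequency error and shed
# demand until the frequency deviation is driven back to zero.
M, D = 10.0, 1.0      # inertia and damping constants (assumed)
k_i = 2.0             # integral gain of the aggregate load controller (assumed)
dt, steps = 0.01, 5000

freq_dev = 0.0        # frequency deviation from nominal (p.u.)
load_ctrl = 0.0       # controllable-load reduction (p.u.)
disturbance = 1.0     # sudden loss of 1 p.u. of generation

for _ in range(steps):
    # swing dynamics: M * d(freq_dev)/dt = -disturbance + load_ctrl - D * freq_dev
    freq_dev += dt / M * (-disturbance + load_ctrl - D * freq_dev)
    # loads integrate the frequency error, shedding demand until it vanishes
    load_ctrl += dt * k_i * (-freq_dev)

# after 50 s of simulated time, frequency is back near nominal and the
# controllable loads have picked up roughly the lost 1 p.u.
print(round(freq_dev, 3), round(load_ctrl, 3))
```

In this scalar model the closed loop is a damped second-order system, so the frequency error decays to zero while the load adjustment converges to the size of the disturbance, which is the supply-demand rebalancing property the abstract describes.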
NASA Astrophysics Data System (ADS)
Steier, A.; Mann, P.
2017-12-01
Gravity slides on salt or shale detachment surfaces linking updip extension with down dip compression have been described from several margins of the Gulf of Mexico (GOM). In a region 250 km offshore from the southwestern coast of Florida, the late Jurassic section near Destin Dome and Desoto Canyon has undergone late Jurassic to Cretaceous gravity sliding and downdip dispersion of rigid blocks along the top of the underlying Louann salt. Yet there has been no previous study of similar structural styles on the slope and deep basin of its late Jurassic conjugate margin located 200 km offshore of the northern margin of the Yucatan Peninsula. This study describes an extensive area of Mesozoic gravity sliding from the northern Yucatan slope using a grid of 2D seismic data covering a 134,000 km2 area of the northern Yucatan margin tied to nine wells. These data allow the northern Yucatan margin to be divided into three slope and basinal provinces: 1) a 225 km length of the northeastern margin consisting of late Jurassic-Cretaceous section that is not underlain by salt, exhibits no gravity sliding features, and has sub-horizontal dips; 2) a 120 km length of the north-central Yucatan margin with gravity slide features characterized by an 80-km-wide updip zone of normal faults occupying the shelf edge and upper slope and a 50-km-wide downdip zone of folds and thrust faults at the base of the slope; the slide area exhibits multiple detached slide blocks composed of late Jurassic sandstones and marine mudstones separated by intervening salt rollers; growth wedges adjacent to listric, normal faults suggest a gradual and long-lived downdip motion of rigid fault blocks throughout much of the late Jurassic and Cretaceous rather than a catastrophic and instantaneous collapse of the shelf edge; the basal, normal detachment fault averages 3° in dip and is overlain by salt that varies from 0-500 ms in time thickness; by the end of the Cretaceous, most gravity sliding and vertical 
salt movement off the north-central Yucatan had ceased and was capped by the post-sliding Cretaceous-Paleocene boundary deposit (KPBD); and 3) a 150 km length of the southwestern margin with the largest thicknesses of salt; smaller salt rollers are less common as large diapirs are frequent and extensively deform the late Mesozoic section as well as overlying younger strata.
Is Ki67 prognostic for aggressive prostate cancer? A multicenter real-world study.
Fantony, Joseph J; Howard, Lauren E; Csizmadi, Ilona; Armstrong, Andrew J; Lark, Amy L; Galet, Colette; Aronson, William J; Freedland, Stephen J
2018-06-15
To test whether Ki67 expression is prognostic for biochemical recurrence (BCR) after radical prostatectomy (RP). Ki67 immunohistochemistry was performed on tissue microarrays constructed from specimens obtained from 464 men undergoing RP at the Durham and West LA Veterans Affairs Hospitals. Hazard ratios (HR) for Ki67 expression and time to BCR were estimated using Cox regression. Ki67 was associated with more recent surgery year (p < 0.001), positive margins (p = 0.001) and extracapsular extension (p < 0.001). In center-stratified analyses, the adjusted HR for Ki67 expression and BCR approached statistical significance for West LA (HR: 1.54; p = 0.06), but not Durham (HR: 1.10; p = 0.74). This multi-institutional 'real-world' study provides limited evidence for the prognostic role of Ki67 in predicting outcome after RP.
cDNA microarray analysis of esophageal cancer: discoveries and prospects.
Shimada, Yutaka; Sato, Fumiaki; Shimizu, Kazuharu; Tsujimoto, Gozoh; Tsukada, Kazuhiro
2009-07-01
Recent progress in molecular biology has revealed many genetic and epigenetic alterations that are involved in the development and progression of esophageal cancer. Microarray analysis has also revealed several genetic networks that are involved in esophageal cancer. However, clinical application of microarray techniques and use of microarray data have not yet occurred. In this review, we focus on the recent developments and problems with microarray analysis of esophageal cancer.
Petersen, David W; Kawasaki, Ernest S
2007-01-01
DNA microarray technology has become a powerful tool in the arsenal of the molecular biologist. Capitalizing on high precision robotics and the wealth of DNA sequences annotated from the genomes of a large number of organisms, the manufacture of microarrays is now possible for the average academic laboratory with the funds and motivation. Microarray production requires attention to both biological and physical resources, including DNA libraries, robotics, and qualified personnel. While the fabrication of microarrays is a very labor-intensive process, production of quality microarrays individually tailored on a project-by-project basis will help researchers shed light on future scientific questions.
Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R
2003-01-01
Background The power of microarray analysis can be realized only if data is systematically archived and linked to biological annotations as well as analysis algorithms. Description The Longhorn Array Database (LAD) is a MIAME compliant microarray database that operates on PostgreSQL and Linux. It is a fully open source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is available at Conclusions Our development of LAD provides a simple, free, open, reliable and proven solution for storage and analysis of two-color microarray data. PMID:12930545
Volcano spacings and lithospheric attenuation in the Eastern Rift of Africa
NASA Technical Reports Server (NTRS)
Mohr, P. A.; Wood, C. A.
1976-01-01
The Eastern Rift of Africa runs the gamut of crustal and lithospheric attenuation from undeformed shield through attenuated rift margin to active neo-oceanic spreading zones. It is therefore peculiarly well suited to an examination of relationships between volcano spacings and crust/lithosphere thickness. Although lithospheric thickness is not well known in Eastern Africa, it appears to have direct expression in the surface spacing of volcanoes for any given tectonic regime. This applies whether the volcanoes are essentially basaltic, silicic, or alkaline-carbonatitic. No evidence is found for control of volcano sites by a pre-existing fracture grid in the crust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Energy Operation Model (EOM) simulates the operation of the electric grid at the zonal scale, including inter-zonal transmission constraints. It generates the production cost, power generation by plant and category, fuel usage, and locational marginal price (LMP), with a flexible way to constrain power production by environmental conditions (e.g., heat waves, drought conditions). Unlike commercial software such as PROMOD IV, where generator capacity and heat rate efficiency can only be adjusted on a monthly basis, EOM calculates capacity impacts and plant efficiencies based on hourly ambient conditions (air temperature and humidity) and cooling water availability for thermal plants. What is missing is a hydro power dispatch.
Zhu, Yuerong; Zhu, Yuelin; Xu, Wei
2008-01-01
Background Though microarray experiments are very popular in life science research, managing and analyzing microarray data are still challenging tasks for many biologists. Most microarray programs require users to have sophisticated knowledge of mathematics, statistics and computer skills for usage. With accumulating microarray data deposited in public databases, easy-to-use programs to re-analyze previously published microarray data are in high demand. Results EzArray is a web-based Affymetrix expression array data management and analysis system for researchers who need to organize microarray data efficiently and get data analyzed instantly. EzArray organizes microarray data into projects that can be analyzed online with predefined or custom procedures. EzArray performs data preprocessing and detection of differentially expressed genes with statistical methods. All analysis procedures are optimized and highly automated so that even novice users with limited pre-knowledge of microarray data analysis can complete initial analysis quickly. Since all input files, analysis parameters, and executed scripts can be downloaded, EzArray provides maximum reproducibility for each analysis. In addition, EzArray integrates with Gene Expression Omnibus (GEO) and allows instantaneous re-analysis of published array data. Conclusion EzArray is a novel Affymetrix expression array data analysis and sharing system. EzArray provides easy-to-use tools for re-analyzing published microarray data and will help both novice and experienced users perform initial analysis of their microarray data from the location of data storage. We believe EzArray will be a useful system for facilities with microarray services and laboratories with multiple members involved in microarray data analysis. EzArray is freely available from . PMID:18218103
A Java-based tool for the design of classification microarrays.
Meng, Da; Broschat, Shira L; Call, Douglas R
2008-08-04
Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays, and mixed-plasmid microarrays in particular, it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including JPEG and TIFF).
Weights generated using stepwise discriminant analysis can be stored for analysis of subsequent experimental data. Additionally, PLASMID can be used to construct virtual microarrays with genomes from public databases, which can then be used to identify an optimal set of probes.
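The probe-redundancy idea can be illustrated under a simplifying assumption: summarize each probe by its presence/absence hybridization pattern across reference strains, and drop any probe whose pattern is already represented, since it adds no discriminating power. PLASMID's actual pipeline uses clustering and stepwise discriminant analysis; this greedy pass is only a conceptual sketch, and the probe names and patterns are invented.

```python
# Greedy redundancy reduction: keep one probe per distinct hybridization
# pattern across the reference strains.
def drop_redundant(probes):
    """probes: dict of probe name -> tuple of presence/absence calls per strain."""
    kept, seen = [], set()
    for name, pattern in probes.items():
        if pattern not in seen:   # a new pattern adds discriminating power
            seen.add(pattern)
            kept.append(name)
    return kept

probes = {"p1": (1, 0, 1), "p2": (1, 0, 1), "p3": (0, 1, 1)}
print(drop_redundant(probes))  # ['p1', 'p3']  (p2 duplicates p1's pattern)
```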
THE ABRF MARG MICROARRAY SURVEY 2005: TAKING THE PULSE ON THE MICROARRAY FIELD
Over the past several years microarray technology has evolved into a critical component of any discovery based program. Since 1999, the Association of Biomolecular Resource Facilities (ABRF) Microarray Research Group (MARG) has conducted biennial surveys designed to generate a pr...
Development of a Digital Microarray with Interferometric Reflectance Imaging
NASA Astrophysics Data System (ADS)
Sevenler, Derin
This dissertation describes a new type of molecular assay for nucleic acids and proteins. We call this technique a digital microarray since it is conceptually similar to conventional fluorescence microarrays, yet it performs enumerative ('digital') counting of the number of captured molecules. Digital microarrays are approximately 10,000-fold more sensitive than fluorescence microarrays, yet maintain all of the strengths of the platform, including low cost and high multiplexing (i.e., many different tests on the same sample simultaneously). Digital microarrays use gold nanorods to label the captured target molecules. Each gold nanorod on the array is individually detected based on its light scattering, with an interferometric microscopy technique called SP-IRIS. Our optimized high-throughput version of SP-IRIS is able to scan a typical array of 500 spots in less than 10 minutes. Digital DNA microarrays may have utility in applications where sequencing is prohibitively expensive or slow. As an example, we describe a digital microarray assay for gene expression markers of bacterial drug resistance.
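The enumerative counting step lends itself to a compact sketch: once the nanorod labels are imaged, each captured molecule appears as an isolated bright blob, so the 'digital' readout reduces to a connected-component count. The toy intensity grid and plain threshold below are illustrative assumptions; actual SP-IRIS detection is interferometric and far more involved.

```python
# Count isolated bright blobs (particle labels) in a small intensity grid
# using a simple threshold and 4-connected flood fill.
def count_particles(img, thresh):
    rows, cols = len(img), len(img[0])
    seen = set()

    def flood(r, c):
        stack = [(r, c)]
        while stack:
            i, j = stack.pop()
            if (i, j) in seen or not (0 <= i < rows and 0 <= j < cols):
                continue
            if img[i][j] < thresh:
                continue
            seen.add((i, j))
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] >= thresh and (r, c) not in seen:
                count += 1       # first pixel of a new blob = one particle
                flood(r, c)
    return count

img = [[0, 9, 0, 0],
       [0, 9, 0, 8],
       [0, 0, 0, 8]]
print(count_particles(img, thresh=5))  # 2
```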
Implementation of mutual information and bayes theorem for classification microarray data
NASA Astrophysics Data System (ADS)
Dwifebri Purbolaksono, Mahendra; Widiastuti, Kurnia C.; Syahrul Mubarok, Mohamad; Adiwijaya; Aminy Ma’ruf, Firda
2018-03-01
Microarray technology is able to read the expression of genes, and analyzing these data is important for deciding which attributes matter more than others. Microarray data can provide cancer information for diagnosing a person's genes. Preparing microarray data is a major problem and takes a long time, because the data contain a large number of insignificant and irrelevant attributes. A method is therefore needed to reduce the dimensionality of microarray data without eliminating the important information in each attribute. This research uses mutual information for dimensionality reduction. The system is built with a machine learning approach, specifically Bayes' theorem, which takes a statistical and probabilistic view. Combining both methods is powerful for microarray data classification. The experimental results show that the system classifies microarray data well, with the highest F1-scores of 91.06% using a Bayesian network and 88.85% using naïve Bayes.
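The combination evaluated above can be sketched end to end: score each feature (probe) by mutual information with the class label, then classify with a Bayes-theorem-based model. Everything below, data included, is an illustrative toy; real microarray features are continuous and would need discretization first, and the paper's Bayesian network is replaced here by a plain discrete naive Bayes.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def naive_bayes_predict(train_X, train_y, sample):
    """Classify `sample` with add-one-smoothed discrete naive Bayes."""
    n = len(train_y)
    best, best_score = None, float("-inf")
    for c in set(train_y):
        rows = [x for x, y in zip(train_X, train_y) if y == c]
        score = log2(len(rows) / n)  # log prior
        for j, v in enumerate(sample):
            match = sum(1 for r in rows if r[j] == v)
            score += log2((match + 1) / (len(rows) + 2))  # smoothed log likelihood
        if score > best_score:
            best, best_score = c, score
    return best

# toy data: feature 0 tracks the class label exactly, feature 1 is noise
X = [(1, 0), (1, 1), (0, 0), (0, 1), (1, 1), (0, 0)]
y = [1, 1, 0, 0, 1, 0]
mi = [mutual_information([r[j] for r in X], y) for j in range(2)]
print(mi[0] > mi[1])                      # True: feature 0 is more informative
print(naive_bayes_predict(X, y, (1, 0)))  # 1
```

In a full pipeline one would keep only the top-scoring features before training the classifier, which is the dimensionality reduction step the abstract attributes to mutual information.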
Zhao, Yuanshun; Zhang, Yonghong; Lin, Dongdong; Li, Kang; Yin, Chengzeng; Liu, Xiuhong; Jin, Boxun; Sun, Libo; Liu, Jinhua; Zhang, Aiying; Li, Ning
2015-10-01
To develop and evaluate a protein microarray assay with horseradish peroxidase (HRP) chemiluminescence for quantification of α-fetoprotein (AFP) in serum from patients with hepatocellular carcinoma (HCC). A protein microarray assay for AFP was developed. Serum was collected from patients with HCC and healthy control subjects. AFP was quantified using protein microarray and enzyme-linked immunosorbent assay (ELISA). Serum AFP concentrations determined via protein microarray were positively correlated (r = 0.973) with those determined via ELISA in patients with HCC (n = 60) and healthy control subjects (n = 30). Protein microarray showed 80% sensitivity and 100% specificity for HCC diagnosis. ELISA had 83.3% sensitivity and 100% specificity. Protein microarray effectively distinguished between patients with HCC and healthy control subjects (area under ROC curve 0.974; 95% CI 0.000, 1.000). Protein microarray is a rapid, simple and low-cost alternative to ELISA for detecting AFP in human serum. © The Author(s) 2015.
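The quoted diagnostic figures reduce to simple count ratios. The sketch below reproduces the 80% sensitivity and 100% specificity under the assumption that 48 of the 60 HCC patients tested positive and all 30 controls tested negative; the counts are back-calculated for illustration, not taken from the paper.

```python
# Sensitivity = true positives / all diseased; specificity = true negatives
# / all healthy.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# 48 of 60 HCC patients called positive; all 30 controls called negative
print(sens_spec(tp=48, fn=12, tn=30, fp=0))  # (0.8, 1.0)
```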
Isehed, Catrine; Holmlund, Anders; Renvert, Stefan; Svenson, Björn; Johansson, Ingegerd; Lundberg, Pernilla
2016-10-01
This randomized clinical trial aimed at comparing radiological, clinical and microbial effects of surgical treatment of peri-implantitis alone or in combination with enamel matrix derivative (EMD). Twenty-six subjects were treated with open flap debridement and decontamination of the implant surfaces with gauze and saline preceding adjunctive EMD or no EMD. Bone level (BL) change was primary outcome and secondary outcomes were changes in pocket depth (PD), plaque, pus, bleeding and the microbiota of the peri-implant biofilm analyzed by the Human Oral Microbe Identification Microarray over a time period of 12 months. In multivariate modelling, increased marginal BL at implant site was significantly associated with EMD, the number of osseous walls in the peri-implant bone defect and a Gram+/aerobic microbial flora, whereas reduced BL was associated with a Gram-/anaerobic microbial flora and presence of bleeding and pus, with a cross-validated predictive capacity (Q(2) ) of 36.4%. Similar, but statistically non-significant, trends were seen for BL, PD, plaque, pus and bleeding in univariate analysis. Adjunctive EMD to surgical treatment of peri-implantitis was associated with prevalence of Gram+/aerobic bacteria during the follow-up period and increased marginal BL 12 months after treatment. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Pre-gastrula expression of zebrafish extraembryonic genes
2010-01-01
Background Many species form extraembryonic tissues during embryogenesis, such as the placenta of humans and other viviparous mammals. Extraembryonic tissues have various roles in protecting, nourishing and patterning embryos. Prior to gastrulation in zebrafish, the yolk syncytial layer - an extraembryonic nuclear syncytium - produces signals that induce mesoderm and endoderm formation. Mesoderm and endoderm precursor cells are situated in the embryonic margin, an external ring of cells along the embryo-yolk interface. The yolk syncytial layer initially forms below the margin, in a domain called the external yolk syncytial layer (E-YSL). Results We hypothesize that key components of the yolk syncytial layer's mesoderm and endoderm inducing activity are expressed as mRNAs in the E-YSL. To identify genes expressed in the E-YSL, we used microarrays to compare the transcription profiles of intact pre-gastrula embryos with pre-gastrula embryonic cells that we had separated from the yolk and yolk syncytial layer. This identified a cohort of genes with enriched expression in intact embryos. Here we describe our whole mount in situ hybridization analysis of sixty-eight of them. This includes ten genes with E-YSL expression (camsap1l1, gata3, znf503, hnf1ba, slc26a1, slc40a1, gata6, gpr137bb, otop1 and cebpa), four genes with expression in the enveloping layer (EVL), a superficial epithelium that protects the embryo (zgc:136817, zgc:152778, slc14a2 and elovl6l), three EVL genes whose expression is transiently confined to the animal pole (elovl6l, zgc:136359 and clica), and six genes with transient maternal expression (mtf1, wu:fj59f04, mospd2, rftn2, arrdc1a and pho). We also assessed the requirement of Nodal signaling for the expression of selected genes in the E-YSL, EVL and margin. Margin expression was Nodal dependent for all genes we tested, including the concentrated margin expression of an EVL gene: zgc:110712. 
All other instances of EVL and E-YSL expression that we tested were Nodal independent. Conclusion We have devised an effective strategy for enriching and identifying genes expressed in the E-YSL of pre-gastrula embryos. To our surprise, maternal genes and genes expressed in the EVL were also enriched by this strategy. A number of these genes are promising candidates for future functional studies on early embryonic patterning. PMID:20423468
Electricity market pricing, risk hedging and modeling
NASA Astrophysics Data System (ADS)
Cheng, Xu
In this dissertation, we investigate the pricing, price risk hedging/arbitrage, and simplified system modeling for a centralized LMP-based electricity market. In an LMP-based market model, the full AC power flow model and the DC power flow model are most widely used to represent the transmission system. We investigate the differences in dispatching results, congestion pattern, and LMPs for the two power flow models. An appropriate LMP decomposition scheme to quantify the marginal costs of congestion and real power losses is critical for the implementation of financial risk hedging markets. However, the traditional LMP decomposition depends heavily on the slack bus selection. In this dissertation we propose a slack-independent scheme to break the LMP down into energy, congestion, and marginal loss components by analyzing the actual marginal cost of each bus at the optimal solution point. The physical and economic meanings of the marginal effect at each bus provide accurate price information for both congestion and losses, and thus the slack-dependency of the traditional scheme is eliminated. With electricity priced at the margin instead of the average value, the market operator typically collects more revenue from power sellers than is paid to power buyers. According to the LMP decomposition results, the revenue surplus is then divided into two parts: congestion charge surplus and marginal loss revenue surplus. We apply the LMP decomposition results to financial tools, such as the financial transmission right (FTR) and loss hedging right (LHR), which have been introduced to hedge against price risks associated with congestion and losses, to construct a full price risk hedging portfolio. The two-settlement market structure and the introduction of financial tools inevitably create market manipulation opportunities.
We investigate several possible market manipulation behaviors by virtual bidding and propose a market monitoring approach to identify and quantify such behavior. Finally, the complexity of the power market and the size of the transmission grid make it difficult for market participants to efficiently analyze long-term market behavior. We propose a simplified power system commercial model by simulating the power transfer distribution factors (PTDFs) of the critical transmission bottlenecks of the original system.
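The decomposition described above can be illustrated numerically. The sketch below uses made-up duals, PTDFs, and loss factors, and a simplified sign convention, rather than the dissertation's slack-independent derivation; it only shows how a bus LMP splits into energy, congestion, and marginal loss components.

```python
import numpy as np

# Illustrative three-bus LMP decomposition: LMP_i = energy + congestion_i + loss_i.
# All numbers, and the simple sign convention, are assumptions for this sketch;
# the slack-independent scheme derives the components from the actual marginal
# cost of each bus at the optimal dispatch, which is not reproduced here.

energy = 30.0                         # system marginal energy price ($/MWh)
mu = np.array([5.0, 2.0])             # shadow prices of two binding line limits
ptdf = np.array([[0.4, -0.1,  0.2],   # sensitivity of each line's flow to a
                 [0.1,  0.3, -0.2]])  # 1 MW injection at each of the 3 buses
lf = np.array([0.02, -0.01, 0.03])    # assumed marginal loss factors per bus

congestion = -ptdf.T @ mu             # congestion component at each bus
loss = -energy * lf                   # marginal loss component at each bus
lmp = energy + congestion + loss      # bus LMPs

for b in range(lmp.size):
    print(f"bus {b}: energy={energy:.2f} congestion={congestion[b]:+.2f} "
          f"loss={loss[b]:+.2f} LMP={lmp[b]:.2f}")
```

Summing the congestion components weighted by bus injections would likewise give the congestion charge surplus mentioned above.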
The Microarray Revolution: Perspectives from Educators
ERIC Educational Resources Information Center
Brewster, Jay L.; Beason, K. Beth; Eckdahl, Todd T.; Evans, Irene M.
2004-01-01
In recent years, microarray analysis has become a key experimental tool, enabling the analysis of genome-wide patterns of gene expression. This review approaches the microarray revolution with a focus upon four topics: 1) the early development of this technology and its application to cancer diagnostics; 2) a primer of microarray research,…
Controlling Electron Backstreaming Phenomena Through the Use of a Transverse Magnetic Field
NASA Technical Reports Server (NTRS)
Foster, John E.; Patterson, Michael J.
2002-01-01
Deep-space mission propulsion requirements can be satisfied by the use of high specific impulse systems such as ion thrusters. For such missions, however, the ion thruster will be required to provide thrust for long periods of time. To meet the long operation time and high-propellant throughput requirements, thruster lifetime must be increased. In general, potential ion thruster failure mechanisms associated with long-duration thrusting can be grouped into four areas: (1) ion optics failure; (2) discharge cathode failure; (3) neutralizer failure; and (4) electron backstreaming caused by accelerator grid aperture enlargement brought on by accelerator grid erosion. The work presented here focuses on electron backstreaming, which occurs when the potential at the center of an accelerator grid aperture is insufficient to prevent the backflow of electrons into the ion thruster. The likelihood of this occurring depends on ion source operation time, plasma density, and grid voltages, as accelerator grid apertures enlarge as a result of erosion. Electrons that enter the gap between the high-voltage screen and accelerator grids are accelerated to energies approximately equal to the beam voltage. This energetic electron beam (typically higher than 1 kV) can damage not only the ion source discharge cathode assembly, but also any of the discharge surfaces upstream of the ion acceleration optics that the electrons happen to impact. Indeed, past backstreaming studies have shown that near the backstreaming limit, which corresponds to the absolute value of the accelerator grid voltage below which electrons can backflow into the thruster, there is a rather sharp rise in temperature at structures such as the cathode keeper electrode. For this reason, operation at accelerator grid voltages near the backstreaming limit is avoided. 
Generally speaking, electron backstreaming is prevented by operating the accelerator grid at a sufficiently negative voltage to ensure a sufficiently negative aperture center potential. This approach can provide the necessary margin assuming an expected aperture enlargement. Operation at very negative accelerator grid voltages, however, enhances ion charge-exchange and direct impingement erosion of the accelerator grid. The focus of the work presented here is the mitigation of electron backstreaming by the use of a magnetic field. The presence of a magnetic field oriented perpendicular to the thruster axis can significantly decrease the magnitude of the backflowing electron current by reducing the electron diffusion coefficient. Negative ion sources utilize this principle to reduce the fraction of electrons in the negative ion beam. The focus of these efforts has been on the attenuation of electron current diffusing from the discharge plasma into the negative ion extraction optics by placing the transverse magnetic field upstream of the extraction electrodes. In contrast, in the case of positive ion sources such as ion thrusters, the approach taken in the work presented here is to apply the transverse field downstream of the ion extraction system so as to prevent electrons from flowing back into the source. It was found in the work presented here that the magnetic field also reduces the absolute value of the electron backstreaming limit voltage. In this respect, the applied transverse magnetic field provides two mechanisms for electron backstreaming mitigation: (1) electron current attenuation and (2) backstreaming limit voltage shift. Such a shift to less negative voltages can lead to reduced accelerator grid erosion rates.
NASA Astrophysics Data System (ADS)
Yuksel, Tugce; Tamayao, Mili-Ann M.; Hendrickson, Chris; Azevedo, Inês M. L.; Michalek, Jeremy J.
2016-04-01
We compare life cycle greenhouse gas (GHG) emissions from several light-duty passenger gasoline and plug-in electric vehicles (PEVs) across US counties by accounting for regional differences due to marginal grid mix, ambient temperature, patterns of vehicle miles traveled (VMT), and driving conditions (city versus highway). We find that PEVs can have larger or smaller carbon footprints than gasoline vehicles, depending on these regional factors and the specific vehicle models being compared. The Nissan Leaf battery electric vehicle has a smaller carbon footprint than the most efficient gasoline vehicle (the Toyota Prius) in the urban counties of California, Texas and Florida, whereas the Prius has a smaller carbon footprint in the Midwest and the South. The Leaf is lower emitting than the Mazda 3 conventional gasoline vehicle in most urban counties, but the Mazda 3 is lower emitting in rural Midwest counties. The Chevrolet Volt plug-in hybrid electric vehicle has a larger carbon footprint than the Prius throughout the continental US, though the Volt has a smaller carbon footprint than the Mazda 3 in many urban counties. Regional grid mix, temperature, driving conditions, and vehicle model all have substantial implications for identifying which technology has the lowest carbon footprint, whereas regional patterns of VMT have a much smaller effect. Given the variation in relative GHG implications, it is unlikely that blunt policy instruments that favor specific technology categories can ensure emission reductions universally.
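The regional trade-off described above comes down to simple per-mile arithmetic. Every figure in the sketch below (grid intensity, vehicle efficiency, charging losses, fuel carbon content) is a round-number assumption chosen for illustration only; none are values from the study, which additionally accounts for temperature, VMT patterns, driving conditions, and vehicle production.

```python
# Back-of-the-envelope per-mile GHG comparison of a battery electric vehicle
# and an efficient gasoline hybrid. All inputs are assumptions for the sketch.

GASOLINE_G_CO2E_PER_GAL = 11_500          # assumed well-to-wheels gCO2e/gallon

def ev_g_per_mile(grid_g_per_kwh, kwh_per_mile, charging_eff=0.88):
    """Grid emissions per mile, inflated by assumed charging losses."""
    return grid_g_per_kwh * kwh_per_mile / charging_eff

def gas_g_per_mile(mpg):
    return GASOLINE_G_CO2E_PER_GAL / mpg

# A cleaner regional marginal mix vs. a coal-heavy one (assumed intensities).
for label, intensity in [("cleaner grid", 300), ("coal-heavy grid", 900)]:
    ev = ev_g_per_mile(intensity, kwh_per_mile=0.30)
    hybrid = gas_g_per_mile(mpg=52)
    winner = "EV" if ev < hybrid else "hybrid"
    print(f"{label}: EV {ev:.0f} g/mi vs hybrid {hybrid:.0f} g/mi -> {winner}")
```

With these assumed inputs the winner flips with the grid mix, which is the qualitative pattern the county-level comparison reports.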
Recent progress in making protein microarray through BioLP
NASA Astrophysics Data System (ADS)
Yang, Rusong; Wei, Lian; Feng, Ying; Li, Xiujian; Zhou, Quan
2017-02-01
Biological laser printing (BioLP) is a promising biomaterial printing technique. It has the advantages of high resolution, high bioactivity, high printing frequency and small transported liquid volume. In this paper, a BioLP device is designed and built, and protein microarrays are printed with it. It is found that both laser intensity and fluid layer thickness influence the resulting microarrays. In addition, two fluid layer coating methods are compared, and the results show that the blade coating method outperforms the well-coating method in BioLP. A protein microarray with 0.76 pL droplets and a "NUDT"-patterned microarray are printed to demonstrate the printing capability of BioLP.
The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...
Flow-pattern Guided Fabrication of High-density Barcode Antibody Microarray
Ramirez, Lisa S.; Wang, Jun
2016-01-01
Antibody microarray as a well-developed technology is currently challenged by a few other established or emerging high-throughput technologies. In this report, we renovate the antibody microarray technology by using a novel approach for manufacturing and by introducing new features. The fabrication of our high-density antibody microarray is accomplished through perpendicularly oriented flow-patterning of single stranded DNAs and subsequent conversion mediated by DNA-antibody conjugates. This protocol outlines the critical steps in flow-patterning DNA, producing and purifying DNA-antibody conjugates, and assessing the quality of the fabricated microarray. The uniformity and sensitivity are comparable with conventional microarrays, while our microarray fabrication does not require the assistance of an array printer and can be performed in most research laboratories. The other major advantage is that the size of our microarray units is 10 times smaller than that of printed arrays, offering the unique capability of analyzing functional proteins from single cells when interfacing with generic microchip designs. This barcode technology can be widely employed in biomarker detection, cell signaling studies, tissue engineering, and a variety of clinical applications. PMID:26780370
Microarray platform for omics analysis
NASA Astrophysics Data System (ADS)
Mecklenburg, Michael; Xie, Bin
2001-09-01
Microarray technology has revolutionized genetic analysis. However, limitations in genome analysis have led to renewed interest in establishing 'omic' strategies. As we enter the post-genomic era, new microarray technologies are needed to address these new classes of 'omic' targets, such as proteins, as well as lipids and carbohydrates. We have developed a microarray platform that combines self-assembling monolayers with the biotin-streptavidin system to provide a robust, versatile immobilization scheme. A hydrophobic film is patterned on the surface, creating an array of tension wells that eliminates evaporation effects, thereby reducing the shear stress to which biomolecules are exposed during immobilization. The streptavidin linker layer makes it possible to adapt and/or develop microarray-based assays using virtually any class of biomolecule, including carbohydrates, peptides, antibodies, and receptors, as well as the more traditional DNA-based arrays. Our microarray technology is designed to furnish seamless compatibility across the various 'omic' platforms by providing a common blueprint for fabricating and analyzing arrays. The prototype microarray uses a microscope slide footprint patterned with 2 by 96 flat wells. Data on the microarray platform will be presented.
Seefeld, Ting H.; Halpern, Aaron R.; Corn, Robert M.
2012-01-01
Protein microarrays are fabricated from double-stranded DNA (dsDNA) microarrays by a one-step, multiplexed enzymatic synthesis in an on-chip microfluidic format and then employed for antibody biosensing measurements with surface plasmon resonance imaging (SPRI). A microarray of dsDNA elements (denoted as generator elements) that encode either a His-tagged green fluorescent protein (GFP) or a His-tagged luciferase protein is utilized to create multiple copies of messenger RNA (mRNA) in a surface RNA polymerase reaction; the mRNA transcripts are then translated into proteins by cell-free protein synthesis in a microfluidic format. The His-tagged proteins diffuse to adjacent Cu(II)-NTA microarray elements (denoted as detector elements) and are specifically adsorbed. The net result is the on-chip, cell-free synthesis of a protein microarray that can be used immediately for SPRI protein biosensing. The dual element format greatly reduces any interference from the nonspecific adsorption of enzyme or proteins. SPRI measurements for the detection of the antibodies anti-GFP and anti-luciferase were used to verify the formation of the protein microarray. This convenient on-chip protein microarray fabrication method can be implemented for multiplexed SPRI biosensing measurements in both clinical and research applications. PMID:22793370
Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm.
Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein
2015-01-01
DNA microarray is a powerful approach to studying simultaneously the expression of thousands of genes in a single experiment. The average fluorescent intensity can be calculated in a microarray experiment, and the calculated intensity values closely track the expression levels of particular genes. However, determining the appropriate position of every spot in microarray images is a major challenge, one that underpins the accurate classification of normal and abnormal (cancer) cells. In this paper, a preprocessing step is first performed to eliminate the noise and artifacts present in microarray images using the nonlinear anisotropic diffusion filtering method. Then, the center coordinates of each spot are located using mathematical morphology operations. Finally, the position of each spot is determined exactly by applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means (SFCM) clustering algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively.
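The clustering step at the heart of such segmentation can be sketched with a plain fuzzy c-means on pixel intensities. This is a deliberate simplification: the paper's SFCM adds a spatial neighborhood term and a Gaussian kernel distance, neither of which is reproduced here, and the toy intensities are invented.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means on 1-D pixel intensities; shows only the core
    alternating update of soft memberships u and cluster centers v."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                    # memberships sum to 1 per pixel
    for _ in range(iters):
        w = u ** m                        # fuzzified memberships
        v = (w @ x) / w.sum(axis=1)       # weighted cluster centers
        d = np.abs(x[None, :] - v[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))       # standard FCM membership update
        u /= u.sum(axis=0)
    return u, v

# Invented intensities: dim background around 10, bright spot around 200.
pixels = np.array([8.0, 9.0, 11.0, 12.0, 195.0, 200.0, 205.0, 210.0])
u, centers = fuzzy_c_means(pixels)
labels = u.argmax(axis=0)                 # hard labels from soft memberships
print("centers:", np.sort(centers))
print("labels:", labels)
```

On real images the same update runs on spot-window pixels after the filtering and morphology steps, separating foreground (spot) from background.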
Zhang, Aiying; Yin, Chengzeng; Wang, Zhenshun; Zhang, Yonghong; Zhao, Yuanshun; Li, Ang; Sun, Huanqin; Lin, Dongdong; Li, Ning
2016-12-01
Objective To develop a simple, effective, time-saving and low-cost fluorescence protein microarray method for detecting serum alpha-fetoprotein (AFP) in patients with hepatocellular carcinoma (HCC). Method Non-contact piezoelectric print techniques were applied to fluorescence protein microarray to reduce the cost of prey antibody. Serum samples from patients with HCC and healthy control subjects were collected and evaluated for the presence of AFP using a novel fluorescence protein microarray. To validate the fluorescence protein microarray, serum samples were tested for AFP using an enzyme-linked immunosorbent assay (ELISA). Results A total of 110 serum samples from patients with HCC (n = 65) and healthy control subjects (n = 45) were analysed. When the AFP cut-off value was set at 20 ng/ml, the fluorescence protein microarray had a sensitivity of 91.67% and a specificity of 93.24% for detecting serum AFP. Serum AFP quantified via fluorescence protein microarray had a similar diagnostic performance compared with ELISA in distinguishing patients with HCC from healthy control subjects (area under receiver operating characteristic curve: 0.906 for fluorescence protein microarray; 0.880 for ELISA). Conclusion A fluorescence protein microarray method was developed for detecting serum AFP in patients with HCC.
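The reported operating characteristics follow from the standard confusion-matrix definitions at the fixed 20 ng/ml cut-off. The counts in the sketch below are hypothetical, chosen only to be near the cohort sizes (65 HCC, 45 controls); they are not the study's actual tallies.

```python
# Sensitivity and specificity from a confusion matrix at a fixed cut-off.
# tp/fn/tn/fp counts below are hypothetical, not the study's data.

def sensitivity(tp, fn):
    return tp / (tp + fn)            # true-positive rate among diseased

def specificity(tn, fp):
    return tn / (tn + fp)            # true-negative rate among healthy

tp, fn = 60, 5                       # hypothetical: 60 of 65 HCC sera above cut-off
tn, fp = 42, 3                       # hypothetical: 42 of 45 control sera below
print(f"sensitivity = {sensitivity(tp, fn):.2%}")
print(f"specificity = {specificity(tn, fp):.2%}")
```

Sweeping the cut-off and plotting sensitivity against (1 - specificity) at each threshold traces the ROC curve whose area the abstract reports.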
García-Hoyos, María; Cortón, Marta; Ávila-Fernández, Almudena; Riveiro-Álvarez, Rosa; Giménez, Ascensión; Hernan, Inma; Carballo, Miguel; Ayuso, Carmen
2012-01-01
Purpose Presently, 22 genes have been described in association with autosomal dominant retinitis pigmentosa (adRP); however, they explain only 50% of all cases, making genetic diagnosis of this disease difficult and costly. The aim of this study was to evaluate a specific genotyping microarray for its application to the molecular diagnosis of adRP in Spanish patients. Methods We analyzed 139 unrelated Spanish families with adRP. Samples were studied using an adRP genotyping microarray. All mutations found were further confirmed with automatic sequencing. Rhodopsin (RHO) sequencing was performed on all samples negative on the genotyping microarray. Results The adRP genotyping microarray detected the disease-associated mutation in 20 of the 139 families with adRP. As in other populations, RHO was found to be the most frequently mutated gene in these families (7.9% of the microarray-genotyped families). The rates of false positives (microarray results not confirmed with sequencing) and of false negatives (mutations in RHO detected with sequencing but not with the genotyping microarray) were established, and high levels of analytical sensitivity (95%) and specificity (100%) were found. Diagnostic accuracy was 15.1%. Conclusions The adRP genotyping microarray is a quick, cost-efficient first step in the molecular diagnosis of Spanish patients with adRP. PMID:22736939
Electric Power Infrastructure Reliability and Security (EPIRS) Research and Development Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rick Meeker; L. Baldwin; Steinar Dale
2010-03-31
Power systems have become increasingly complex and face unprecedented challenges posed by population growth, climate change, national security issues, foreign energy dependence and an aging power infrastructure. Increased demand combined with increased economic and environmental constraints is forcing state, regional and national power grids to expand supply without the large safety and stability margins in generation and transmission capacity that have been the rule in the past. Deregulation, distributed generation, natural and man-made catastrophes and other causes serve to further challenge and complicate management of the electric power grid. To meet the challenges of the 21st century while also maintaining system reliability, the electric power grid must effectively integrate new and advanced technologies both in the actual equipment for energy conversion, transfer and use, and in the command, control, and communication systems by which effective and efficient operation of the system is orchestrated - in essence, the 'smart grid'. This evolution calls for advances in development, integration, analysis, and deployment approaches that ultimately seek to take into account, every step of the way, the dynamic behavior of the system, capturing critical effects due to interdependencies and interaction. This approach is necessary to better mitigate the risk of blackouts and other disruptions and to improve the flexibility and capacity of the grid. 
Building on prior Navy and Department of Energy investments in infrastructure and resources for electric power systems research, testing, modeling, and simulation at the Florida State University (FSU) Center for Advanced Power Systems (CAPS), this project has continued an initiative aimed at assuring reliable and secure grid operation through a more complete understanding and characterization of some of the key technologies that will be important in a modern electric system, while also fulfilling an education and outreach mission to provide future energy workforce talent and support the electric system stakeholder community. Building upon and extending portions of that research effort, this project has been focused in the following areas: (1) Building high-fidelity integrated power and controls hardware-in-the-loop research and development testbed capabilities (Figure 1). (2) Distributed Energy Resources Integration - (a) Testing Requirements and Methods for Fault Current Limiters, (b) Contributions to the Development of IEEE 1547.7, (c) Analysis of a STATCOM Application for Wind Resource Integration, (d) Development of a Grid-Interactive Inverter with Energy Storage Elements, (e) Simulation-Assisted Advancement of Microgrid Understanding and Applications; (3) Availability of High-Fidelity Dynamic Simulation Tools for Grid Disturbance Investigations; (4) HTS Material Characterization - (a) AC Loss Studies on High Temperature Superconductors, (b) Local Identification of Current-Limiting Mechanisms in Coated Conductors; (5) Cryogenic Dielectric Research; and (6) Workshops, education, and outreach.
Honoré, Paul; Granjeaud, Samuel; Tagett, Rebecca; Deraco, Stéphane; Beaudoing, Emmanuel; Rougemont, Jacques; Debono, Stéphane; Hingamp, Pascal
2006-09-20
High throughput gene expression profiling (GEP) is becoming a routine technique in life science laboratories. With experimental designs that repeatedly span thousands of genes and hundreds of samples, relying on a dedicated database infrastructure is no longer an option. GEP technology is a fast moving target, with new approaches constantly broadening the field diversity. This technology heterogeneity, compounded by the informatics complexity of GEP databases, means that software developments have so far focused on mainstream techniques, leaving less typical yet established techniques such as Nylon microarrays at best partially supported. MAF (MicroArray Facility) is the laboratory database system we have developed for managing the design, production and hybridization of spotted microarrays. Although it can support the widely used glass microarrays and oligo-chips, MAF was designed with the specific idiosyncrasies of Nylon based microarrays in mind. Notably single channel radioactive probes, microarray stripping and reuse, vector control hybridizations and spike-in controls are all natively supported by the software suite. MicroArray Facility is MIAME supportive and dynamically provides feedback on missing annotations to help users estimate effective MIAME compliance. Genomic data such as clone identifiers and gene symbols are also directly annotated by MAF software using standard public resources. The MAGE-ML data format is implemented for full data export. Journalized database operations (audit tracking), data anonymization, material traceability and user/project level confidentiality policies are also managed by MAF. MicroArray Facility is a complete data management system for microarray producers and end-users. Particular care has been devoted to adequately model Nylon based microarrays. The MAF system, developed and implemented in both private and academic environments, has proved a robust solution for shared facilities and industry service providers alike.
Valuation of Electric Power System Services and Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kintner-Meyer, Michael C. W.; Homer, Juliet S.; Balducci, Patrick J.
Accurate valuation of existing and new technologies and grid services has been recognized to be important to stimulate investment in grid modernization. Clear, transparent, and accepted methods for estimating the total value (i.e., total benefits minus cost) of grid technologies and services are necessary for decision makers to make informed decisions. This applies to home owners interested in distributed energy technologies, as well as to service providers offering new demand response services, and utility executives evaluating best investment strategies to meet their service obligation. However, current valuation methods lack consistency, methodological rigor, and often the capabilities to identify and quantify multiple benefits of grid assets or new and innovative services. Distributed grid assets often have multiple benefits that are difficult to quantify because of the locational context in which they operate. The value is temporally, operationally, and spatially specific. It varies widely by distribution systems, transmission network topology, and the composition of the generation mix. The Electric Power Research Institute (EPRI) recently established a benefit-cost framework that proposes a process for estimating multiple benefits of distributed energy resources (DERs) and the associated cost. This document proposes an extension of this endeavor that offers a generalizable framework for valuation that quantifies the broad set of values for a wide range of technologies (including energy efficiency options, distributed resources, transmission, and generation) as well as policy options that affect all aspects of the entire generation and delivery system of the electricity infrastructure. The extension includes a comprehensive valuation framework of monetizable and non-monetizable benefits of new technologies and services beyond the traditional reliability objectives. 
The benefits are characterized into the following categories: sustainability, affordability, security, flexibility, and resilience. This document defines the elements of a generic valuation framework and process, as well as system properties and metrics by which value streams can be derived. The valuation process can be applied to determine the value on the margin of incremental system changes. This process is typically performed when estimating the value of a particular project (e.g., the value of a merchant generator, or a distributed photovoltaic (PV) rooftop installation). Alternatively, the framework can be used when a widespread change in grid operation, generation mix, or transmission topology is to be valued. In this case a comprehensive system analysis is required.
Lee, C.-T.A.; Morton, D.M.; Kistler, R.W.; Baird, A.K.
2007-01-01
Mesozoic continental arcs in the North American Cordillera were examined here to establish a baseline model for Phanerozoic continent formation. We combine new trace-element data on lower crustal xenoliths from the Mesozoic Sierra Nevada Batholith with an extensive grid-based geochemical map of the Peninsular Ranges Batholith, the southern equivalent of the Sierras. Collectively, these observations give a three-dimensional view of the crust, which permits the petrogenesis and tectonics of Phanerozoic crust formation to be linked in space and time. Subduction of the Farallon plate beneath North America during the Triassic to early Cretaceous was characterized by trench retreat and slab rollback because old and cold oceanic lithosphere was being subducted. This generated an extensional subduction zone, which created fringing island arcs just off the Paleozoic continental margin. However, as the age of the Farallon plate at the time of subduction decreased, the extensional environment waned, allowing the fringing island arc to accrete onto the continental margin. With continued subduction, a continental arc was born and a progressively more compressional environment developed as the age of subducting slab continued to young. Refinement into a felsic crust occurred after accretion, that is, during the continental arc stage, wherein a thickened crustal and lithospheric column permitted a longer differentiation column. New basaltic arc magmas underplate and intrude the accreted terrane, suture, and former continental margin. Interaction of these basaltic magmas with pre-existing crust and lithospheric mantle created garnet pyroxenitic mafic cumulates by fractional crystallization at depth as well as gabbroic and garnet pyroxenitic restites at shallower levels by melting of pre-existing lower crust. 
The complementary felsic plutons formed by these deep-seated differentiation processes rose into the upper crust, stitching together the accreted terrane, suture and former continental margin. The mafic cumulates and restites, owing to their high densities, eventually foundered into the mantle, leaving behind a more felsic crust. Our grid-based sampling allows us to estimate an unbiased average upper crustal composition for the Peninsular Ranges Batholith. Major and trace-element compositions are very similar to global continental crust averaged over space and time, but in detail, the Peninsular Ranges are slightly lower in compatible to mildly incompatible elements, MgO, Mg#, V, Sc, Co, and Cr. The compositional similarities suggest a strong arc component in global continental crust, but the slight discrepancies suggest that additional crust formation processes are also important in continent formation as a whole. Finally, the delaminated Sierran garnet pyroxenites have some of the lowest U/Pb ratios ever measured for silicate rocks. Such material, if recycled and stored in the deep mantle, would generate a reservoir with very unradiogenic Pb, providing one solution to the global Pb isotope paradox. © 2007 Elsevier B.V. All rights reserved.
Microarrays in brain research: the good, the bad and the ugly.
Mirnics, K
2001-06-01
Making sense of microarray data is a complex process, in which the interpretation of findings will depend on the overall experimental design and judgement of the investigator performing the analysis. As a result, differences in tissue harvesting, microarray types, sample labelling and data analysis procedures make post hoc sharing of microarray data a great challenge. To ensure rapid and meaningful data exchange, we need to create some order out of the existing chaos. In these ground-breaking microarray standardization and data sharing efforts, NIH agencies should take a leading role.
Ranjbar, Reza; Behzadi, Payam; Najafi, Ali; Roudi, Raheleh
2017-01-01
A rapid, accurate, flexible and reliable diagnostic method may significantly decrease the costs of diagnosis and treatment. Designing an appropriate microarray chip reduces noise and probable biases in the final result. The aim of this study was to design and construct a DNA microarray chip for the rapid detection and identification of 10 important bacterial agents. In the present survey, 10 unique genomic regions relating to 10 pathogenic bacterial agents including Escherichia coli (E. coli), Shigella boydii, Sh. dysenteriae, Sh. flexneri, Sh. sonnei, Salmonella typhi, S. typhimurium, Brucella sp., Legionella pneumophila, and Vibrio cholerae were selected for designing specific long oligo microarray probes. For this reason, the in-silico operations, including utilization of the NCBI RefSeq database, the PanSeq and Gview servers, AlleleID 7.7 and Oligo Analyzer 3.1, were performed. On the other hand, the in-vitro part of the study comprised stages of robotic microarray chip probe spotting, bacterial DNA extraction, DNA labeling, hybridization and microarray chip scanning. In the wet-lab section, different tools and apparatus such as Nexterion® Slide E, Qarray mini spotter, NimbleGen kit, TrayMix TM S4, and Innoscan 710 were used. A DNA microarray chip including 10 long oligo microarray probes was designed and constructed for detection and identification of 10 pathogenic bacteria. The DNA microarray chip was capable of identifying all 10 tested bacterial agents simultaneously. The presence of a professional bioinformatician as a probe designer is needed to design appropriate multifunctional microarray probes and increase the accuracy of the outcomes.
Richard, Arianne C; Lyons, Paul A; Peters, James E; Biasci, Daniele; Flint, Shaun M; Lee, James C; McKinney, Eoin F; Siegel, Richard M; Smith, Kenneth G C
2014-08-04
Although numerous investigations have compared gene expression microarray platforms, preprocessing methods and batch correction algorithms using constructed spike-in or dilution datasets, there remains a paucity of studies examining the properties of microarray data using diverse biological samples. Most microarray experiments seek to identify subtle differences between samples with variable background noise, a scenario poorly represented by constructed datasets. Thus, microarray users lack important information regarding the complexities introduced in real-world experimental settings. The recent development of a multiplexed, digital technology for nucleic acid measurement enables counting of individual RNA molecules without amplification and, for the first time, permits such a study. Using a set of human leukocyte subset RNA samples, we compared previously acquired microarray expression values with RNA molecule counts determined by the nCounter Analysis System (NanoString Technologies) in selected genes. We found that gene measurements across samples correlated well between the two platforms, particularly for high-variance genes, while genes deemed unexpressed by the nCounter generally had both low expression and low variance on the microarray. Confirming previous findings from spike-in and dilution datasets, this "gold-standard" comparison demonstrated signal compression that varied dramatically by expression level and, to a lesser extent, by dataset. Most importantly, examination of three different cell types revealed that noise levels differed across tissues. Microarray measurements generally correlate with relative RNA molecule counts within optimal ranges but suffer from expression-dependent accuracy bias and precision that varies across datasets. We urge microarray users to consider expression-level effects in signal interpretation and to evaluate noise properties in each dataset independently.
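The per-gene, cross-sample correlation between the two platforms described above can be sketched as follows. The sample values and the log2(count + 1) transform are illustrative assumptions, not data from the study:

```python
import math

def pearson(x, y):
    # plain Pearson correlation coefficient for two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical per-sample values for one gene on both platforms:
# log2 microarray intensities vs log2-transformed nCounter molecule counts.
array_log2 = [5.1, 7.8, 6.2, 9.0, 8.4]
counts = [20, 180, 45, 700, 400]
count_log2 = [math.log2(c + 1) for c in counts]

r = pearson(array_log2, count_log2)
print(round(r, 3))
```

For a gene whose expression genuinely varies across samples, as in this toy series, the two platforms track each other closely, mirroring the high-variance behaviour reported above.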
Gong, Wei; He, Kun; Covington, Mike; Dinesh-Kumar, S. P.; Snyder, Michael; Harmer, Stacey L.; Zhu, Yu-Xian; Deng, Xing Wang
2009-01-01
We used our collection of Arabidopsis transcription factor (TF) ORFeome clones to construct protein microarrays containing as many as 802 TF proteins. These protein microarrays were used for both protein-DNA and protein-protein interaction analyses. For protein-DNA interaction studies, we examined AP2/ERF family TFs and their cognate cis-elements. By careful comparison of the DNA-binding specificity of 13 TFs on the protein microarray with previous non-microarray data, we showed that protein microarrays provide an efficient and high throughput tool for genome-wide analysis of TF-DNA interactions. This microarray protein-DNA interaction analysis allowed us to derive a comprehensive view of DNA-binding profiles of AP2/ERF family proteins in Arabidopsis. It also revealed four TFs that bound the EE (evening element) and had the expected phased gene expression under clock-regulation, thus providing a basis for further functional analysis of their roles in clock regulation of gene expression. We also developed procedures for detecting protein interactions using this TF protein microarray and discovered four novel partners that interact with HY5, which can be validated by yeast two-hybrid assays. Thus, plant TF protein microarrays offer an attractive high-throughput alternative to traditional techniques for TF functional characterization on a global scale. PMID:19802365
Zhao, Zhengshan; Peytavi, Régis; Diaz-Quijada, Gerardo A.; Picard, Francois J.; Huletsky, Ann; Leblanc, Éric; Frenette, Johanne; Boivin, Guy; Veres, Teodor; Dumoulin, Michel M.; Bergeron, Michel G.
2008-01-01
Fabrication of microarray devices using traditional glass slides is not easily adaptable to integration into microfluidic systems. There is thus a need for the development of polymeric materials showing a high hybridization signal-to-background ratio, enabling sensitive detection of microbial pathogens. We have developed such plastic supports suitable for highly sensitive DNA microarray hybridizations. The proof of concept of this microarray technology was done through the detection of four human respiratory viruses that were amplified and labeled with a fluorescent dye via a sensitive reverse transcriptase PCR (RT-PCR) assay. The performance of the microarray hybridization with plastic supports made of PMMA [poly(methylmethacrylate)]-VSUVT or Zeonor 1060R was compared to that with high-quality glass slide microarrays by using both passive and microfluidic hybridization systems. Specific hybridization signal-to-background ratios comparable to that obtained with high-quality commercial glass slides were achieved with both polymeric substrates. Microarray hybridizations demonstrated an analytical sensitivity equivalent to approximately 100 viral genome copies per RT-PCR, which is at least 100-fold higher than the sensitivities of previously reported DNA hybridizations on plastic supports. Testing of these plastic polymers using a microfluidic microarray hybridization platform also showed results that were comparable to those with glass supports. In conclusion, PMMA-VSUVT and Zeonor 1060R are both suitable for highly sensitive microarray hybridizations. PMID:18784318
Development and application of a microarray meter tool to optimize microarray experiments
Rouse, Richard JD; Field, Katrine; Lapira, Jennifer; Lee, Allen; Wick, Ivan; Eckhardt, Colleen; Bhasker, C Ramana; Soverchia, Laura; Hardiman, Gary
2008-01-01
Background Successful microarray experimentation requires a complex interplay between the slide chemistry, the printing pins, the nucleic acid probes and targets, and the hybridization milieu. Optimization of these parameters and a careful evaluation of emerging slide chemistries are a prerequisite to any large scale array fabrication effort. We have developed a 'microarray meter' tool which assesses the inherent variations associated with microarray measurement prior to embarking on large scale projects. Findings The microarray meter consists of nucleic acid targets (reference and dynamic range control) and probe components. Different plate designs containing identical probe material were formulated to accommodate different robotic and pin designs. We examined the variability in probe quality and quantity (as judged by the amount of DNA printed and remaining post-hybridization) using three robots equipped with capillary printing pins. Discussion The generation of microarray data with minimal variation requires consistent quality control of the (DNA microarray) manufacturing and experimental processes. Spot reproducibility is a measure primarily of the variations associated with printing. The microarray meter assesses array quality by measuring the DNA content for every feature. It provides a post-hybridization analysis of array quality by scoring probe performance using three metrics, a) a measure of variability in the signal intensities, b) a measure of the signal dynamic range and c) a measure of variability of the spot morphologies. PMID:18710498
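A minimal sketch of the three post-hybridization metrics (a)-(c) named above, assuming simple definitions: coefficient of variation for (a) and (c), and a log2 max/min ratio for (b). All measurement values are invented:

```python
import math

def cv(values):
    # coefficient of variation: relative variability of a set of measurements
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return sd / m

# Hypothetical post-hybridization measurements for one probe across replicate spots.
signals = [12000, 11500, 12800, 11900]   # fluorescence intensities
diameters = [102, 98, 105, 100]          # spot diameters in micrometres

signal_variability = cv(signals)                        # metric (a)
dynamic_range = math.log2(max(signals) / min(signals))  # metric (b), in log2 units
morphology_variability = cv(diameters)                  # metric (c)

print(round(signal_variability, 3),
      round(dynamic_range, 3),
      round(morphology_variability, 3))
```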
Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco
2006-01-01
We developed MicroGen, a multi-database Web based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storing according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all multidisciplinary actors involved in spotted microarray experiments. PMID:17238488
2006-04-27
A polysaccharide microarray platform was prepared by immobilizing Burkholderia pseudomallei and Burkholderia mallei polysaccharides. This... polysaccharide array was tested with success for detecting B. pseudomallei and B. mallei serum (human and animal) antibodies. The advantages of this microarray... Keywords: polysaccharide microarrays; Burkholderia pseudomallei; Burkholderia mallei; glanders; melioidosis.
Microarray-integrated optoelectrofluidic immunoassay system
Han, Dongsik; Park, Je-Kyun
2016-01-01
A microarray-based analytical platform has been utilized as a powerful tool in biological assay fields. However, an analyte depletion problem due to the slow mass transport based on molecular diffusion causes low reaction efficiency, resulting in a limitation for practical applications. This paper presents a novel method to improve the efficiency of microarray-based immunoassay via an optically induced electrokinetic phenomenon by integrating an optoelectrofluidic device with a conventional glass slide-based microarray format. A sample droplet was loaded between the microarray slide and the optoelectrofluidic device on which a photoconductive layer was deposited. Under the application of an AC voltage, optically induced AC electroosmotic flows caused by a microarray-patterned light actively enhanced the mass transport of target molecules at the multiple assay spots of the microarray simultaneously, which reduced tedious reaction time from more than 30 min to 10 min. Based on this enhancing effect, a heterogeneous immunoassay with a tiny volume of sample (5 μl) was successfully performed in the microarray-integrated optoelectrofluidic system using immunoglobulin G (IgG) and anti-IgG, resulting in improved efficiency compared to the static environment. Furthermore, the application of multiplex assays was also demonstrated by multiple protein detection. PMID:27190571
Advances in cell-free protein array methods.
Yu, Xiaobo; Petritis, Brianne; Duan, Hu; Xu, Danke; LaBaer, Joshua
2018-01-01
Cell-free protein microarrays represent a special form of protein microarray that displays proteins made fresh at the time of the experiment, avoiding storage and denaturation. They have been used increasingly in basic and translational research over the past decade to study protein-protein interactions, the pathogen-host relationship, post-translational modifications, and antibody biomarkers of different human diseases. Their role in the first blood-based diagnostic test for early-stage breast cancer highlights their value in managing human health. Cell-free protein microarrays will continue to evolve to become widespread tools for research and clinical management. Areas covered: We review the advantages and disadvantages of different cell-free protein arrays, with an emphasis on the methods that have been studied in the last five years. We also discuss the applications of each microarray method. Expert commentary: Given the growing roles and impact of cell-free protein microarrays in research and medicine, we discuss: 1) the current technical and practical limitations of cell-free protein microarrays; 2) the biomarker discovery and verification pipeline using protein microarrays; and 3) how cell-free protein microarrays will advance over the next five years, both in their technology and applications.
Kračun, Stjepan Krešimir; Fangel, Jonatan Ulrik; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Vidal-Melgosa, Silvia; Willats, William George Tycho
2017-01-01
Cell walls are an important feature of plant cells and a major component of the plant glycome. They have both structural and physiological functions and are critical for plant growth and development. The diversity and complexity of these structures demand advanced high-throughput techniques to answer questions about their structure, functions and roles in both fundamental and applied scientific fields. Microarray technology provides both the high-throughput and the feasibility aspects required to meet that demand. In this chapter, some of the most recent microarray-based techniques relating to plant cell walls are described together with an overview of related contemporary techniques applied to carbohydrate microarrays and their general potential in glycoscience. A detailed experimental procedure for high-throughput mapping of plant cell wall glycans using the comprehensive microarray polymer profiling (CoMPP) technique is included in the chapter and provides a good example of both the robust and high-throughput nature of microarrays as well as their applicability to plant glycomics.
Sedimentary architecture of a Plio-Pleistocene proto-back-arc basin: Wanganui Basin, New Zealand
NASA Astrophysics Data System (ADS)
Proust, Jean-Noël; Lamarche, Geoffroy; Nodder, Scott; Kamp, Peter J. J.
2005-11-01
The sedimentary architecture of active margin basins, including back-arc basins, is known only from a few end-members that barely illustrate the natural diversity of such basins. Documenting more of these basin types is the key to refining our understanding of the tectonic evolution of continental margins. This paper documents the sedimentary architecture of an incipient back-arc basin 200 km behind the active Hikurangi subduction margin, North Island, New Zealand. The Wanganui Basin (WB) is a rapidly subsiding, Plio-Pleistocene sedimentary basin located at the southern termination of the extensional back-arc basin of the active Central Volcanic Region (TVZ). The WB is asymmetric with a steep, thrust-faulted, outer (arc-ward) margin and a gentle inner (craton-ward) margin. It contains a 4-km-thick succession of Plio-Pleistocene sediments, mostly lying offshore, composed of shelf platform sediments. It lacks the late molasse-like deposits derived from erosion of a subaerial volcanic arc and basement observed in classical back-arc basins. Detailed seismic stratigraphic interpretations from an extensive offshore seismic reflection data grid show that the sediment fill comprises two basin-scale mega-sequences: (1) a Pliocene (3.8 to 1.35 Ma), sub-parallel, regressive "pre-growth" sequence that overtops the uplifted craton-ward margin above the reverse Taranaki Fault, and (2) a Pleistocene (1.35 Ma to present), divergent, transgressive "syn-growth" sequence that onlaps: (i) the craton-ward high to the west, and (ii) uplifted basement blocks associated with the high-angle reverse faults of the arc-ward margin to the east. Along strike, the sediments offlap first progressively southward (mega-sequence 1) and then southeastward (mega-sequence 2), with sediment transport funnelled between the craton- and arc-ward highs, towards the Hikurangi Trough through the Cook Strait.
The change in offlap direction corresponds to the onset of arc-ward thrust faulting and the rise of the Axial Ranges at ca 1.75 Ma, resulting in 5100-5700 m of differential subsidence across the fault system. Sedimentation has propagated south- to southeast-ward over the last 4 Myrs at the tip of successive back-arc graben, volcanic arcs and the associated thermally uplifted parts of the North Island, following the southward migration of the Hikurangi subduction margin. Subsidence occurred by mantle flow-driven flexure, the result of active down-drag of the lithosphere by locking of the Hikurangi subduction interface and sediment loading. The WB is considered to be a proto-back-arc basin that represents the intermediate stage of evolution of an epicratonic shelf platform, impacted by active margin processes.
Interim report on updated microarray probes for the LLNL Burkholderia pseudomallei SNP array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, S; Jaing, C
2012-03-27
The overall goal of this project is to forensically characterize 100 unknown Burkholderia isolates in the US-Australia collaboration. We will identify genome-wide single nucleotide polymorphisms (SNPs) from B. pseudomallei and near neighbor species including B. mallei, B. thailandensis and B. oklahomensis. We will design microarray probes to detect these SNP markers and analyze 100 Burkholderia genomic DNAs extracted from environmental, clinical and near neighbor isolates from Australian collaborators on the Burkholderia SNP microarray. We will analyze the microarray genotyping results to characterize the genetic diversity of these new isolates and triage the samples for whole genome sequencing. In this interim report, we describe the SNP analysis and the microarray probe design for the Burkholderia SNP microarray.
2010-01-01
Background The development of DNA microarrays has facilitated the generation of hundreds of thousands of transcriptomic datasets. The use of a common reference microarray design allows existing transcriptomic data to be readily compared and re-analysed in the light of new data, and the combination of this design with large datasets is ideal for 'systems'-level analyses. One issue is that these datasets are typically collected over many years and may be heterogeneous in nature, containing different microarray file formats and gene array layouts, dye-swaps, and showing varying scales of log2-ratios of expression between microarrays. Excellent software exists for the normalisation and analysis of microarray data, but many data have yet to be analysed as existing methods struggle with heterogeneous datasets; options include normalising microarrays on an individual or experimental group basis. Our solution was to develop BABAR (Batch Anti-Banana Algorithm in R), an algorithm and software package which uses cyclic loess to normalise across the complete dataset. We have already used BABAR to analyse the function of Salmonella genes involved in the process of infection of mammalian cells. Results The only input required by BABAR is unprocessed GenePix or BlueFuse microarray data files. BABAR provides a combination of 'within' and 'between' microarray normalisation steps and diagnostic boxplots. When applied to a real heterogeneous dataset, BABAR normalised the dataset to produce a comparable scaling between the microarrays, with the microarray data in excellent agreement with RT-PCR analysis. When applied to a real non-heterogeneous dataset and a simulated dataset, BABAR's performance in identifying differentially expressed genes showed some benefits over standard techniques. Conclusions BABAR is an easy-to-use software tool, simplifying the simultaneous normalisation of heterogeneous two-colour common reference design cDNA microarray-based transcriptomic datasets.
We show BABAR transforms real and simulated datasets to allow for the correct interpretation of these data, and is the ideal tool to facilitate the identification of differentially expressed genes or network inference analysis from transcriptomic datasets. PMID:20128918
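BABAR itself is an R package built on cyclic loess; the pairwise, iterative structure of that kind of normalisation can be sketched as below. As a deliberate simplification, a constant median offset stands in for the intensity-dependent loess curve, and the log2-ratio data are invented:

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def cyclic_normalise(arrays, cycles=3):
    # One "cycle" visits every pair of arrays; each pair is nudged toward a
    # common centre, so repeated cycles pull the whole dataset together.
    arrays = [list(a) for a in arrays]
    for _ in range(cycles):
        for i in range(len(arrays)):
            for j in range(i + 1, len(arrays)):
                # M = gene-by-gene difference between the two arrays
                m = [a - b for a, b in zip(arrays[i], arrays[j])]
                offset = median(m) / 2
                arrays[i] = [a - offset for a in arrays[i]]
                arrays[j] = [b + offset for b in arrays[j]]
    return arrays

# Three hypothetical microarrays (log2-ratios) with different overall scaling.
raw = [[0.1, 1.2, -0.4, 0.6],
       [0.9, 2.0, 0.4, 1.4],
       [-0.3, 0.8, -0.8, 0.2]]
norm = cyclic_normalise(raw)
print([round(median(a), 2) for a in norm])  # medians converge after a few cycles
```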
A genome-wide 20 K citrus microarray for gene expression analysis
Martinez-Godoy, M Angeles; Mauri, Nuria; Juarez, Jose; Marques, M Carmen; Santiago, Julia; Forment, Javier; Gadea, Jose
2008-01-01
Background Understanding of genetic elements that contribute to key aspects of citrus biology will impact future improvements in this economically important crop. Global gene expression analysis demands microarray platforms with a high genome coverage. In recent years, genome-wide EST collections have been generated in citrus, opening the possibility to create new tools for functional genomics in this crop plant. Results We have designed and constructed a publicly available genome-wide cDNA microarray that includes 21,081 putative unigenes of citrus. As a functional companion to the microarray, a web-browsable database [1] was created and populated with information about the unigenes represented in the microarray, including cDNA libraries, isolated clones, raw and processed nucleotide and protein sequences, and results of all the structural and functional annotation of the unigenes, such as general description, BLAST hits, putative Arabidopsis orthologs, microsatellites, putative SNPs, GO classification and PFAM domains. We have performed a Gene Ontology comparison with the full set of Arabidopsis proteins to estimate the genome coverage of the microarray. We have also performed microarray hybridizations to check its usability. Conclusion This new cDNA microarray replaces the first 7K microarray generated two years ago and allows gene expression analysis at a more global scale. We have followed a rational design to minimize cross-hybridization while maintaining its utility for different citrus species. Furthermore, we also provide access to a website with full structural and functional annotation of the unigenes represented in the microarray, along with the ability to use this site to directly perform gene expression analysis using standard tools at different publicly available servers.
Furthermore, we show how this microarray offers a good representation of the citrus genome and present the usefulness of this genomic tool for global studies in citrus by using it to catalogue genes expressed in citrus globular embryos. PMID:18598343
An evaluation of two-channel ChIP-on-chip and DNA methylation microarray normalization strategies
2012-01-01
Background The combination of chromatin immunoprecipitation with two-channel microarray technology enables genome-wide mapping of binding sites of DNA-interacting proteins (ChIP-on-chip) or sites with methylated CpG di-nucleotides (DNA methylation microarray). These powerful tools are the gateway to understanding gene transcription regulation. Since the goals of such studies, the sample preparation procedures, the microarray content and study design are all different from transcriptomics microarrays, the data pre-processing strategies traditionally applied to transcriptomics microarrays may not be appropriate. Particularly, the main challenge of the normalization of "regulation microarrays" is (i) to make the data of individual microarrays quantitatively comparable and (ii) to keep the signals of the enriched probes, representing DNA sequences from the precipitate, as distinguishable as possible from the signals of the un-enriched probes, representing DNA sequences largely absent from the precipitate. Results We compare several widely used normalization approaches (VSN, LOWESS, quantile, T-quantile, Tukey's biweight scaling, Peng's method) applied to a selection of regulation microarray datasets, ranging from DNA methylation to transcription factor binding and histone modification studies. Through comparison of the data distributions of control probes and gene promoter probes before and after normalization, and assessment of the power to identify known enriched genomic regions after normalization, we demonstrate that there are clear differences in performance between normalization procedures. Conclusion T-quantile normalization applied separately on the channels and Tukey's biweight scaling outperform other methods in terms of the conservation of enriched and un-enriched signal separation, as well as in identification of genomic regions known to be enriched. T-quantile normalization is preferable as it additionally improves comparability between microarrays. 
In contrast, popular normalization approaches like quantile, LOWESS, Peng's method and VSN normalization alter the data distributions of regulation microarrays to such an extent that using these approaches will impact the reliability of the downstream analysis substantially. PMID:22276688
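A minimal sketch of plain quantile normalisation, one of the compared methods: every array is forced onto a shared reference distribution built from the rank-wise means. T-quantile, the best performer above, applies the same rank-to-reference mapping separately per channel and treatment group. The toy arrays are invented:

```python
def quantile_normalise(arrays):
    n = len(arrays[0])
    # rank probes within each array by signal
    order = [sorted(range(n), key=lambda i: a[i]) for a in arrays]
    # reference distribution: mean signal at each rank across arrays
    ref = [sum(a[o[r]] for a, o in zip(arrays, order)) / len(arrays)
           for r in range(n)]
    # map every probe to the reference value of its rank
    out = [[0.0] * n for _ in arrays]
    for a_idx, o in enumerate(order):
        for rank, probe in enumerate(o):
            out[a_idx][probe] = ref[rank]
    return out

# Two toy arrays with the same probe ordering but different scale.
channel = [[1.0, 3.0, 2.0],
           [2.0, 6.0, 4.0]]
print(quantile_normalise(channel))
```

After normalisation both arrays share an identical distribution, which is exactly why the method can erase the enriched vs un-enriched separation that regulation microarrays rely on.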
Rai, Muhammad Farooq; Tycksen, Eric D; Sandell, Linda J; Brophy, Robert H
2018-01-01
Microarrays and RNA-seq are at the forefront of high-throughput transcriptome analyses. Since these methodologies are based on different principles, there are concerns about the concordance of data between the two techniques. The concordance of RNA-seq and microarrays for genome-wide analysis of differential gene expression has not been rigorously assessed in clinically derived ligament tissues. To demonstrate the concordance between RNA-seq and microarrays and to assess potential benefits of RNA-seq over microarrays, we assessed differences in transcript expression in anterior cruciate ligament (ACL) tissues based on time-from-injury. ACL remnants were collected from patients with an ACL tear at the time of ACL reconstruction. RNA prepared from torn ACL remnants was subjected to Agilent microarrays (N = 24) and RNA-seq (N = 8). The correlation of biological replicates in RNA-seq and microarray data was similar (0.98 vs. 0.97), demonstrating that each platform has high internal reproducibility. Correlations between the RNA-seq data and the individual microarrays were low, but correlations between the RNA-seq values and the geometric mean of the microarray values were moderate. The cross-platform concordance for differentially expressed transcripts or enriched pathways was linearly correlated (r = 0.64). RNA-seq was superior in detecting low-abundance transcripts and differentiating biologically critical isoforms. Additional independent validation of transcript expression was undertaken using microfluidic PCR for selected genes. PCR data showed 100% concordance (in expression pattern) with RNA-seq and microarray data. These findings demonstrate that RNA-seq has advantages over microarrays for transcriptome profiling of ligament tissues when available and affordable. Furthermore, these findings are likely transferable to other musculoskeletal tissues where tissue collection is challenging and cells are in low abundance. © 2017 Orthopaedic Research Society.
Published by Wiley Periodicals, Inc. J Orthop Res 36:484-497, 2018.
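One simple way to quantify cross-platform concordance of differential expression calls is to compare the sets of genes each platform flags. The gene symbols, fold changes, and Jaccard-overlap metric below are all invented for illustration (the study itself reports a linear correlation of r = 0.64); note the uniformly smaller array fold changes mimic the signal compression microarrays are known for:

```python
def de_calls(lfc, cutoff=1.0):
    # call a gene differentially expressed if |log2 fold change| >= cutoff
    return {g for g, v in lfc.items() if abs(v) >= cutoff}

# Hypothetical per-gene log2 fold changes from each platform.
rnaseq_lfc = {"COL1A1": 2.3, "MMP13": -1.8, "ACAN": 0.4, "IL6": 1.1, "SOX9": -0.2}
array_lfc  = {"COL1A1": 1.6, "MMP13": -1.2, "ACAN": 0.9, "IL6": 0.7, "SOX9": -0.1}

seq_de, arr_de = de_calls(rnaseq_lfc), de_calls(array_lfc)
jaccard = len(seq_de & arr_de) / len(seq_de | arr_de)
print(sorted(seq_de & arr_de), round(jaccard, 2))
```

With compressed array fold changes, borderline genes (IL6 here) fall below the cutoff on one platform only, which is one mechanism behind imperfect concordance.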
Los Angeles and San Diego Margin High-Resolution Multibeam Bathymetry and Backscatter Data
Dartnell, Peter; Gardner, James V.; Mayer, Larry A.; Hughes-Clarke, John E.
2004-01-01
Summary -- The U.S. Geological Survey, in cooperation with the University of New Hampshire and the University of New Brunswick, mapped the nearshore regions off Los Angeles and San Diego, California using multibeam echosounders. Multibeam bathymetry and co-registered, corrected acoustic backscatter were collected in water depths ranging from about 3 to 900 m offshore Los Angeles and in water depths ranging from about 17 to 1230 m offshore San Diego. Continuous, 16-m spatial resolution, GIS-ready format data of the entire Los Angeles Margin and San Diego Margin are available online as separate USGS Open-File Reports. For ongoing research, the USGS has processed sub-regions within these datasets at finer resolutions. The resolution of each sub-region was determined by the density of soundings within the region. This Open-File Report contains the finer resolution multibeam bathymetry and acoustic backscatter data that the USGS, Western Region, Coastal and Marine Geology Team has processed into GIS-ready formats as of April 2004. The data are available in ArcInfo GRID and XYZ formats. See the Los Angeles or San Diego maps for the sub-region locations. These datasets in their present form were not originally intended for publication. The bathymetry and backscatter have data-collection and processing artifacts. These data are being made public to fulfill a Freedom of Information Act request. Care must be taken not to confuse artifacts with real seafloor morphology and acoustic backscatter.
Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-05-01
In this paper, we present a graph-based concurrent brain tumor segmentation and atlas-to-diseased-patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. Copyright © 2014 Elsevier B.V. All rights reserved.
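The coupling idea, relaxing the registration data term where the tumor classification score is high, can be illustrated with a toy single-node energy. The full pairwise MRF optimization is omitted, and all costs, labels, and the weighting scheme below are invented for illustration:

```python
def node_energy(label_cost, tumor_score, disp, neighbour_disp, lam=0.5):
    # data term: image matching cost, down-weighted inside likely tumor
    data = (1.0 - tumor_score) * label_cost[disp]
    # smoothness term: penalise deviating from the neighbouring node's label
    smooth = lam * sum(abs(a - b) for a, b in zip(disp, neighbour_disp))
    return data + smooth

# candidate displacement labels with invented matching costs for one grid node
labels = {(0, 0): 0.9, (1, 0): 0.2, (0, 1): 0.6}
neighbour = (0, 0)  # displacement already chosen at an adjacent node

def argmin_disp(score):
    return min(labels, key=lambda d: node_energy(labels, score, d, neighbour))

# healthy tissue (score 0) trusts the image match; likely tumor (score 0.9)
# follows its neighbour, so the atlas deforms smoothly across pathology
print(argmin_disp(0.0), argmin_disp(0.9))
```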
Yuksel, Tugce; Michalek, Jeremy J
2015-03-17
We characterize the effect of regional temperature differences on battery electric vehicle (BEV) efficiency, range, and use-phase power plant CO2 emissions in the U.S. The efficiency of a BEV varies with ambient temperature due to battery efficiency and cabin climate control. We find that annual energy consumption of BEVs can increase by an average of 15% in the Upper Midwest or in the Southwest compared to the Pacific Coast due to temperature differences. Greenhouse gas (GHG) emissions from BEVs vary primarily with marginal regional grid mix, which has three times the GHG intensity in the Upper Midwest as on the Pacific Coast. However, even within a grid region, BEV emissions vary by up to 22% due to spatial and temporal ambient temperature variation and its implications for vehicle efficiency and charging duration and timing. Cold climate regions also encounter days with substantial reduction in EV range: the average range of a Nissan Leaf on the coldest day of the year drops from 70 miles on the Pacific Coast to less than 45 miles in the Upper Midwest. These regional differences are large enough to affect adoption patterns and energy and environmental implications of BEVs relative to alternatives.
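The accounting behind these regional comparisons reduces to simple arithmetic: emissions scale with a temperature-driven consumption penalty times the regional grid's GHG intensity. The consumption, mileage, and grid-intensity numbers below are illustrative, chosen only to echo the reported 15% consumption penalty and threefold grid-intensity gap, not the study's actual values:

```python
# Invented baseline figures for a generic BEV.
BASE_KWH_PER_MILE = 0.30
ANNUAL_MILES = 12000

def annual_emissions_kg(temp_penalty, grid_kg_co2_per_kwh):
    # temp_penalty: fractional increase in consumption from climate control
    # and cold-battery losses; grid intensity in kg CO2 per kWh
    kwh = BASE_KWH_PER_MILE * (1 + temp_penalty) * ANNUAL_MILES
    return kwh * grid_kg_co2_per_kwh

pacific = annual_emissions_kg(temp_penalty=0.00, grid_kg_co2_per_kwh=0.25)
midwest = annual_emissions_kg(temp_penalty=0.15, grid_kg_co2_per_kwh=0.75)
print(round(pacific), round(midwest), round(midwest / pacific, 2))
```

Even with a modest 15% consumption penalty, the grid-intensity gap dominates the regional difference in use-phase emissions.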
NASA Astrophysics Data System (ADS)
Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; Morley, Steven K.; Ozturk, Dogacan Su
2017-12-01
We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, SYM-H, AL, and cross-polar cap potential (CPCP). We find that the model does an excellent job of predicting the SYM-H index, with a root-mean-square error (RMSE) of 17-18 nT. Kp is predicted well during storm time conditions but overpredicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT. However, the model reaches the largest negative AL values significantly less often than the observations. The model tended to overpredict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence for strongly negative AL values. The use of the inner magnetosphere component, however, affected results significantly, with all quantities except CPCP improved notably when the inner magnetosphere model was on.
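The RMSE figures quoted above are a standard accuracy metric for paired model-observation series. A minimal sketch in Python, using made-up SYM-H values rather than the study's data:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired model and observation series."""
    if len(predicted) != len(observed):
        raise ValueError("series must be the same length")
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

# Illustrative SYM-H values (nT); not the actual study data.
model = [-12.0, -35.0, -60.0, -20.0]
obs = [-10.0, -30.0, -75.0, -25.0]
print(round(rmse(model, obs), 1))  # → 8.4
```

The same function applies unchanged to Kp, AL, or CPCP series once the model output is interpolated onto the observation times.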
The effect of column purification on cDNA indirect labelling for microarrays
Molas, M Lia; Kiss, John Z
2007-01-01
Background The reproducibility of microarray results depends on the performance of standardized procedures. Since the introduction of microarray technology for the analysis of global gene expression, reproducibility of results among different laboratories has been a major problem. Two of the main contributors to this variability are the use of different microarray platforms and different laboratory practices. In this paper, we address the latter question in terms of how variation in one of the steps of a labelling procedure affects the cDNA product prior to microarray hybridization. Results We used a standard procedure to label cDNA for microarray hybridization and employed different types of column chromatography for cDNA purification. After purifying labelled cDNA, we used the Agilent 2100 Bioanalyzer and agarose gel electrophoresis to assess the quality of the labelled cDNA before its hybridization onto a microarray platform. There were major differences in the cDNA profile (i.e. cDNA fragment lengths and abundance) as a result of using four different columns for purification. In addition, different columns have different efficiencies in removing rRNA contamination. This study indicates that the appropriate column to use in this type of protocol has to be determined experimentally. Finally, we present new evidence establishing the importance of testing the method of purification used during an indirect labelling procedure. Our results confirm the importance of assessing the quality of the sample in the labelling procedure prior to hybridization onto a microarray platform. Conclusion Standardization of the column purification systems used in labelling procedures will improve the reproducibility of microarray results among different laboratories. In addition, implementation of a quality-control checkpoint for labelled samples prior to microarray hybridization will prevent hybridizing a poor-quality sample to expensive microarrays. PMID:17597522
The effect of column purification on cDNA indirect labelling for microarrays.
Molas, M Lia; Kiss, John Z
2007-06-27
The reproducibility of microarray results depends on the performance of standardized procedures. Since the introduction of microarray technology for the analysis of global gene expression, reproducibility of results among different laboratories has been a major problem. Two of the main contributors to this variability are the use of different microarray platforms and different laboratory practices. In this paper, we address the latter question in terms of how variation in one of the steps of a labelling procedure affects the cDNA product prior to microarray hybridization. We used a standard procedure to label cDNA for microarray hybridization and employed different types of column chromatography for cDNA purification. After purifying labelled cDNA, we used the Agilent 2100 Bioanalyzer and agarose gel electrophoresis to assess the quality of the labelled cDNA before its hybridization onto a microarray platform. There were major differences in the cDNA profile (i.e. cDNA fragment lengths and abundance) as a result of using four different columns for purification. In addition, different columns have different efficiencies in removing rRNA contamination. This study indicates that the appropriate column to use in this type of protocol has to be determined experimentally. Finally, we present new evidence establishing the importance of testing the method of purification used during an indirect labelling procedure. Our results confirm the importance of assessing the quality of the sample in the labelling procedure prior to hybridization onto a microarray platform. Standardization of the column purification systems used in labelling procedures will improve the reproducibility of microarray results among different laboratories. In addition, implementation of a quality-control checkpoint for labelled samples prior to microarray hybridization will prevent hybridizing a poor-quality sample to expensive microarrays.
McCoy, Gary R; Touzet, Nicolas; Fleming, Gerard T A; Raine, Robin
2015-07-01
The toxic microalgal species Prymnesium parvum and Prymnesium polylepis are responsible for numerous fish kills, causing economic stress on the aquaculture industry, and, through the consumption of contaminated shellfish, can potentially impact human health. Monitoring of toxic phytoplankton is traditionally carried out by light microscopy. However, molecular methods of identification and quantification are becoming more commonplace. This study documents the optimisation of the novel Microarrays for the Detection of Toxic Algae (MIDTAL) microarray from its initial stages to the final commercial version now available from Microbia Environnement (France). Existing oligonucleotide probes used in whole-cell fluorescent in situ hybridisation (FISH) for Prymnesium species, from higher-group probes to species-level probes, were adapted and tested on the first-generation microarray. The combination and interaction of numerous other probes specific for a whole range of phytoplankton taxa also spotted on the chip surface caused high cross-reactivity, resulting in false-positive results on the microarray. The probe sequences were extended for the subsequent second-generation microarray, and further adaptations of the hybridisation protocol and incubation temperatures significantly reduced false-positive readings from the first to the second-generation chip, thereby increasing the specificity of the MIDTAL microarray. Additional refinement of the subsequent third-generation microarray protocols with the addition of a poly-T amino linker to the 5' end of each probe further enhanced the microarray performance but also highlighted the importance of optimising RNA labelling efficiency when testing with natural seawater samples from Killary Harbour, Ireland.
The Glycan Microarray Story from Construction to Applications.
Hyun, Ji Young; Pai, Jaeyoung; Shin, Injae
2017-04-18
Not only are glycan-mediated binding processes in cells and organisms essential for a wide range of physiological processes, but they are also implicated in various pathological processes. As a result, elucidation of glycan-associated biomolecular interactions and their consequences is of great importance in basic biological research and biomedical applications. In 2002, we and others were the first to utilize glycan microarrays in efforts aimed at the rapid analysis of glycan-associated recognition events. Because they contain a number of glycans immobilized in a dense and orderly manner on a solid surface, glycan microarrays enable multiple parallel analyses of glycan-protein binding events while utilizing only small amounts of glycan samples. Therefore, this microarray technology has become a leading edge tool in studies aimed at elucidating roles played by glycans and glycan binding proteins in biological systems. In this Account, we summarize our efforts on the construction of glycan microarrays and their applications in studies of glycan-associated interactions. Immobilization strategies of functionalized and unmodified glycans on derivatized glass surfaces are described. Although others have developed immobilization techniques, our efforts have focused on improving the efficiencies and operational simplicity of microarray construction. The microarray-based technology has been most extensively used for rapid analysis of the glycan binding properties of proteins. In addition, glycan microarrays have been employed to determine glycan-protein interactions quantitatively, detect pathogens, and rapidly assess substrate specificities of carbohydrate-processing enzymes. More recently, the microarrays have been employed to identify functional glycans that elicit cell surface lectin-mediated cellular responses. 
Owing to these efforts, it is now possible to use glycan microarrays to expand the understanding of roles played by glycans and glycan binding proteins in biological systems.
Two-Dimensional VO2 Mesoporous Microarrays for High-Performance Supercapacitor
NASA Astrophysics Data System (ADS)
Fan, Yuqi; Ouyang, Delong; Li, Bao-Wen; Dang, Feng; Ren, Zongming
2018-05-01
Two-dimensional (2D) mesoporous VO2 microarrays have been prepared using an organic-inorganic liquid interface. The units of the microarrays consist of needle-like VO2 particles with a mesoporous structure, in which crack-like pores with a pore size of about 2 nm and a depth of 20-100 nm are distributed on the particle surface. The liquid interface acts as a template for the formation of the 2D microarrays, as identified from the kinetic observations. Due to the mesoporous structure of the units and the high conductivity of the microarray, these 2D VO2 microarrays exhibit a high specific capacitance of 265 F/g at 1 A/g, excellent rate capability (182 F/g at 10 A/g), and good cycling stability, demonstrating the effect of the unique microstructure in improving the electrochemical performance.
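Specific-capacitance figures like those above typically follow from the standard galvanostatic discharge relation C_sp = I·Δt/(m·ΔV). A minimal sketch, where the 1.0 V window and 1 mg electrode mass are hypothetical values chosen for illustration, not taken from the paper:

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """C_sp = I * dt / (m * dV), in F/g, from a galvanostatic discharge curve."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Hypothetical numbers chosen to reproduce the reported 265 F/g at 1 A/g:
# a 1 mg electrode at 1 A/g draws 1 mA; a 265 s discharge over a 1.0 V window.
print(specific_capacitance(0.001, 265.0, 0.001, 1.0))  # → 265.0
```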
Plant-pathogen interactions: what microarray tells about it?
Lodha, T D; Basak, J
2012-01-01
Plant defense responses are mediated by elementary regulatory proteins that affect expression of thousands of genes. Over the last decade, microarray technology has played a key role in deciphering the underlying networks of gene regulation in plants that lead to a wide variety of defence responses. Microarray is an important tool to quantify and profile the expression of thousands of genes simultaneously, with two main aims: (1) gene discovery and (2) global expression profiling. Several microarray technologies are currently in use; most include a glass-slide platform with spotted cDNA or oligonucleotides. To date, microarray technology has been used to identify regulatory genes and end-point defence genes, and to understand the signal transduction processes underlying disease resistance and its intimate links to other physiological pathways. Microarray technology can be used for in-depth, simultaneous profiling of host/pathogen genes as the disease progresses from infection to resistance/susceptibility at different developmental stages of the host, which can be done in different environments, for a clearer understanding of the processes involved. A thorough knowledge of plant disease resistance, gained through a successful combination of microarray and other high-throughput techniques, as well as biochemical, genetic, and cell biological experiments, is needed for practical application to secure and stabilize the yield of many crop plants. This review starts with a brief introduction to microarray technology, followed by the basics of plant-pathogen interaction, the use of DNA microarrays over the last decade to unravel the mysteries of plant-pathogen interaction, and ends with the future prospects of this technology.
Clustering-based spot segmentation of cDNA microarray images.
Uslan, Volkan; Bucak, Ihsan Ömür
2010-01-01
Microarrays are utilized because they provide useful information about thousands of gene expressions simultaneously. In this study, the segmentation step of microarray image processing has been implemented. Clustering-based methods, fuzzy c-means and k-means, have been applied for the segmentation step, which separates the spots from the background. The experiments show that fuzzy c-means segmented the spots of the microarray image more accurately than k-means.
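The fuzzy c-means idea behind this kind of spot/background separation can be sketched on 1-D pixel intensities. This is an illustration of the general algorithm with synthetic values, not the authors' implementation:

```python
def fcm_two_cluster(values, m=2.0, iters=50):
    """Minimal two-cluster fuzzy c-means on 1-D pixel intensities.
    Returns the two cluster centers, sorted (background first)."""
    centers = [min(values), max(values)]  # crude initialisation at the extremes
    for _ in range(iters):
        # soft memberships: u[i][k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = []
        for v in values:
            d = [abs(v - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(2)) for k in range(2)])
        # centers: membership-weighted means of the intensities
        centers = [sum(ui[k] ** m * v for ui, v in zip(u, values)) /
                   sum(ui[k] ** m for ui in u) for k in range(2)]
    return sorted(centers)

# Synthetic row of pixels: dim background plus a bright spot.
pixels = [8, 10, 11, 9, 12, 198, 205, 210, 195, 202]
bg, spot = fcm_two_cluster(pixels)
```

Unlike k-means, every pixel keeps a graded membership in both clusters, which is what makes the method more forgiving of the blurred spot edges typical of cDNA images.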
A perspective on microarrays: current applications, pitfalls, and potential uses
Jaluria, Pratik; Konstantopoulos, Konstantinos; Betenbaugh, Michael; Shiloach, Joseph
2007-01-01
With advances in robotics, computational capabilities, and the fabrication of high-quality glass slides coinciding with increased genomic information being available on public databases, microarray technology is increasingly being used in laboratories around the world. In fact, fields as varied as toxicology, evolutionary biology, drug development and production, disease characterization, diagnostics development, cellular physiology and stress responses, and forensics have benefited from its use. However, for many researchers not familiar with microarrays, current articles and reviews often address neither the fundamental principles behind the technology nor the proper designing of experiments. Although microarray technology is conceptually simple, its practice does require careful planning and a detailed understanding of the limitations inherently present. Without these considerations, it can be exceedingly difficult to ascertain valuable information from microarray data. Therefore, this text aims to outline key features in microarray technology, paying particular attention to current applications as outlined in recent publications, experimental design, statistical methods, and potential uses. Furthermore, this review is not meant to be comprehensive, but rather substantive, highlighting important concepts and detailing steps necessary to conduct and interpret microarray experiments. Collectively, the information included in this text will highlight the versatility of microarray technology and provide a glimpse of what the future may hold. PMID:17254338
A Platform for Combined DNA and Protein Microarrays Based on Total Internal Reflection Fluorescence
Asanov, Alexander; Zepeda, Angélica; Vaca, Luis
2012-01-01
We have developed a novel microarray technology based on total internal reflection fluorescence (TIRF) in combination with DNA and protein bioassays immobilized at the TIRF surface. Unlike conventional microarrays that exhibit reduced signal-to-background ratio, require several stages of incubation, rinsing and stringency control, and measure only end-point results, our TIRF microarray technology provides several orders of magnitude better signal-to-background ratio, performs analysis rapidly in one step, and measures the entire course of association and dissociation kinetics between target DNA and protein molecules and the bioassays. In many practical cases detection of only DNA or protein markers alone does not provide the necessary accuracy for diagnosing a disease or detecting a pathogen. Here we describe TIRF microarrays that detect DNA and protein markers simultaneously, which reduces the probabilities of false responses. Supersensitive and multiplexed TIRF DNA and protein microarray technology may provide a platform for accurate diagnosis or enhanced research studies. Our TIRF microarray system can be mounted on upright or inverted microscopes or interfaced directly with CCD cameras equipped with a single objective, facilitating the development of portable devices. As proof-of-concept we applied TIRF microarrays for detecting molecular markers from Bacillus anthracis, the pathogen responsible for anthrax. PMID:22438738
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
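The classical Haar-Fisz transform that DDHFm builds on can be sketched briefly. The version below uses the Poisson-motivated sqrt(mean) denominator; the data-driven variant described in the abstract instead estimates the mean-variance function from the replicates, so this is an illustrative sketch, not the DDHFm package:

```python
def haar_fisz(x):
    """Poisson-motivated Haar-Fisz variance stabilisation.
    Input length must be a power of two."""
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    s = list(x)
    details = []  # per-scale Fisz-normalised detail coefficients
    while len(s) > 1:
        smooth = [(s[2 * i] + s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
        detail = [(s[2 * i] - s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
        # Fisz step: divide each detail by sqrt of the local mean
        fisz = [d / (m ** 0.5) if m > 0 else 0.0 for d, m in zip(detail, smooth)]
        details.append(fisz)
        s = smooth
    # inverse Haar transform using the normalised details
    rec = s
    for fisz in reversed(details):
        rec = [v for m, d in zip(rec, fisz) for v in (m + d, m - d)]
    return rec
```

Flat signals pass through unchanged, while fluctuations around large means are shrunk relative to fluctuations around small means, pulling the noise toward constant variance.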
Validation of MIMGO: a method to identify differentially expressed GO terms in a microarray dataset
2012-01-01
Background We previously proposed an algorithm for the identification of GO terms that commonly annotate genes whose expression is upregulated or downregulated in some microarray data compared with in other microarray data. We call these “differentially expressed GO terms” and have named the algorithm “matrix-assisted identification method of differentially expressed GO terms” (MIMGO). MIMGO can also identify microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. However, MIMGO has not yet been validated on a real microarray dataset using all available GO terms. Findings We combined Gene Set Enrichment Analysis (GSEA) with MIMGO to identify differentially expressed GO terms in a yeast cell cycle microarray dataset. GSEA followed by MIMGO (GSEA + MIMGO) correctly identified (p < 0.05) microarray data in which genes annotated to differentially expressed GO terms are upregulated. We found that GSEA + MIMGO was slightly less effective than, or comparable to, GSEA (Pearson), a method that uses Pearson’s correlation as a metric, at detecting true differentially expressed GO terms. However, unlike other methods including GSEA (Pearson), GSEA + MIMGO can comprehensively identify the microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. Conclusions MIMGO is a reliable method to identify differentially expressed GO terms comprehensively. PMID:23232071
Microintaglio Printing for Soft Lithography-Based in Situ Microarrays
Biyani, Manish; Ichiki, Takanori
2015-01-01
Advances in lithographic approaches to fabricating bio-microarrays have been extensively explored over the last two decades. However, the need for pattern flexibility, a high density, a high resolution, affordability and on-demand fabrication is promoting the development of unconventional routes for microarray fabrication. This review highlights the development and uses of a new molecular lithography approach, called “microintaglio printing technology”, for large-scale bio-microarray fabrication using a microreactor array (µRA)-based chip consisting of uniformly-arranged, femtoliter-size µRA molds. In this method, a single-molecule-amplified DNA microarray pattern is self-assembled onto a µRA mold and subsequently converted into a messenger RNA or protein microarray pattern by simultaneously producing and transferring (immobilizing) a messenger RNA or a protein from a µRA mold to a glass surface. Microintaglio printing allows the self-assembly and patterning of in situ-synthesized biomolecules into high-density (kilo-giga-density), ordered arrays on a chip surface with µm-order precision. This holistic aim, which is difficult to achieve using conventional printing and microarray approaches, is expected to revolutionize and reshape proteomics. This review is not written comprehensively, but rather substantively, highlighting the versatility of microintaglio printing for developing a prerequisite platform for microarray technology for the postgenomic era. PMID:27600226
An Introduction to MAMA (Meta-Analysis of MicroArray data) System.
Zhang, Zhe; Fenstermacher, David
2005-01-01
Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server-side for the storage of microarray datasets collected from various resources. The client-side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. MAMA implementation will integrate several analytical methods, including meta-analysis within an open-source framework offering other developers the flexibility to plug in additional statistical algorithms.
Methods to study legionella transcriptome in vitro and in vivo.
Faucher, Sebastien P; Shuman, Howard A
2013-01-01
The study of transcriptome responses can provide insight into the regulatory pathways and genetic factors that contribute to a specific phenotype. For bacterial pathogens, it can identify putative new virulence systems and shed light on the mechanisms underlying the regulation of virulence factors. Microarrays have been previously used to study gene regulation in Legionella pneumophila. In the past few years a sharp reduction of the costs associated with microarray experiments together with the availability of relatively inexpensive custom-designed commercial microarrays has made microarray technology an accessible tool for the majority of researchers. Here we describe the methodologies to conduct microarray experiments from in vitro and in vivo samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul; Clark, Kara; O'Connell, Matt
Increasing the use of grid-flexibility options (improved grid management, demand response, and energy storage) could enable 25% or higher penetration of PV at low costs (see Denholm et al. 2016). Considering the large-scale integration of solar into electric-power systems complicates the calculation of the value of solar. In fact, a comprehensive examination reveals that the value of solar technologies—or any other power-system technology or operating strategy—can only be understood in the context of the generation system as a whole. This is well illustrated by analysis of curtailment at high PV penetrations within the bulk power and transmission systems. As the deployment of PV increases, it is possible that during some sunny midday periods, due to limited flexibility of conventional generators, system operators would need to reduce (curtail) PV output in order to maintain the crucial balance between electric supply and demand. As a result, PV’s value and cost competitiveness would degrade. For example, for utility-scale PV with a baseline SunShot LCOE of 6¢/kWh, increasing the annual energy demand met by solar energy from 10% to 20% would increase the marginal LCOE of PV from 6¢/kWh to almost 11¢/kWh in a California grid system with limited flexibility. However, this loss of value could be stemmed by increasing system flexibility via enhanced control of variable-generation resources, added energy storage, and the ability to motivate more electricity consumers to shift consumption to lower-demand periods. The combination of these measures would minimize solar curtailment and keep PV cost-competitive at penetrations at least as high as 25%. Efficient deployment of the grid-flexibility options needed to maintain solar’s value will require various innovations, from the development of communication, control, and energy storage technologies to the implementation of new market rules and operating procedures.
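The curtailment effect on marginal LCOE can be illustrated with a back-of-the-envelope relation: if a fraction of an increment's output is curtailed, its cost per delivered kWh rises by the inverse of what remains. This is a simplification of the report's modeling, with a hypothetical curtailment fraction chosen only to land near the cited figures:

```python
def marginal_lcoe(base_lcoe_cents, marginal_curtailment):
    """Marginal LCOE of an increment of PV when a fraction of its output
    is curtailed: useful energy shrinks, so cost per delivered kWh grows."""
    if not 0.0 <= marginal_curtailment < 1.0:
        raise ValueError("curtailment fraction must be in [0, 1)")
    return base_lcoe_cents / (1.0 - marginal_curtailment)

# Hypothetical: ~45% marginal curtailment lifts a 6 cent/kWh baseline
# to roughly the 11 cent/kWh figure cited for an inflexible grid.
print(round(marginal_lcoe(6.0, 0.45), 1))  # → 10.9
```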
GeneXplorer: an interactive web application for microarray data visualization and analysis.
Rees, Christian A; Demeter, Janos; Matese, John C; Botstein, David; Sherlock, Gavin
2004-10-01
When publishing large-scale microarray datasets, it is of great value to create supplemental websites where either the full data, or selected subsets corresponding to figures within the paper, can be browsed. We set out to create a CGI application containing many of the features of some of the existing standalone software for the visualization of clustered microarray data. We present GeneXplorer, a web application for interactive microarray data visualization and analysis in a web environment. GeneXplorer allows users to browse a microarray dataset in an intuitive fashion. It provides simple access to microarray data over the Internet and uses only HTML and JavaScript to display graphic and annotation information. It provides radar and zoom views of the data, allows display of the nearest neighbors to a gene expression vector based on their Pearson correlations and provides the ability to search gene annotation fields. The software is released under the permissive MIT Open Source license, and the complete documentation and the entire source code are freely available for download from CPAN http://search.cpan.org/dist/Microarray-GeneXplorer/.
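GeneXplorer's nearest-neighbor display ranks genes by the Pearson correlation of their expression vectors with a query vector. A minimal sketch of that ranking with toy data (not GeneXplorer's actual code):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def nearest_neighbors(query, genes, k=2):
    """Return the k gene names most correlated with the query vector."""
    ranked = sorted(genes.items(), key=lambda g: pearson(query, g[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

profiles = {  # toy expression vectors, not real data
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.0, 4.0, 6.0, 8.0],  # same shape as the query, scaled
    "geneC": [4.0, 3.0, 2.0, 1.0],  # anti-correlated
}
print(nearest_neighbors([1.0, 2.0, 3.0, 4.0], profiles))
```

Correlation-based ranking is scale-invariant, which is why geneB ranks as high as geneA despite its doubled magnitudes.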
Fluorescence-based bioassays for the detection and evaluation of food materials.
Nishi, Kentaro; Isobe, Shin-Ichiro; Zhu, Yun; Kiyama, Ryoiti
2015-10-13
We summarize here the recent progress in fluorescence-based bioassays for the detection and evaluation of food materials by focusing on fluorescent dyes used in bioassays and applications of these assays for food safety, quality and efficacy. Fluorescent dyes have been used in various bioassays, such as biosensing, cell assay, energy transfer-based assay, probing, protein/immunological assay and microarray/biochip assay. Among the arrays used in microarray/biochip assay, fluorescence-based microarrays/biochips, such as antibody/protein microarrays, bead/suspension arrays, capillary/sensor arrays, DNA microarrays/polymerase chain reaction (PCR)-based arrays, glycan/lectin arrays, immunoassay/enzyme-linked immunosorbent assay (ELISA)-based arrays, microfluidic chips and tissue arrays, have been developed and used for the assessment of allergy/poisoning/toxicity, contamination and efficacy/mechanism, and quality control/safety. DNA microarray assays have been used widely for food safety and quality as well as searches for active components. DNA microarray-based gene expression profiling may be useful for such purposes due to its advantages in the evaluation of pathway-based intracellular signaling in response to food materials.
Fluorescence-Based Bioassays for the Detection and Evaluation of Food Materials
Nishi, Kentaro; Isobe, Shin-Ichiro; Zhu, Yun; Kiyama, Ryoiti
2015-01-01
We summarize here the recent progress in fluorescence-based bioassays for the detection and evaluation of food materials by focusing on fluorescent dyes used in bioassays and applications of these assays for food safety, quality and efficacy. Fluorescent dyes have been used in various bioassays, such as biosensing, cell assay, energy transfer-based assay, probing, protein/immunological assay and microarray/biochip assay. Among the arrays used in microarray/biochip assay, fluorescence-based microarrays/biochips, such as antibody/protein microarrays, bead/suspension arrays, capillary/sensor arrays, DNA microarrays/polymerase chain reaction (PCR)-based arrays, glycan/lectin arrays, immunoassay/enzyme-linked immunosorbent assay (ELISA)-based arrays, microfluidic chips and tissue arrays, have been developed and used for the assessment of allergy/poisoning/toxicity, contamination and efficacy/mechanism, and quality control/safety. DNA microarray assays have been used widely for food safety and quality as well as searches for active components. DNA microarray-based gene expression profiling may be useful for such purposes due to its advantages in the evaluation of pathway-based intracellular signaling in response to food materials. PMID:26473869
Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas
2016-09-19
Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.
CO2 abatement costs of greenhouse gas (GHG) mitigation by different biogas conversion pathways.
Rehl, T; Müller, J
2013-01-15
Biogas will be of increasing importance in the future as a factor in reducing greenhouse gas emissions cost-efficiently by the optimal use of available resources and technologies. The goal of this study was to identify the most ecological and economical use of a given resource (organic waste from residential, commercial and industry sectors) using one specific treatment technology (anaerobic digestion) but applying different energy conversion technologies. Average and marginal abatement costs were calculated based on Life Cycle Cost (LCC) and Life Cycle Assessment (LCA) methodologies. Eight new biogas systems producing electricity, heat, gas or automotive fuel were analyzed in order to identify the most cost-efficient way of reducing GHG emissions. A system using a combined heat and power station (which is connected to waste treatment and digestion operation facilities and located near potential residential, commercial or industrial heat users) was found to be the most cost-efficient biogas technology for reducing GHG emissions. Up to € 198 per tonne of CO(2) equivalents can be saved by replacing the "business as usual" systems based on fossil resources with ones based on biogas. Limited gas injection (desulfurized and dried biogas, without compression and upgrading) into the gas grid can also be a viable option with an abatement cost saving of € 72 per tonne of CO(2) equivalents, while a heating plant with a district heating grid or a system based on biogas results in higher abatement costs (€ 267 and € 270 per tonne CO(2) eq). Results from all systems are significantly influenced by whether average or marginal data are used as a reference. Besides energy efficiency, the reference system being replaced, the by-products, and the feedstock and investment costs were identified as parameters with major impacts on abatement costs.
The quantitative analysis was completed by a discussion of the role that abatement cost methodology can play in decision-making. Copyright © 2012 Elsevier Ltd. All rights reserved.
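The average abatement cost described above can be sketched as the extra life-cycle cost of the biogas pathway divided by the emissions it avoids relative to the fossil reference. A minimal illustration follows; all numeric values are hypothetical placeholders, not figures from the study.

```python
# Sketch of an average abatement-cost calculation (EUR per tonne CO2-eq).
# Inputs are hypothetical: life-cycle costs (LCC) and life-cycle GHG
# emissions (LCA) per functional unit for a biogas pathway and its
# fossil "business as usual" reference.

def average_abatement_cost(lcc_biogas, lcc_fossil, ghg_biogas, ghg_fossil):
    """Extra life-cycle cost divided by avoided emissions."""
    delta_cost = lcc_biogas - lcc_fossil   # EUR per functional unit
    delta_ghg = ghg_fossil - ghg_biogas    # t CO2-eq avoided per unit
    if delta_ghg <= 0:
        raise ValueError("pathway does not reduce emissions")
    return delta_cost / delta_ghg

# Hypothetical example: the biogas option costs 40 EUR more per unit
# but avoids 0.5 t CO2-eq per unit.
print(average_abatement_cost(120.0, 80.0, 0.1, 0.6))  # 80.0
```

A negative result would correspond to a net saving, as reported for the CHP and gas-injection pathways above.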
NASA Astrophysics Data System (ADS)
Ramage, J. M.; Brodzik, M. J.; Hardman, M.
2016-12-01
Passive microwave (PM) 18 GHz and 36 GHz horizontally- and vertically-polarized brightness temperatures (Tb) channels from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) have been important sources of information about snow melt status in glacial environments, particularly at high latitudes. PM data are sensitive to the changes in near-surface liquid water that accompany melt onset, melt intensification, and refreezing. Overpasses are frequent enough that in most areas multiple (2-8) observations per day are possible, yielding the potential for determining the dynamic state of the snow pack during transition seasons. AMSR-E Tb data have been used effectively to determine melt onset and melt intensification using daily Tb and diurnal amplitude variation (DAV) thresholds. Due to mixed pixels in historically coarse spatial resolution Tb data, melt analysis has been impractical in ice-marginal zones where pixels may be only fractionally snow/ice covered, and in areas where the glacier is near large bodies of water: even small regions of open water in a pixel severely impact the microwave signal. We use the new enhanced-resolution Calibrated Passive Microwave Daily EASE-Grid 2.0 Brightness Temperature (CETB) Earth System Data Record product's twice-daily observations to test and update existing snow melt algorithms by determining appropriate melt thresholds for both Tb and DAV for the CETB 18 and 36 GHz channels. We use the enhanced resolution data to evaluate melt characteristics along glacier margins and melt transition zones during the melt seasons in locations spanning a wide range of melt scenarios, including the Patagonian Andes, the Alaskan Coast Range, and the Russian High Arctic icecaps. We quantify how improvement of spatial resolution from the original 12.5 - 25 km-scale pixels to the enhanced resolution of 3.125 - 6.25 km improves the ability to evaluate melt timing across boundaries and transition zones in diverse glacial environments.
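The threshold-based melt detection described above can be sketched with a few lines of code: a day is flagged as melting when either the brightness temperature or the diurnal amplitude variation (DAV) exceeds its threshold. The threshold values below are illustrative assumptions, not the calibrated values from the study.

```python
# Hedged sketch of Tb/DAV melt flagging from twice-daily 36 GHz
# brightness temperatures. Thresholds are assumed for illustration only.

TB_MELT_K = 252.0   # assumed Tb melt threshold (kelvin)
DAV_MELT_K = 18.0   # assumed DAV melt threshold (kelvin)

def melt_flags(tb_am, tb_pm):
    """Flag each day as melt if the daily max Tb or the diurnal
    amplitude variation (|evening - morning|) exceeds its threshold."""
    flags = []
    for am, pm in zip(tb_am, tb_pm):
        dav = abs(pm - am)
        flags.append(max(am, pm) >= TB_MELT_K or dav >= DAV_MELT_K)
    return flags

print(melt_flags([230.0, 240.0, 255.0], [235.0, 260.0, 258.0]))
# [False, True, True]
```

The enhanced 3.125 - 6.25 km grids change only the pixels this logic is applied to, not the logic itself.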
A Spatially and Temporally Continuous LFE Catalogue for the Southern Alps, New Zealand
NASA Astrophysics Data System (ADS)
Chamberlain, C. J.; Townend, J.; Baratin, L. M.
2015-12-01
Using a brightness-based beamforming approach coupled with a matched-filter correlation method, we have developed a 6.5 year record of low-frequency earthquakes (LFEs) occurring on and near the deep extent of New Zealand's Alpine Fault. Our brightness template detection method, based on that of Frank et al. (2014), scans a pre-determined grid of possible seismic sources to automatically find LFE templates based on the stack of bandpassed squared seismic data. Previous work (Wech et al., 2012, Chamberlain et al., 2014) has shown that the depths of standard seismicity are anti-correlated with those of tremor and LFEs in the central Southern Alps: hence, by careful grid selection, shallow seismic sources can effectively be discriminated against. This beamforming approach produces many (>900) possible events. Initial beamforming detections are grouped by moveout and stacked to produce a subset of higher-quality events for use as templates in a cross-correlation detector. Events detected by cross-correlation are stacked to increase their signal-to-noise characteristics before being located using a 3D velocity model. This method produces a spatially and temporally continuous catalogue of LFEs throughout the 6.5 year study period. The catalogue highlights quasi-continuous slow deformation occurring beneath the seismogenic zone near the Alpine Fault, punctuated by periods of increased LFE generation associated with tremor, and following large regional earthquakes. To date we have found no evidence of LFE generation north-east of Mt. Cook, the highest point in the Southern Alps, despite systematic searching throughout the region. We suggest that the along-strike cessation of tremor is due to changes in the fault's dip and the hypothesised presence of partially subducted passive margin material. This remnant passive margin would lie beneath the tremor-generating region and has been linked to along-strike changes in subcrustal earthquake distributions (Boese et al., 2013).
Microfluidic microarray systems and methods thereof
West, Jay A. A. [Castro Valley, CA; Hukari, Kyle W [San Ramon, CA; Hux, Gary A [Tracy, CA
2009-04-28
Disclosed are systems that include a manifold in fluid communication with a microfluidic chip having a microarray, an illuminator, and a detector in optical communication with the microarray. Methods for using these systems for biological detection are also disclosed.
cDNA Microarray Screening in Food Safety
ROY, SASHWATI; SEN, CHANDAN K
2009-01-01
The cDNA microarray technology and related bioinformatics tools present a wide range of novel application opportunities. The technology may be productively applied to address food safety. In this mini-review article, we present an update highlighting the late-breaking discoveries that demonstrate the vitality of cDNA microarray technology as a tool to analyze food safety with reference to microbial pathogens and genetically modified foods. In order to bring the microarray technology to mainstream food safety, it is important to develop robust user-friendly tools that may be applied in a field setting. In addition, there needs to be a standardized process for regulatory agencies to interpret and act upon microarray-based data. The cDNA microarray approach is an emergent technology in diagnostics. Its value lies in being able to provide complementary molecular insight when employed in addition to traditional tests for food safety, as part of a more comprehensive battery of tests. PMID:16466843
Li, Zhiguang; Kwekel, Joshua C; Chen, Tao
2012-01-01
Functional comparison across microarray platforms is used to assess the comparability or similarity of the biological relevance associated with the gene expression data generated by multiple microarray platforms. Comparisons at the functional level are very important considering that the ultimate purpose of microarray technology is to determine the biological meaning behind the gene expression changes under a specific condition, not just to generate a list of genes. Herein, we present a method named percentage of overlapping functions (POF) and illustrate how it is used to perform the functional comparison of microarray data generated across multiple platforms. This method facilitates the determination of functional differences or similarities in microarray data generated from multiple array platforms across all the functions that are presented on these platforms. This method can also be used to compare the functional differences or similarities between experiments, projects, or laboratories.
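The cross-platform comparison above reduces to measuring how much two platforms' sets of affected functions overlap. The abstract does not give the exact POF formula, so the sketch below assumes a simple overlap-over-union percentage; the function names are hypothetical.

```python
# Assumed sketch of a "percentage of overlapping functions" (POF) score
# between two microarray platforms. The authors' exact definition is not
# stated in the abstract; this is a Jaccard-style variant for illustration.

def pof(functions_a, functions_b):
    """Overlap of flagged functions, as a percentage of all functions
    flagged on either platform."""
    a, b = set(functions_a), set(functions_b)
    union = a | b
    if not union:
        return 0.0
    return 100.0 * len(a & b) / len(union)

print(pof({"apoptosis", "cell cycle", "DNA repair"},
          {"apoptosis", "cell cycle", "lipid metabolism"}))  # 50.0
```

The same score can be computed between experiments, projects, or laboratories, as the abstract notes.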
ArrayNinja: An Open Source Platform for Unified Planning and Analysis of Microarray Experiments.
Dickson, B M; Cornett, E M; Ramjan, Z; Rothbart, S B
2016-01-01
Microarray-based proteomic platforms have emerged as valuable tools for studying various aspects of protein function, particularly in the field of chromatin biochemistry. Microarray technology itself is largely unrestricted in regard to printable material and platform design, and efficient multidimensional optimization of assay parameters requires fluidity in the design and analysis of custom print layouts. This motivates the need for streamlined software infrastructure that facilitates the combined planning and analysis of custom microarray experiments. To this end, we have developed ArrayNinja as a portable, open source, and interactive application that unifies the planning and visualization of microarray experiments and provides maximum flexibility to end users. Array experiments can be planned, stored to a private database, and merged with the imaged results for a level of data interaction and centralization that is not currently attainable with available microarray informatics tools. © 2016 Elsevier Inc. All rights reserved.
Emerging Use of Gene Expression Microarrays in Plant Physiology
Wullschleger, Stan D.; Difazio, Stephen P.
2003-01-01
Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.
Profiling In Situ Microbial Community Structure with an Amplification Microarray
Knickerbocker, Christopher; Bryant, Lexi; Golova, Julia; Wiles, Cory; Williams, Kenneth H.; Peacock, Aaron D.; Long, Philip E.
2013-01-01
The objectives of this study were to unify amplification, labeling, and microarray hybridization chemistries within a single, closed microfluidic chamber (an amplification microarray) and verify technology performance on a series of groundwater samples from an in situ field experiment designed to compare U(VI) mobility under conditions of various alkalinities (as HCO3−) during stimulated microbial activity accompanying acetate amendment. Analytical limits of detection were between 2 and 200 cell equivalents of purified DNA. Amplification microarray signatures were well correlated with 16S rRNA-targeted quantitative PCR results and hybridization microarray signatures. The succession of the microbial community was evident with and consistent between the two microarray platforms. Amplification microarray analysis of acetate-treated groundwater showed elevated levels of iron-reducing bacteria (Flexibacter, Geobacter, Rhodoferax, and Shewanella) relative to the average background profile, as expected. Identical molecular signatures were evident in the transect treated with acetate plus NaHCO3, but at much lower signal intensities and with a much more rapid decline (to nondetection). Azoarcus, Thauera, and Methylobacterium were responsive in the acetate-only transect but not in the presence of bicarbonate. Observed differences in microbial community composition or response to bicarbonate amendment likely had an effect on measured rates of U reduction, with higher rates probable in the part of the field experiment that was amended with bicarbonate. The simplification in microarray-based workflow is a significant technological advance toward entirely closed-amplicon microarray-based tests and is generally extensible to any number of environmental monitoring applications. PMID:23160129
PRACTICAL STRATEGIES FOR PROCESSING AND ANALYZING SPOTTED OLIGONUCLEOTIDE MICROARRAY DATA
Thoughtful data analysis is as important as experimental design, biological sample quality, and appropriate experimental procedures for making microarrays a useful supplement to traditional toxicology. In the present study, spotted oligonucleotide microarrays were used to profile...
DNA Microarray-based Ecotoxicological Biomarker Discovery in a Small Fish Model Species
This paper addresses several issues critical to use of zebrafish oligonucleotide microarrays for computational toxicology research on endocrine disrupting chemicals using small fish models, and more generally, the use of microarrays in aquatic toxicology.
IMPROVING THE RELIABILITY OF MICROARRAYS FOR TOXICOLOGY RESEARCH: A COLLABORATIVE APPROACH
Microarray-based gene expression profiling is a critical tool to identify molecular biomarkers of specific chemical stressors. Although current microarray technologies have progressed from their infancy, biological and technical repeatability and reliability are often still limit...
NASA Astrophysics Data System (ADS)
Michaud, François; Calmus, Thierry; Ratzov, Gueorgui; Royer, Jean-Yves; Sosson, Marc; Bigot-Cormier, Florence; Bandy, William; Mortera Gutiérrez, Carlos
2011-08-01
The relative motion of the Pacific plate with respect to the North America plate is partitioned between transcurrent faults located along the western margin of Baja California and transform faults and spreading ridges in the Gulf of California. However, the amount of right lateral offset along the Baja California western margin is still debated. We revisited multibeam swath bathymetry data along the southern end of the Tosco-Abreojos fault system. In this area the depths are less than 1,000 m and allow a finer gridding at 60 m cell spacing. This improved resolution unveils several transcurrent right lateral faults offsetting the seafloor and canyons, which can be used as markers to quantify local offsets. The seafloor of the southern end of the Tosco-Abreojos fault system (south of 24°N) displays NW-SE elongated bathymetric highs and lows, suggesting a transtensional tectonic regime associated with the formation of pull-apart basins. In such an active tectonic context, submarine canyon networks are unstable. Using the deformation rate inferred from kinematic predictions and pull-apart geometry, we suggest a minimum age for the reorganization of the canyon network.
Statistical use of argonaute expression and RISC assembly in microRNA target identification.
Stanhope, Stephen A; Sengupta, Srikumar; den Boon, Johan; Ahlquist, Paul; Newton, Michael A
2009-09-01
MicroRNAs (miRNAs) posttranscriptionally regulate targeted messenger RNAs (mRNAs) by inducing cleavage or otherwise repressing their translation. We address the problem of detecting m/miRNA targeting relationships in Homo sapiens from microarray data by developing statistical models that are motivated by the biological mechanisms used by miRNAs. The focus of our modeling is the construction, activity, and mediation of RNA-induced silencing complexes (RISCs) competent for targeted mRNA cleavage. We demonstrate that regression models accommodating RISC abundance and controlling for other mediating factors fit the expression profiles of known target pairs substantially better than models based on m/miRNA expressions alone, and lead to verifications of computational target pair predictions that are more sensitive than those based on marginal expression levels. Because our models are fully independent of exogenous results from sequence-based computational methods, they are appropriate for use as either a primary or secondary source of information regarding m/miRNA target pair relationships, especially in conjunction with high-throughput expression studies.
Integrative prescreening in analysis of multiple cancer genomic studies
2012-01-01
Background: In high throughput cancer genomic studies, results from the analysis of single datasets often suffer from a lack of reproducibility because of small sample sizes. Integrative analysis can effectively pool and analyze multiple datasets and provides a cost effective way to improve reproducibility. In integrative analysis, simultaneously analyzing all genes profiled may incur high computational cost. A computationally affordable remedy is prescreening, which fits marginal models, can be conducted in a parallel manner, and has low computational cost. Results: An integrative prescreening approach is developed for the analysis of multiple cancer genomic datasets. Simulation shows that the proposed integrative prescreening has better performance than alternatives, particularly including prescreening with individual datasets, an intensity approach and meta-analysis. We also analyze multiple microarray gene profiling studies on liver and pancreatic cancers using the proposed approach. Conclusions: The proposed integrative prescreening provides an effective way to reduce the dimensionality in cancer genomic studies. It can be coupled with existing analysis methods to identify cancer markers. PMID:22799431
Direct labeling of serum proteins by fluorescent dye for antibody microarray.
Klimushina, M V; Gumanova, N G; Metelskaya, V A
2017-05-06
Analysis of serum proteome by antibody microarray is used to identify novel biomarkers and to study signaling pathways including protein phosphorylation and protein-protein interactions. Labeling of serum proteins is important for optimal performance of the antibody microarray. Proper choice of fluorescent label and optimal concentration of protein loaded on the microarray ensure good quality of imaging that can be reliably scanned and processed by the software. We have optimized direct serum protein labeling using fluorescent dye Arrayit Green 540 (Arrayit Corporation, USA) for antibody microarray. Optimized procedure produces high quality images that can be readily scanned and used for statistical analysis of protein composition of the serum. Copyright © 2017 Elsevier Inc. All rights reserved.
Transfection microarray and the applications.
Miyake, Masato; Yoshikawa, Tomohiro; Fujita, Satoshi; Miyake, Jun
2009-05-01
Microarray transfection has been extensively studied for high-throughput functional analysis of mammalian cells. However, control of efficiency and reproducibility are the critical issues for practical use. By using solid-phase transfection accelerators and a nano-scaffold, we provide a highly efficient and reproducible microarray-transfection device, the "transfection microarray". The device can be applied to the limited numbers of available primary cells and stem cells, not only for large-scale functional analysis but also for reporter-based time-lapse cellular event analysis.
Zivicova, Veronika; Gal, Peter; Mifkova, Alzbeta; Novak, Stepan; Kaltner, Herbert; Kolar, Michal; Strnad, Hynek; Sachova, Jana; Hradilova, Miluse; Chovanec, Martin; Gabius, Hans-Joachim; Smetana, Karel; Fik, Zdenek
2018-03-01
Having previously initiated genome-wide expression profiling in head and neck squamous cell carcinoma (HNSCC) for regions of the tumor, the margin of surgical resection (MSR) and normal mucosa (NM), we here proceed with respective analysis of cases after stratification according to the expression status of tenascin (Ten). Tissue specimens of each anatomical site were analyzed by immunofluorescent detection of Ten, fibronectin (Fn) and galectin-1 (Gal-1) as well as by microarrays. Histopathological examination demonstrated that Ten + Fn + Gal-1 + co-expression occurs more frequently in samples of HNSCC (55%) than in NM (9%; p<0.01). In contrast, the Ten - Fn + Gal-1 - (45%) and Ten - Fn - Gal-1 - (39%) status occurred with significantly (p<0.01) higher frequency than in HNSCC (3% and 4%, respectively). In MSRs, different immunophenotypes were distributed rather equally (Ten + Fn + Gal-1 + =24%; Ten - Fn + Gal-1 - =36%; Ten - Fn - Gal-1 - =33%), differing from the results in tumors (p<0.05). Absence/presence of Ten was used for stratification of patients into cohorts without a difference in prognosis, to comparatively examine gene-activity signatures. Microarray analysis revealed i) expression of several tumor progression-associated genes in Ten + HNSCC tumors and ii) a strong up-regulation of gene expression assigned to lipid metabolism in MSRs of Ten - tumors, while NM profiles remained similar. The presented data reveal marked and specific changes in tumors and MSR specimens of HNSCC without a separation based on prognosis. Copyright © 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briggs, Brandon R; Graw, Michael; Brodie, Eoin L
2013-11-01
The biogeochemical processes that occur in marine sediments on continental margins are complex; however, from one perspective they can be considered with respect to three geochemical zones based on the presence and form of methane: sulfate–methane transition (SMTZ), gas hydrate stability zone (GHSZ), and free gas zone (FGZ). These geochemical zones may harbor distinct microbial communities that are important in biogeochemical carbon cycles. The objective of this study was to describe the microbial communities in sediments from the SMTZ, GHSZ, and FGZ using molecular ecology methods (i.e. PhyloChip microarray analysis and terminal restriction fragment length polymorphism (T-RFLP)) and examining the results in the context of non-biological parameters in the sediments. Non-metric multidimensional scaling and multi-response permutation procedures were used to determine whether microbial community compositions were significantly different in the three geochemical zones and to correlate samples with abiotic characteristics of the sediments. This analysis indicated that microbial communities from all three zones were distinct from one another and that variables such as sulfate concentration, hydrate saturation of the nearest gas hydrate layer, and depth (or unmeasured variables associated with depth e.g. temperature, pressure) were correlated to differences between the three zones. The archaeal anaerobic methanotrophs typically attributed to performing anaerobic oxidation of methane were not detected in the SMTZ; however, the marine benthic group-B, which is often found in SMTZ, was detected. Within the GHSZ, samples that were typically closer to layers that contained higher hydrate saturation had indicator sequences related to Vibrio-type taxa. These results suggest that the biogeographic patterns of microbial communities in marine sediments are distinct based on geochemical zones defined by methane.
Cho, Sung Yoon; Ki, Chang-Seok; Jang, Ja-Hyun; Sohn, Young Bae; Park, Sung Won; Kim, Se Hwa; Kim, Su Jin; Jin, Dong-Kyu
2012-06-01
Patients with Xp deletions have short stature and may have some somatic traits typical of Turner syndrome (TS), whereas gonadal function is generally preserved. In most studies of these patients, microsatellites have been used to determine the break point of the Xp deletion. In the present study, we describe the clinical, cytogenetic, and chromosomal microarray (CMA) analysis of a family with an Xp22.33-Xp22.12 deletion. Two female siblings, aged 8 years 9 months and 11 years 10 months, presented with short stature. The older sibling's height (index case) was 137.9 cm (-1.81 SDS) and the younger sibling's height was 118.6 cm (-2.13 SDS). The mother and both daughters had only a short stature; a skeletal survey showed normal findings except for mildly shortened 4th and 5th metacarpal bones. No features of TS were present. The deletion appeared terminal with a breakpoint within Xp22.2 located about 19.9 Mb from the Xp telomere. The deletion contained 102 protein-coding genes. A probe of the end breakage point was located at the 19,908,986th base of the X chromosome, and a probe of the marginal normal region near the breakage point was located at the 19,910,848th base of the X chromosome. Therefore, the breakage point was concluded to be located between these two probes. In summary, we report a familial case of an Xp deletion. The findings of our study may be helpful in further analyzing the phenotypes associated with Xp deletions. Copyright © 2012 Wiley Periodicals, Inc.
A Human Lectin Microarray for Sperm Surface Glycosylation Analysis *
Sun, Yangyang; Cheng, Li; Gu, Yihua; Xin, Aijie; Wu, Bin; Zhou, Shumin; Guo, Shujuan; Liu, Yin; Diao, Hua; Shi, Huijuan; Wang, Guangyu; Tao, Sheng-ce
2016-01-01
Glycosylation is one of the most abundant and functionally important protein post-translational modifications. As such, technology for efficient glycosylation analysis is in high demand. Lectin microarrays are a powerful tool for such investigations and have been successfully applied for a variety of glycobiological studies. However, most of the current lectin microarrays are primarily constructed from plant lectins, which are not well suited for studies of human glycosylation because of the extreme complexity of human glycans. Herein, we constructed a human lectin microarray with 60 human lectin and lectin-like proteins. All of the lectins and lectin-like proteins were purified from yeast, and most showed binding to human glycans. To demonstrate the applicability of the human lectin microarray, human sperm were probed on the microarray and strong bindings were observed for several lectins, including galectin-1, 7, 8, GalNAc-T6, and ERGIC-53 (LMAN1). These bindings were validated by flow cytometry and fluorescence immunostaining. Further, mass spectrometry analysis showed that galectin-1 binds several membrane-associated proteins including heat shock protein 90. Finally, functional assays showed that binding of galectin-8 could significantly enhance the acrosome reaction in human sperm. To our knowledge, this is the first construction of a human lectin microarray, and we anticipate it will find wide use for a range of human or mammalian studies, alone or in combination with plant lectin microarrays. PMID:27364157
THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL
Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to the fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
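The resampling idea underlying the methods above can be illustrated with a permutation p-value for a single gene's mean difference, which avoids the normality assumption of parametric tests. This is a generic sketch of the resampling principle, not the authors' full empirical Bayes procedure.

```python
# Generic permutation test for one gene: shuffle group labels many times
# and count how often the shuffled mean difference is at least as large
# as the observed one. Purely illustrative of the resampling principle.
import random

def permutation_pvalue(group1, group2, n_perm=5000, seed=0):
    rng = random.Random(seed)
    pooled = list(group1) + list(group2)
    n1 = len(group1)
    observed = abs(sum(group1) / n1 - sum(group2) / len(group2))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n1]) / n1
                   - sum(pooled[n1:]) / len(pooled[n1:]))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one to avoid zero p-values

p = permutation_pvalue([5.1, 4.8, 5.3, 5.0], [3.9, 4.1, 3.8, 4.0])
print(p < 0.05)  # True for this clearly separated example
```

In a full analysis such p-values would then be shrunk and thresholded by the empirical Bayes machinery to control the false discovery rate across thousands of genes.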
Development of a DNA microarray for species identification of quarantine aphids.
Lee, Won Sun; Choi, Hwalran; Kang, Jinseok; Kim, Ji-Hoon; Lee, Si Hyeock; Lee, Seunghwan; Hwang, Seung Yong
2013-12-01
Aphid pests are being brought into Korea as a result of increased crop trading. Aphids exist on growth areas of plants, and thus plant growth is seriously affected by aphid pests. However, aphids are very small and have several sexual morphs and life stages, so it is difficult to identify species on the basis of morphological features. This problem was approached using DNA microarray technology. DNA targets of the cytochrome c oxidase subunit I gene were generated with a fluorescent dye-labelled primer and were hybridised onto a DNA microarray consisting of specific probes. After analysing the signal intensity of the specific probes, the unique patterns from the DNA microarray, consisting of 47 species-specific probes, were obtained to identify 23 aphid species. To confirm the accuracy of the developed DNA microarray, ten individual blind samples were used in blind trials, and the identifications were completely consistent with the sequencing data of all individual blind samples. A microarray has been developed to distinguish aphid species. DNA microarray technology provides a rapid, easy, cost-effective and accurate method for identifying aphid species for pest control management. © 2013 Society of Chemical Industry.
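Species calling from a probe-signal pattern, as in the aphid array above, amounts to thresholding probe intensities and matching the resulting presence/absence pattern against each species' expected signature. The probe names, species set, and threshold below are hypothetical, chosen only to illustrate the matching step.

```python
# Illustrative species calling from microarray probe signals. Each species
# is defined by the probes expected to hybridise; the best-matching
# signature wins. All names and the intensity threshold are assumptions.

SIGNATURES = {
    "Aphis gossypii": {"p01", "p02"},
    "Myzus persicae": {"p03", "p04"},
}

def call_species(signals, threshold=1000.0):
    """Threshold probe intensities, then pick the species whose probe
    signature has the highest fraction of positive probes."""
    positive = {p for p, v in signals.items() if v >= threshold}
    best, best_score = None, 0.0
    for species, probes in SIGNATURES.items():
        score = len(positive & probes) / len(probes)
        if score > best_score:
            best, best_score = species, score
    return best

print(call_species({"p01": 5200.0, "p02": 4300.0,
                    "p03": 120.0, "p04": 80.0}))  # Aphis gossypii
```

The blind-trial validation reported above corresponds to checking such calls against independent sequencing of the cytochrome c oxidase subunit I gene.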
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
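The propagation-of-error idea above can be sketched in miniature: fit a standard curve to calibration spots, invert it to predict a concentration from a measured intensity, and carry the intensity measurement error through the inverse function. Real ELISA standard curves are typically four-parameter logistic; a linear curve is used here only to keep the illustration short, and all data values are hypothetical.

```python
# Sketch of first-order propagation of error through an inverted
# (linear) standard curve. Calibration data are hypothetical.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict_conc(y, slope, intercept, sigma_y):
    """Invert the curve and propagate the intensity error sigma_y:
    for conc = (y - b)/m, sigma_conc = sigma_y / |m| to first order."""
    conc = (y - intercept) / slope
    sigma_conc = sigma_y / abs(slope)
    return conc, sigma_conc

slope, intercept = fit_line([0, 1, 2, 4], [0.1, 1.1, 2.1, 4.1])
conc, err = predict_conc(2.6, slope, intercept, sigma_y=0.2)
print(round(conc, 2), round(err, 2))  # 2.5 0.2
```

For a nonlinear curve the same first-order rule applies with the derivative of the inverse curve in place of 1/|m|, which is where the modeling diagnostics discussed above become important.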
The Importance of Normalization on Large and Heterogeneous Microarray Datasets
DNA microarray technology is a powerful functional genomics tool increasingly used for investigating global gene expression in environmental studies. Microarrays can also be used in identifying biological networks, as they give insight on the complex gene-to-gene interactions, ne...
O-Charoen, Sirimon; Srivannavit, Onnop; Gulari, Erdogan
2008-01-01
Microfluidic microarrays have been developed for economical and rapid parallel synthesis of oligonucleotide and peptide libraries. For a synthesis system to be reproducible and uniform, it is crucial to have a uniform reagent delivery throughout the system. Computational fluid dynamics (CFD) is used to model and simulate the microfluidic microarrays to study geometrical effects on flow patterns. By proper design geometry, flow uniformity could be obtained in every microreactor in the microarrays. PMID:17480053
The application of DNA microarrays in gene expression analysis.
van Hal, N L; Vorst, O; van Houwelingen, A M; Kok, E J; Peijnenburg, A; Aharoni, A; van Tunen, A J; Keijer, J
2000-03-31
DNA microarray technology is a new and powerful technology that will substantially increase the speed of molecular biological research. This paper gives a survey of DNA microarray technology and its use in gene expression studies. The technical aspects and their potential improvements are discussed. These comprise array manufacturing and design, array hybridisation, scanning, and data handling. Furthermore, it is discussed how DNA microarrays can be applied in the working fields of: safety, functionality and health of food and gene discovery and pathway engineering in plants.
Sandwich ELISA Microarrays: Generating Reliable and Reproducible Assays for High-Throughput Screens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.
The sandwich ELISA microarray is a powerful screening tool in biomarker discovery and validation due to its ability to simultaneously probe for multiple proteins in a miniaturized assay. The technical challenges of generating and processing the arrays are numerous. However, careful attention to possible pitfalls in the development of your antibody microarray assay can overcome these challenges. In this chapter, we describe in detail the steps that are involved in generating a reliable and reproducible sandwich ELISA microarray assay.
NASA Astrophysics Data System (ADS)
Lebedeva-Ivanova, Nina; Gaina, Carmen; Minakov, Alexander; Kashubin, Sergey
2016-04-01
We derived Moho depth and crustal thickness for the High Arctic region by a 3D forward and inverse gravity modelling method in the spectral domain (Minakov et al. 2012), using a lithosphere thermal gravity anomaly correction (Alvey et al., 2008), a vertical density variation for the sedimentary layer, and lateral crustal density variation. Recently updated grids of bathymetry (Jakobsson et al., 2012), gravity anomaly (Gaina et al., 2011) and dynamic topography (Spasojevic & Gurnis, 2012) were used as input data for the algorithm. The TeMAr sedimentary thickness grid (Petrov et al., 2013) was modified according to the most recently published seismic data, re-gridded, and utilized as input data. Other input parameters for the algorithm were calibrated using crustal-scale seismic profiles. The results are numerically compared with publicly available grids of Moho depth and crustal thickness for the High Arctic region (the CRUST 1 and GEMMA global grids; the deep Arctic Ocean grids by Glebovsky et al., 2013) and with crustal-scale seismic profiles. The global grids provide a coarser resolution of 0.5-1.0 geographic degrees and are not focused on the High Arctic region. Our grids better capture all the main features of the region and show smaller errors relative to the seismic crustal profiles than the CRUST 1 and GEMMA grids. The results of 3D gravity modelling by Glebovsky et al. (2013), obtained with a separated-geostructures approach, also show a good fit with the seismic profiles; however, these grids cover only the deep part of the Arctic Ocean. Alvey A, Gaina C, Kusznir NJ, Torsvik TH (2008). Integrated crustal thickness mapping and plate reconstructions for the high Arctic. Earth Planet Sci Lett 274:310-321. Gaina C, Werner SC, Saltus R, Maus S (2011). Circum-Arctic mapping project: new magnetic and gravity anomaly maps of the Arctic. Geol Soc Lond Mem 35, 39-48. Glebovsky V.Yu., Astafurova E.G., Chernykh A.A., Korneva M.A., Kaminsky V.D., Poselov V.A. (2013). 
Thickness of the Earth's crust in the deep Arctic Ocean: results of a 3D gravity modeling. Russian Geology and Geophysics 54, 247-262. Jakobsson M, Mayer L, Coakley B, Dowdeswell JA, Forbes S, Fridman B, Hodnesdal H, Noormets R, Pedersen R, Rebesco M, Schenke HW, Zarayskaya Y, Accettella D, Armstrong A, Anderson RM, Bienhoff P, Camerlenghi A, Church I, Edwards M, Gardner JV, Hall JK, Hell B, Hestvik O, Kristoffersen Y, Marcussen C, Mohammad R, Mosher D, Nghiem SV, Pedrosa MT, Travaglini PG, Weatherall P (2012). The international bathymetric chart of the Arctic Ocean (IBCAO) version 3.0. Geophys Res Lett 39, L12609. Laske, G., Masters, G., Ma, Z. and Pasyanos, M. (2013). Update on CRUST1.0 - A 1-degree Global Model of Earth's Crust. Geophys. Res. Abstracts, 15, Abstract EGU2013-2658. Minakov A, Faleide JI, Glebovsky VY, Mjelde R (2012). Structure and evolution of the northern Barents-Kara Sea continental margin from integrated analysis of potential fields, bathymetry and sparse seismic data. Geophys J Int 188, 79-102. Petrov O., Smelror M., Shokalsky S., Morozov A., Kashubin S., Grikurov G., Sobolev N., Petrov E. (2013). A new international tectonic map of the Arctic (TeMAr) at 1:5 M scale and geodynamic evolution in the Arctic region. EGU2013-13481. Reguzzoni, M., & Sampietro, D. (2014). GEMMA: An Earth crustal model based on GOCE satellite data. International Journal of Applied Earth Observation and Geoinformation. Spasojevic S. & Gurnis M. (2012). Sea level and vertical motion of continents from dynamic earth models since the late Cretaceous. American Association of Petroleum Geologists Bulletin, 96, pp. 2037-2064.
NASA Astrophysics Data System (ADS)
Alstone, Peter Michael
This work explores the intersections of information technology and off-grid electricity deployment in the developing world, with focus on a key instance: the emergence of pay-as-you-go (PAYG) solar household-scale energy systems. It is grounded in a detailed field study by my research team in Kenya between 2013 and 2014 that included primary data collection across the solar supply chain, from global businesses through national and local distribution to the end-users. We supplement this information with business process and national survey data to develop a detailed view of the markets, technology systems, and individuals who interact within those frameworks. The findings are presented in this dissertation as a series of four chapters with introductory, bridging, and synthesis material between them. The first chapter, Decentralized Energy Systems for Clean Electricity Access, presents a global view of the emerging off-grid power sector. Long-run trends in technology create "a unique moment in history" for closing the gap between global population and access to electricity, which has stubbornly held at 1-2 billion people without power since the initiation of the electric utility business model in the late 1800s. We show the potential for widespread near-term adoption of off-grid solar, which could lead to ten times less inequality in access and also ten times lower household-level climate impacts. Decentralized power systems that replace fuel-based incumbent lighting can advance the causes of climate stabilization, economic and social freedom, and human health. Chapters two and three focus on market and institutional dynamics present circa 2014 in the off-grid solar sector, with a focus on the Kenya market. Chapter 2, "Off-grid Power and Connectivity", presents our findings related to the widespread influence of information technology across the supply chain for solar and in PAYG approaches. 
Using digital financing and embedded payment verification technology, PAYG businesses can help overcome key barriers to adoption of off-grid energy systems. The framework provides financing (or energy service payment structures) for users of off-grid solar, and, we show, is also instrumental for building trust in off-grid solar technology, facilitating supply chain coordination, and creating mechanisms and incentives for after-sales service. Chapter 3, Quality Communication, delves into detail on the information channels (both incumbent and ICT-based) that link retailers with regional and global markets for solar goods. In it we uncover the linked structure of physical distribution networks and the pathway for information about product characteristics (including, critically, the quality of products). The work shows that a few key decisions about product purchasing at the wholesale level, in places like Nairobi (the capital city of Kenya), create the bulk of the choice set for retail buyers, and shows how targeting those wholesale purchasers is critically important for ensuring that good-quality products are available. Chapter 4, the last in this dissertation, is titled Off-grid solar energy services enabled and evaluated through information technology and presents an analytic framework for using remote monitoring data from PAYG systems to assess the joint technological and behavioral drivers of energy access through solar home systems. Using large-scale (n ~ 1,000) data from a large PAYG business in Kenya (M-KOPA), we show that people tend to co-optimize between the quantity and reliability of service, using 55% of the energy technically possible but with only 5% system downtime. Half of the users move their solar panel frequently (in response to concerns about theft, for the most part), and these users experienced 20% lower energy service quantities. 
The findings illustrate the implications of key trends for off-grid power: evolving system component technology architectures, opportunities for improved support to markets, and the use of background data from business and technology systems. (Abstract shortened by ProQuest.).
Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.
Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias
2015-06-25
Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.
van Huet, Ramon A. C.; Pierrache, Laurence H.M.; Meester-Smoor, Magda A.; Klaver, Caroline C.W.; van den Born, L. Ingeborgh; Hoyng, Carel B.; de Wijs, Ilse J.; Collin, Rob W. J.; Hoefsloot, Lies H.
2015-01-01
Purpose To determine the efficacy of multiple versions of a commercially available arrayed primer extension (APEX) microarray chip for autosomal recessive retinitis pigmentosa (arRP). Methods We included 250 probands suspected of arRP who were genetically analyzed with the APEX microarray between January 2008 and November 2013. The mode of inheritance had to be autosomal recessive according to the pedigree (including isolated cases). If the microarray identified a heterozygous mutation, we performed Sanger sequencing of exons and exon–intron boundaries of that specific gene. The efficacy of this microarray chip with the additional Sanger sequencing approach was determined by the percentage of patients that received a molecular diagnosis. We also collected data from genetic tests other than the APEX analysis for arRP to provide a detailed description of the molecular diagnoses in our study cohort. Results The APEX microarray chip for arRP identified the molecular diagnosis in 21 (8.5%) of the patients in our cohort. Additional Sanger sequencing yielded a second mutation in 17 patients (6.8%), thereby establishing the molecular diagnosis. In total, 38 patients (15.2%) received a molecular diagnosis after analysis using the microarray and additional Sanger sequencing approach. Further genetic analyses after a negative result of the arRP microarray (n = 107) resulted in a molecular diagnosis of arRP (n = 23), autosomal dominant RP (n = 5), X-linked RP (n = 2), and choroideremia (n = 1). Conclusions The efficacy of the commercially available APEX microarray chips for arRP appears to be low, most likely caused by the limitations of this technique and the genetic and allelic heterogeneity of RP. Diagnostic yields up to 40% have been reported for next-generation sequencing (NGS) techniques that, as expected, thereby outperform targeted APEX analysis. PMID:25999674
Thematic mapping, land use, geological structure and water resources in central Spain
NASA Technical Reports Server (NTRS)
Delascuevas, N. (Principal Investigator)
1976-01-01
The author has identified the following significant results. The images can be positioned in an absolute reference system (geographical coordinates or polar stereographic coordinates) by means of their marginal indicators. By digital analysis of LANDSAT data and geometric positioning of pixels in UTM projection, sufficient accuracy was achieved for corrected MSS information to be used for updating maps at scale 1:200,000 or smaller. Results show that adjustment to the UTM grid was best obtained with a first-order, or even second-order, geometric correction algorithm. Digital analysis of LANDSAT data from the Madrid area showed that this line of study is promising for automatic classification of data applied to thematic cartography and soil identification.
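The first-order geometric correction mentioned in these results amounts to fitting an affine mapping from image pixel coordinates to UTM coordinates from ground control points. A minimal sketch follows; the control-point coordinates and the 79 m pixel size are illustrative assumptions, not values from the report:

```python
def solve3(M, v):
    """Solve a 3x3 linear system M x = v by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    xs = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        xs.append(det(Mi) / d)
    return xs

def fit_affine(gcps):
    """Fit a first-order (affine) correction from three ground control
    points given as ((row, col), (easting, northing)) pairs.

    Returns coefficient triples (e, n) so that
    E = e[0] + e[1]*col + e[2]*row and N = n[0] + n[1]*col + n[2]*row.
    """
    M = [[1.0, c, r] for (r, c), _ in gcps]
    e = solve3(M, [E for _, (E, _) in gcps])
    n = solve3(M, [N for _, (_, N) in gcps])
    return e, n

# Hypothetical control points: 79 m pixels, axis-aligned scene
gcps = [((0, 0),   (440000.0, 4480000.0)),
        ((0, 100), (447900.0, 4480000.0)),
        ((100, 0), (440000.0, 4472100.0))]
e, n = fit_affine(gcps)
E = e[0] + e[1] * 50 + e[2] * 50   # easting of pixel (row=50, col=50)
N = n[0] + n[1] * 50 + n[2] * 50
```

A second-order correction adds quadratic terms (col², row², col*row) and needs at least six control points, fitted by least squares; the structure of the solve is otherwise the same.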
Best practices for hybridization design in two-colour microarray analysis.
Knapen, Dries; Vergauwen, Lucia; Laukens, Kris; Blust, Ronny
2009-07-01
Two-colour microarrays are a popular platform in gene expression studies. Because two different samples are hybridized on a single microarray, and several microarrays are usually needed in a given experiment, there are many possible ways to combine samples across microarrays. The combination actually employed is commonly referred to as the 'hybridization design'. Different types of hybridization designs have been developed, all aimed at optimizing the experimental setup for the detection of differentially expressed genes while coping with technical noise. Here, we first provide an overview of the different classes of hybridization designs, discussing their advantages and limitations, and then illustrate current trends in the use of different hybridization design types in contemporary research.
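As a rough sketch of two of the design classes surveyed here, the common-reference design and the loop design differ only in how samples are paired onto arrays. The sample names are hypothetical:

```python
def reference_design(samples, ref="Ref"):
    """Common-reference design: each sample is co-hybridized against the
    same reference sample, one array per sample."""
    return [(s, ref) for s in samples]

def loop_design(samples):
    """Loop design: sample i (channel 1) is paired with sample i+1
    (channel 2), and the last sample closes the loop back to the first,
    so every sample appears once in each dye channel."""
    n = len(samples)
    return [(samples[i], samples[(i + 1) % n]) for i in range(n)]

samples = ["A", "B", "C", "D"]
ref_arrays = reference_design(samples)
loop_arrays = loop_design(samples)
```

With the same number of arrays, the loop design measures each biological sample twice (once per dye) while the reference design spends half of its measurements on the reference, which is one of the trade-offs such surveys weigh; in practice dye swaps are usually added to either design to control dye bias.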
Experimental Approaches to Microarray Analysis of Tumor Samples
ERIC Educational Resources Information Center
Furge, Laura Lowe; Winter, Michael B.; Meyers, Jacob I.; Furge, Kyle A.
2008-01-01
Comprehensive measurement of gene expression using high-density nucleic acid arrays (i.e. microarrays) has become an important tool for investigating the molecular differences in clinical and research samples. Consequently, inclusion of discussion in biochemistry, molecular biology, or other appropriate courses of microarray technologies has…
Challenges of microarray applications for microbial detection and gene expression profiling in food
USDA-ARS?s Scientific Manuscript database
Microarray technology represents one of the latest advances in molecular biology. The diverse types of microarrays have been applied to clinical and environmental microbiology, microbial ecology, and in human, veterinary, and plant diagnostics. Since multiple genes can be analyzed simultaneously, ...
CEM-designer: design of custom expression microarrays in the post-ENCODE Era.
Arnold, Christian; Externbrink, Fabian; Hackermüller, Jörg; Reiche, Kristin
2014-11-10
Microarrays are widely used in gene expression studies, and custom expression microarrays are popular for monitoring expression changes of a customer-defined set of genes. However, the complexity of transcriptomes uncovered recently makes custom expression microarray design a non-trivial task. Pervasive transcription and alternative processing of transcripts generate a wealth of interweaved transcripts that requires well-considered probe design strategies and is largely neglected in existing approaches. We developed the web server CEM-Designer, which facilitates microarray-platform-independent design of custom expression microarrays for complex transcriptomes. CEM-Designer covers (i) the collection and generation of a set of unique target sequences from different sources and (ii) the selection of a set of sensitive and specific probes that optimally represents the target sequences. Probe design itself is left to third-party software to ensure that probes meet provider-specific constraints. CEM-Designer is available at http://designpipeline.bioinf.uni-leipzig.de. Copyright © 2014 Elsevier B.V. All rights reserved.
Multiplex cDNA quantification method that facilitates the standardization of gene expression data
Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira
2011-01-01
Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references because the microarray method provides only the relative amounts of gene expression levels. Valid comparisons of microarray data therefore require standardized platforms, internal and/or external controls, and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (Gene expression profiling by DCN-encoding-based analysis). The method was validated by using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h. PMID:21415008
Spot detection and image segmentation in DNA microarray data.
Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune
2005-01-01
Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
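The one-dimensional k-means segmentation that this review shows to be equivalent to the clustering-based techniques can be sketched directly on a toy spot window; the pixel intensities below are invented for illustration:

```python
def kmeans_1d(values, iters=50):
    """Two-class 1-D k-means on pixel intensities (Euclidean distance).

    Returns (background_mean, foreground_mean, labels), where label 1
    marks pixels assigned to the brighter (foreground/spot) cluster.
    """
    c0, c1 = min(values), max(values)          # initial centroids
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each pixel goes to its nearest centroid
        labels = [0 if abs(v - c0) <= abs(v - c1) else 1 for v in values]
        g0 = [v for v, l in zip(values, labels) if l == 0]
        g1 = [v for v, l in zip(values, labels) if l == 1]
        if not g0 or not g1:
            break
        # Update step: recompute centroids as cluster means
        new0, new1 = sum(g0) / len(g0), sum(g1) / len(g1)
        if new0 == c0 and new1 == c1:
            break                              # converged
        c0, c1 = new0, new1
    return c0, c1, labels

# Toy spot window: dim background pixels plus a bright spot core
pixels = [10, 12, 11, 13, 95, 100, 98, 9, 102, 14]
bg, fg, labels = kmeans_1d(pixels)
```

In real spot quantification the label-1 pixels would be averaged to give the foreground intensity and the label-0 pixels the local background estimate to subtract.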
Caryoscope: An Open Source Java application for viewing microarray data in a genomic context
Awad, Ihab AB; Rees, Christian A; Hernandez-Boussard, Tina; Ball, Catherine A; Sherlock, Gavin
2004-01-01
Background Microarray-based comparative genome hybridization experiments generate data that can be mapped onto the genome. These data are interpreted more easily when represented graphically in a genomic context. Results We have developed Caryoscope, which is an open source Java application for visualizing microarray data from array comparative genome hybridization experiments in a genomic context. Caryoscope can read General Feature Format files (GFF files), as well as comma- and tab-delimited files, that define the genomic positions of the microarray reporters for which data are obtained. The microarray data can be browsed using an interactive, zoomable interface, which helps users identify regions of chromosomal deletion or amplification. The graphical representation of the data can be exported in a number of graphic formats, including publication-quality formats such as PostScript. Conclusion Caryoscope is a useful tool that can aid in the visualization, exploration and interpretation of microarray data in a genomic context. PMID:15488149
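A minimal sketch of reading reporter positions from a GFF file, the input Caryoscope uses to place microarray data in a genomic context. The record contents are hypothetical, and this is not Caryoscope's actual parser, only an illustration of the standard 9-column tab-delimited format:

```python
def parse_gff_positions(lines):
    """Parse reporter genomic positions from GFF lines (tab-delimited,
    9 columns: seqname, source, feature, start, end, score, strand,
    frame, attributes). Returns (seqname, start, end) tuples, skipping
    comment and blank lines."""
    positions = []
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue
        fields = line.rstrip("\n").split("\t")
        seqname, start, end = fields[0], int(fields[3]), int(fields[4])
        positions.append((seqname, start, end))
    return positions

# Hypothetical reporter records for two chromosomes
gff = [
    "##gff-version 2",
    "chr1\tarray\treporter\t1500\t2100\t.\t+\t.\tID=rep_0001",
    "chr2\tarray\treporter\t800\t1400\t.\t-\t.\tID=rep_0002",
]
pos = parse_gff_positions(gff)
```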
Grubaugh, Nathan D.; Petz, Lawrence N.; Melanson, Vanessa R.; McMenamy, Scott S.; Turell, Michael J.; Long, Lewis S.; Pisarcik, Sarah E.; Kengluecha, Ampornpan; Jaichapor, Boonsong; O'Guinn, Monica L.; Lee, John S.
2013-01-01
Highly multiplexed assays, such as microarrays, can benefit arbovirus surveillance by allowing researchers to screen for hundreds of targets at once. We evaluated amplification strategies and the practicality of a portable DNA microarray platform to analyze virus-infected mosquitoes. The prototype microarray design used here targeted the non-structural protein 5, ribosomal RNA, and cytochrome b genes for the detection of flaviviruses, mosquitoes, and bloodmeals, respectively. We identified 13 of 14 flaviviruses from virus inoculated mosquitoes and cultured cells. Additionally, we differentiated between four mosquito genera and eight whole blood samples. The microarray platform was field evaluated in Thailand and successfully identified flaviviruses (Culex flavivirus, dengue-3, and Japanese encephalitis viruses), differentiated between mosquito genera (Aedes, Armigeres, Culex, and Mansonia), and detected mammalian bloodmeals (human and dog). We showed that the microarray platform and amplification strategies described here can be used to discern specific information on a wide variety of viruses and their vectors. PMID:23249687
Guo, Qingsheng; Bai, Zhixiong; Liu, Yuqian; Sun, Qingjiang
2016-03-15
In this work, we report the application of streptavidin-coated quantum dot (strAV-QD) in molecular beacon (MB) microarray assays by using the strAV-QD to label the immobilized MB, avoiding target labeling and meanwhile obviating the use of amplification. The MBs are stem-loop structured oligodeoxynucleotides, modified with a thiol and a biotin at two terminals of the stem. With the strAV-QD labeling an "opened" MB rather than a "closed" MB via streptavidin-biotin reaction, a sensitive and specific detection of label-free target DNA sequence is demonstrated by the MB microarray, with a signal-to-background ratio of 8. The immobilized MBs can be perfectly regenerated, allowing the reuse of the microarray. The MB microarray also is able to detect single nucleotide polymorphisms, exhibiting genotype-dependent fluorescence signals. It is demonstrated that the MB microarray can perform as a 4-to-2 encoder, compressing the genotype information into two outputs. Copyright © 2015 Elsevier B.V. All rights reserved.
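The 4-to-2 encoder behaviour described for the MB microarray, compressing one-hot genotype signals into two outputs, corresponds to a simple digital truth table. This is a generic logic sketch, not the authors' chemistry:

```python
def encode_4to2(signals):
    """4-to-2 encoder: exactly one of four genotype signal lines is high;
    return the index of the high line as two output bits (msb, lsb)."""
    assert signals.count(1) == 1, "one-hot input expected"
    idx = signals.index(1)
    return (idx >> 1) & 1, idx & 1
```

In the microarray realization the four "input lines" are genotype-dependent fluorescence signals and the two "outputs" are the measured channels; the table itself is the same.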
2012-01-01
Over the last decade, the introduction of microarray technology has had a profound impact on gene expression research. The publication of studies with dissimilar or altogether contradictory results, obtained using different microarray platforms to analyze identical RNA samples, has raised concerns about the reliability of this technology. The MicroArray Quality Control (MAQC) project was initiated to address these concerns, as well as other performance and data analysis issues. Expression data on four titration pools from two distinct reference RNA samples were generated at multiple test sites using a variety of microarray-based and alternative technology platforms. Here we describe the experimental design and probe mapping efforts behind the MAQC project. We show intraplatform consistency across test sites as well as a high level of interplatform concordance in terms of genes identified as differentially expressed. This study provides a resource that represents an important first step toward establishing a framework for the use of microarrays in clinical and regulatory settings. PMID:16964229
Parthasarathy, N; Saksena, R; Kováč, P; Deshazer, D; Peacock, S J; Wuthiekanun, V; Heine, H S; Friedlander, A M; Cote, C K; Welkos, S L; Adamovicz, J J; Bavari, S; Waag, D M
2008-11-03
We developed a microarray platform by immobilizing bacterial 'signature' carbohydrates onto epoxide-modified glass slides. The carbohydrate microarray platform was probed with sera from non-melioidosis and melioidosis (Burkholderia pseudomallei) individuals. The platform was also probed with sera from rabbits vaccinated with Bacillus anthracis spores and Francisella tularensis bacteria. By employing this microarray platform, we were able to detect and differentiate B. pseudomallei, B. anthracis and F. tularensis antibodies in infected patients and in infected or vaccinated animals. These antibodies were absent in the sera of naïve test subjects. The advantages of the carbohydrate microarray technology over the traditional indirect hemagglutination and microagglutination tests for the serodiagnosis of melioidosis and tularemia are discussed. Furthermore, this is a multiplex carbohydrate microarray for the detection of all three biothreat bacterial infections, melioidosis, anthrax and tularemia, with a single multivalent device. The implication is that this technology could be expanded to include a wide array of infectious and biothreat agents.
Split-plot microarray experiments: issues of design, power and sample size.
Tsai, Pi-Wen; Lee, Mei-Ling Ting
2005-01-01
This article focuses on microarray experiments with two or more factors in which treatment combinations of the factors corresponding to the samples paired together onto arrays are not completely random. A main effect of one (or more) factor(s) is confounded with arrays (the experimental blocks). This is called a split-plot microarray experiment. We utilise an analysis of variance (ANOVA) model to assess differentially expressed genes for between-array and within-array comparisons that are generic under a split-plot microarray experiment. Instead of standard t- or F-test statistics that rely on mean square errors of the ANOVA model, we use a robust method, referred to as 'a pooled percentile estimator', to identify genes that are differentially expressed across different treatment conditions. We illustrate the design and analysis of split-plot microarray experiments based on a case application described by Jin et al. A brief discussion of power and sample size for split-plot microarray experiments is also presented.
Women's experiences receiving abnormal prenatal chromosomal microarray testing results.
Bernhardt, Barbara A; Soucier, Danielle; Hanson, Karen; Savage, Melissa S; Jackson, Laird; Wapner, Ronald J
2013-02-01
Genomic microarrays can detect copy-number variants not detectable by conventional cytogenetics. This technology is diffusing rapidly into prenatal settings even though the clinical implications of many copy-number variants are currently unknown. We conducted a qualitative pilot study to explore the experiences of women receiving abnormal results from prenatal microarray testing performed in a research setting. Participants were a subset of women participating in a multicenter prospective study "Prenatal Cytogenetic Diagnosis by Array-based Copy Number Analysis." Telephone interviews were conducted with 23 women receiving abnormal prenatal microarray results. We found that five key elements dominated the experiences of women who had received abnormal prenatal microarray results: an offer too good to pass up, blindsided by the results, uncertainty and unquantifiable risks, need for support, and toxic knowledge. As prenatal microarray testing is increasingly used, uncertain findings will be common, resulting in greater need for careful pre- and posttest counseling, and more education of and resources for providers so they can adequately support the women who are undergoing testing.
Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.
Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi
2013-01-01
The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable to disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm that combines fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a more robust model with higher accuracy than conventional microarray classification models such as support vector machines, artificial neural networks, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.
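A core ingredient of fuzzy SVM variants is assigning each training point a fuzzy membership so that likely-noisy points carry less weight during training. The sketch below uses one common distance-to-class-centroid membership scheme from the FSVM literature; it is not necessarily the exact formulation of this paper, and the toy expression profiles are invented:

```python
def fuzzy_memberships(points, delta=1e-6):
    """Assign each point of one class a fuzzy membership in (0, 1] based
    on its distance to the class centroid: far-away (likely noisy) points
    get small memberships, so they contribute less to the classifier's
    loss. delta keeps the farthest point's membership strictly positive.
    """
    d = len(points[0])
    centroid = [sum(p[i] for p in points) / len(points) for i in range(d)]
    dists = [sum((p[i] - centroid[i]) ** 2 for i in range(d)) ** 0.5
             for p in points]
    r = max(dists)                      # class radius
    return [1.0 - dist / (r + delta) for dist in dists]

# Toy 2-gene expression profiles for one class; the last point is an outlier
pts = [[1.0, 1.0], [1.2, 0.8], [0.8, 1.2], [5.0, 5.0]]
m = fuzzy_memberships(pts)
```

These memberships would then scale the per-sample penalty terms in the SVM objective, so a mislabeled or noisy array influences the decision boundary far less than a typical one.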
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the extended Kalman filter (EKF) provides sufficient solutions to issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods, and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches, and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model formulations to reduce the number of approximations required. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. In the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf, but the filter suffers at the update step from the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation.
By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have recently gained popularity for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they theoretically converge to an exact, equivalent representation of the desired pdf. When an estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decomposing the system into one linear and one nonlinear state estimation problem. The marginalization theory was originally developed independently by Rao and Blackwell. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supporting statistic. In other words, Rao-Blackwellization allows smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, a supporting statistic can be removed and its variance determined; any other information that depends on the supporting statistic is then found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation of Ref. [2] and the marginalization approach for PF methods found in Ref. [7].
In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
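The marginalized adaptive Gaussian sum propagation derived in the paper is not reproduced here; as a minimal illustration of the particle-filtering concept the abstract describes (represent the pdf by random samples, weight by the measurement likelihood, resample), the following is a bootstrap particle filter for a hypothetical scalar system. The dynamics, noise levels, and measurements are all invented for the example.

```python
import math
import random

random.seed(0)

def bootstrap_pf(measurements, n_particles=2000,
                 f=lambda x: 0.5 * x + 2.0,   # assumed scalar dynamics
                 q=0.5, r=0.5):
    """Minimal bootstrap particle filter for x_k = f(x_{k-1}) + w_k, y_k = x_k + v_k,
    with w_k ~ N(0, q^2) and v_k ~ N(0, r^2)."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in measurements:
        # Propagate each sample through the dynamics (prediction step).
        particles = [f(x) + random.gauss(0.0, q) for x in particles]
        # Weight each sample by the measurement likelihood (update step).
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior mean from the weighted samples.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

ys = [4.0, 4.1, 3.9, 4.05]   # synthetic measurements near x ≈ 4
est = bootstrap_pf(ys)
print(est[-1])                # close to 4, the fixed point of x = 0.5x + 2
```

A Rao-Blackwellized variant would carry the linear substate of each particle analytically (e.g. with a Kalman filter) and sample only the nonlinear substate, which is the dimensionality reduction the abstract refers to.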
Abou Assi, Hala; Gómez-Pinto, Irene; González, Carlos
2017-01-01
Abstract In situ-fabricated nucleic acid microarrays are versatile and very high-throughput platforms for aptamer optimization and discovery, but the chemical space that can be probed against a given target has largely been confined to DNA, while RNA and non-natural nucleic acid microarrays remain essentially uncharted territory. 2′-Fluoroarabinonucleic acid (2′F-ANA) is a prime candidate for such use in microarrays. Indeed, 2′F-ANA chemistry is readily amenable to photolithographic microarray synthesis, and its potential in high-affinity aptamers has recently been discovered. We thus synthesized the first microarrays containing 2′F-ANA and 2′F-ANA/DNA chimeric sequences to fully map the binding affinity landscape of the TBA1 thrombin-binding G-quadruplex aptamer, covering all 32,768 possible DNA-to-2′F-ANA mutations. The resulting microarray was screened against thrombin to identify a series of promising 2′F-ANA-modified aptamer candidates with Kds significantly lower than that of the unmodified control, which were found to adopt highly stable, antiparallel-folded G-quadruplex structures. The solution structure of the TBA1 aptamer modified with 2′F-ANA at position T3 shows that fluorine substitution preorganizes the dinucleotide loop into the proper conformation for interaction with thrombin. Overall, our work strengthens the potential of 2′F-ANA in aptamer research and further expands non-genomic applications of nucleic acid microarrays. PMID:28100695
Geue, Lutz; Stieber, Bettina; Monecke, Stefan; Engelmann, Ines; Gunzer, Florian; Slickers, Peter; Braun, Sascha D; Ehricht, Ralf
2014-08-01
In this study, we developed a new rapid, economic, and automated microarray-based genotyping test for the standardized subtyping of Shiga toxins 1 and 2 of Escherichia coli. The microarrays from Alere Technologies can be used in two different formats, the ArrayTube and the ArrayStrip (which enables high-throughput testing in a 96-well format). One microarray chip harbors all the gene sequences necessary to distinguish between all Stx subtypes, facilitating the identification of single and multiple subtypes within a single isolate in one experiment. Specific software was developed to automatically analyze all data obtained from the microarray. The assay was validated with 21 Shiga toxin-producing E. coli (STEC) reference strains that were previously tested by the complete set of conventional subtyping PCRs. The microarray results showed 100% concordance with the PCR results. Essentially identical results were detected when the standard DNA extraction method was replaced by a time-saving heat lysis protocol. For further validation of the microarray, we identified the Stx subtypes or combinations of the subtypes in 446 STEC field isolates of human and animal origin. In summary, this oligonucleotide array represents an excellent diagnostic tool that provides some advantages over standard PCR-based subtyping. The number of the spotted probes on the microarrays can be increased by additional probes, such as for novel alleles, species markers, or resistance genes, should the need arise. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Over the last decade, the introduction of microarray technology has had a profound impact on gene expression research. The publication of studies with dissimilar or altogether contradictory results, obtained using different microarray platforms to analyze identical RNA samples, ...
NASA Astrophysics Data System (ADS)
Seo, Gwang-Ho; Cho, Yang-Ki; Choi, Byoung-Ju
2014-02-01
High-resolution reanalysis of heat transport in the northwestern Pacific marginal seas was conducted for the period January 1980-December 2009 using an ensemble Kalman filter. An ocean circulation model with 0.1° × 0.1° horizontal resolution and 20 vertical levels was used. Daily atmospheric forcing data from the European Centre for Medium-Range Weather Forecasts were used in the ocean model. The assimilated data for the reanalysis were based on available observations of hydrographic profiles, including field surveys, Argo floats, and satellite-observed sea-surface temperature data. This study focused on mean and temporal variations in oceanic heat transport within the major straits among the marginal seas over 30 years. The mean heat transport in the Korea/Tsushima Strait and onshore transport across the shelf break in the East China Sea (ECS), Taiwan Strait, Tsugaru Strait, and Soya Strait were 182, 123, 82, 100, and 34 × 1012 W, respectively. The long-term trends in heat transport through the Korea/Tsushima Strait and Tsugaru Strait and onshore transport across the shelf break of the ECS were increasing, whereas the trend in heat transport through the Taiwan Strait was decreasing. There was little long-term change in heat transport in the Soya Strait. These long-term changes in heat transport through the Korea/Tsushima Strait, across the shelf of the ECS, and through the Taiwan Strait may be related to increased northeasterly wind stress in the ECS, which drives Ekman transport onto the shelf across the shelf break.
USDA-ARS?s Scientific Manuscript database
The development of a fluorescent multiplexed microarray platform able to detect and quantify a wide variety of pollutants in seawater is reported. The microarray platform has been manufactured by spotting 6 different bioconjugate competitors and it uses a cocktail of 6 monoclonal and polyclonal anti...
Microarray technology is a powerful tool to investigate the gene expression profiles for thousands of genes simultaneously. In recent years, microarrays have been used to characterize environmental pollutants and identify molecular mode(s) of action of chemicals including endocri...
USDA-ARS?s Scientific Manuscript database
The amount of microarray gene expression data in public repositories has been increasing exponentially for the last couple of decades. High-throughput microarray data integration and analysis has become a critical step in exploring the large amount of expression data for biological discovery. Howeve...
Microarrays Made Simple: "DNA Chips" Paper Activity
ERIC Educational Resources Information Center
Barnard, Betsy
2006-01-01
DNA microarray technology is revolutionizing biological science. DNA microarrays (also called DNA chips) allow simultaneous screening of many genes for changes in expression between different cells. Now researchers can obtain information about genes in days or weeks that used to take months or years. The paper activity described in this article…
ERIC Educational Resources Information Center
Plomin, Robert; Schalkwyk, Leonard C.
2007-01-01
Microarrays are revolutionizing genetics by making it possible to genotype hundreds of thousands of DNA markers and to assess the expression (RNA transcripts) of all of the genes in the genome. Microarrays are slides the size of a postage stamp that contain millions of DNA sequences to which single-stranded DNA or RNA can hybridize. This…
Karampetsou, Evangelia; Morrogh, Deborah; Chitty, Lyn
2014-01-01
The advantage of microarray (array) over conventional karyotyping for the diagnosis of fetal pathogenic chromosomal anomalies has prompted the use of microarrays in prenatal diagnostics. In this review we compare the performance of different array platforms (BAC, oligonucleotide CGH, SNP) and designs (targeted; whole genome; whole genome and targeted; custom) and discuss their advantages and disadvantages in relation to prenatal testing. We also discuss the factors to consider when implementing a microarray testing service for the diagnosis of fetal chromosomal aberrations. PMID:26237396
A Perspective on DNA Microarrays in Pathology Research and Practice
Pollack, Jonathan R.
2007-01-01
DNA microarray technology matured in the mid-1990s, and the past decade has witnessed a tremendous growth in its application. DNA microarrays have provided powerful tools for pathology researchers seeking to describe, classify, and understand human disease. There has also been great expectation that the technology would advance the practice of pathology. This review highlights some of the key contributions of DNA microarrays to experimental pathology, focusing in the area of cancer research. Also discussed are some of the current challenges in translating utility to clinical practice. PMID:17600117
Bingle, Lynne; Fonseca, Felipe P; Farthing, Paula M
2017-01-01
Tissue microarrays were first constructed in the 1980s but were used by only a limited number of researchers for a considerable period of time. In the last 10 years there has been a dramatic increase in the number of publications describing the successful use of tissue microarrays in studies aimed at discovering and validating biomarkers. This, along with the increased availability of both manual and automated microarray builders on the market, has encouraged even greater use of this novel and powerful tool. This chapter describes the basic techniques required to build a tissue microarray using a manual method in order that the theory behind the practical steps can be fully explained. Guidance is given to ensure potential disadvantages of the technique are fully considered.
Identification of differentially expressed genes and false discovery rate in microarray studies.
Gusnanto, Arief; Calza, Stefano; Pawitan, Yudi
2007-04-01
To highlight the development in microarray data analysis for the identification of differentially expressed genes, particularly via control of false discovery rate. The emergence of high-throughput technology such as microarrays raises two fundamental statistical issues: multiplicity and sensitivity. We focus on the biological problem of identifying differentially expressed genes. First, multiplicity arises due to testing tens of thousands of hypotheses, rendering the standard P value meaningless. Second, known optimal single-test procedures such as the t-test perform poorly in the context of highly multiple tests. The standard approach of dealing with multiplicity is too conservative in the microarray context. The false discovery rate concept is fast becoming the key statistical assessment tool replacing the P value. We review the false discovery rate approach and argue that it is more sensible for microarray data. We also discuss some methods to take into account additional information from the microarrays to improve the false discovery rate. There is growing consensus on how to analyse microarray data using the false discovery rate framework in place of the classical P value. Further research is needed on the preprocessing of the raw data, such as the normalization step and filtering, and on finding the most sensitive test procedure.
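The false discovery rate framework this review discusses is commonly implemented via the Benjamini-Hochberg step-up procedure (one of several FDR-controlling methods; the review itself does not prescribe an algorithm). A minimal sketch with hypothetical p-values:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of rejected
    hypotheses, controlling the false discovery rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        # Find the largest rank k with p_(k) <= (k / m) * alpha.
        if pvalues[i] <= rank * alpha / m:
            k_max = rank
    return sorted(order[:k_max])

# Toy "gene" p-values. Note the step-up behavior: p = 0.020 exceeds its own
# rank threshold (2/6 * 0.05 ≈ 0.0167) but is still rejected because a
# later rank satisfies its threshold.
pvals = [0.001, 0.020, 0.021, 0.022, 0.30, 0.70]
print(benjamini_hochberg(pvals, alpha=0.05))  # → [0, 1, 2, 3]
```

Contrast with a Bonferroni cutoff of 0.05/6 ≈ 0.0083, which would reject only the first hypothesis, illustrating why the abstract calls the standard multiplicity correction too conservative for microarrays.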
Steger, Doris; Berry, David; Haider, Susanne; Horn, Matthias; Wagner, Michael; Stocker, Roman; Loy, Alexander
2011-01-01
The hybridization of nucleic acid targets with surface-immobilized probes is a widely used assay for the parallel detection of multiple targets in medical and biological research. Despite its widespread application, DNA microarray technology still suffers from several biases and lack of reproducibility, stemming in part from an incomplete understanding of the processes governing surface hybridization. In particular, non-random spatial variations within individual microarray hybridizations are often observed, but the mechanisms underpinning this positional bias remain incompletely explained. This study identifies and rationalizes a systematic spatial bias in the intensity of surface hybridization, characterized by markedly increased signal intensity of spots located at the boundaries of the spotted areas of the microarray slide. Combining observations from a simplified single-probe block array format with predictions from a mathematical model, the mechanism responsible for this bias is found to be a position-dependent variation in lateral diffusion of target molecules. Numerical simulations reveal a strong influence of microarray well geometry on the spatial bias. Reciprocal adjustment of the size of the microarray hybridization chamber to the area of surface-bound probes is a simple and effective measure to minimize or eliminate the diffusion-based bias, resulting in increased uniformity and accuracy of quantitative DNA microarray hybridization.
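The paper's mathematical model is not reproduced here; the toy one-dimensional sketch below only illustrates the qualitative mechanism the abstract describes: probe spots deplete target locally, and spots at the boundary of the probe block also drain target diffusing in from the bare flanking regions, so they accumulate more signal. All rates and geometry are arbitrary assumptions.

```python
# Toy 1D diffusion-capture sketch (not the authors' model): target diffuses
# along the hybridization chamber; cells inside the probe block capture a
# fraction of the local free target each step.
N = 100                      # chamber discretized into N cells
probe = range(40, 60)        # probe block occupies the middle cells
D, capture = 0.2, 0.05       # diffusion and capture rates (arbitrary units)

conc = [1.0] * N             # uniform initial target concentration
bound = [0.0] * N            # target captured per cell

for _ in range(2000):
    # Explicit finite-difference diffusion step with no-flux walls.
    new = conc[:]
    for i in range(N):
        left = conc[i - 1] if i > 0 else conc[i]
        right = conc[i + 1] if i < N - 1 else conc[i]
        new[i] = conc[i] + D * (left + right - 2 * conc[i])
    conc = new
    # Probe cells capture part of the local free target.
    for i in probe:
        captured = capture * conc[i]
        bound[i] += captured
        conc[i] -= captured

edge, center = bound[40], bound[49]
print(edge > center)   # → True: a boundary spot binds more target
```

Shrinking the bare flanking regions (i.e. matching chamber size to the probed area, as the paper recommends) removes the extra target reservoir and flattens the edge-versus-center difference.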
2015-01-01
Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
The detection and differentiation of canine respiratory pathogens using oligonucleotide microarrays.
Wang, Lih-Chiann; Kuo, Ya-Ting; Chueh, Ling-Ling; Huang, Dean; Lin, Jiunn-Horng
2017-05-01
Canine respiratory diseases are commonly seen in dogs, often as co-infections with multiple respiratory pathogens, including viruses and bacteria. Virus infections have been reported even in vaccinated dogs. The clinical signs caused by different respiratory etiological agents are similar, which makes differential diagnosis imperative. An oligonucleotide microarray system was developed in this study. The wild-type and vaccine strains of canine distemper virus (CDV), influenza virus, canine herpesvirus (CHV), Bordetella bronchiseptica, and Mycoplasma cynos were detected and differentiated simultaneously on a microarray chip. The detection limits are 10, 10, 100, 50, and 50 copies for CDV, influenza virus, CHV, B. bronchiseptica, and M. cynos, respectively. Clinical testing of nasal swab samples showed that the microarray had remarkably better efficacy than the multiplex PCR-agarose gel method: among the 56 samples, the positive detection rates of the microarray and agarose gel methods were 59.0% (n=33) and 41.1% (n=23), respectively. The CDV vaccine strain and pathogen co-infections were further demonstrated by the microarray but not by the multiplex PCR-agarose gel. The oligonucleotide microarray provides a highly efficient diagnostic alternative that could be applied to clinical usage, greatly assisting in disease therapy and control. Copyright © 2017 Elsevier B.V. All rights reserved.
Gene selection for microarray data classification via subspace learning and manifold regularization.
Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui
2017-12-19
With the rapid development of DNA microarray technology, a large amount of genomic data has been generated. Classification of these microarray data is a challenging task, since gene expression data often contain thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold-regularized subspace learning problem. In detail, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator for the different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.
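The full subspace-learning optimization is not given in the abstract; the sketch below shows only one ingredient of the manifold regularization it describes, building the unnormalized graph Laplacian L = D − W from a k-nearest-neighbor graph over samples. Data and the 0/1 affinity choice are illustrative assumptions.

```python
def knn_laplacian(samples, k=2):
    """Unnormalized graph Laplacian L = D - W from a symmetrized k-NN graph,
    the ingredient used to preserve local manifold structure.
    samples: list of feature vectors (e.g. expression profiles)."""
    n = len(samples)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # k nearest neighbors of sample i (excluding itself).
        nbrs = sorted((j for j in range(n) if j != i),
                      key=lambda j: dist2(samples[i], samples[j]))[:k]
        for j in nbrs:
            W[i][j] = W[j][i] = 1.0   # symmetrized 0/1 affinity
    # Diagonal holds the degree; off-diagonals hold -W[i][j].
    L = [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
         for i in range(n)]
    return L

# Every row of L sums to zero, a defining property of a graph Laplacian.
L = knn_laplacian([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]], k=1)
print(all(abs(sum(row)) < 1e-9 for row in L))  # → True
```

In the method described, a trace term of the form tr(PᵀXLXᵀP) built from this L penalizes projections that separate neighboring samples, while the row norms of the learned projection matrix P rank the genes.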
Kumar, Mukesh; Rath, Nitish Kumar; Rath, Santanu Kumar
2016-04-01
Microarray-based gene expression profiling has emerged as an efficient technique for the classification, prognosis, diagnosis, and treatment of cancer. Frequent changes in the behavior of this disease generate an enormous volume of data. Microarray data satisfy both the veracity and velocity properties of big data, as they keep changing with time. Therefore, the analysis of microarray datasets in a small amount of time is essential. They often contain a large number of expression values, but only a fraction of these comprises genes that are significantly expressed. The precise identification of the genes of interest responsible for causing cancer is imperative in microarray data analysis. Most existing schemes employ a two-phase process, such as feature selection/extraction followed by classification. In this paper, various statistical methods (tests) based on MapReduce are proposed for selecting relevant features. After feature selection, a MapReduce-based K-nearest neighbor (mrKNN) classifier is also employed to classify microarray data. These algorithms are successfully implemented in a Hadoop framework. A comparative analysis of these MapReduce-based models is performed on microarray datasets of various dimensions. From the obtained results, it is observed that these models consume much less execution time than conventional models in processing big data. Copyright © 2016 Elsevier Inc. All rights reserved.
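The paper's Hadoop implementation is not shown in the abstract; the following pure-Python sketch only mimics the MapReduce-style KNN idea: each map task returns its partition's local k nearest neighbors, a reduce step merges candidate lists keeping the global k best, and a majority vote classifies. Data, labels, and the partition layout are hypothetical.

```python
from functools import reduce

def map_knn(partition, query, k):
    """Map step: a data partition returns its local k nearest neighbors."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    scored = sorted((dist2(x, query), label) for x, label in partition)
    return scored[:k]

def reduce_knn(a, b, k):
    """Reduce step: merge two candidate lists, keep the global k nearest."""
    return sorted(a + b)[:k]

def mr_knn_classify(partitions, query, k=3):
    """MapReduce-style KNN: map over partitions, reduce, majority vote."""
    candidates = reduce(lambda a, b: reduce_knn(a, b, k),
                        (map_knn(p, query, k) for p in partitions))
    labels = [label for _, label in candidates]
    return max(set(labels), key=labels.count)

# Toy microarray-like samples split across two "nodes" (assumed layout),
# with hypothetical leukemia-subtype labels.
p1 = [([1.0, 1.0], "ALL"), ([1.2, 0.9], "ALL")]
p2 = [([5.0, 5.0], "AML"), ([1.1, 1.1], "ALL"), ([5.2, 4.8], "AML")]
print(mr_knn_classify([p1, p2], query=[1.0, 1.05], k=3))  # → 'ALL'
```

The key property, as in the Hadoop version, is that each partition can be processed independently and only k candidates per partition cross the network, so the reduce cost is independent of dataset size.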
Pérez-Rubio, Gloria; Pérez-Rodríguez, Martha E; Fernández-López, Juan Carlos; Ramírez-Venegas, Alejandra; García-Colunga, Jesús; Ávila-Moreno, Federico; Camarena, Angel; Sansores, Raúl H; Falfán-Valencia, Ramcés
2016-07-01
To identify genetic variants associated with greater tobacco consumption in a Mexican population. Daily smokers were classified as light smokers (LS; n = 742), heavy smokers (HS; n = 601) and nonsmokers (NS; n = 606). In the first stage, a genotyping microarray that included 347 SNPs in CHRNA2-CHRNA7/CHRNA10, CHRNB2-CHRNB4 and NRXN1 genes and 37 ancestry-informative markers was used to analyze 707 samples (187 HS, 328 LS and 192 NS). In the second stage, 14 SNPs from stage 1 were validated in the remaining samples (HS, LS and NS; n = 414 in each group) using real-time PCR. To predict the role of the associated SNPs, an in silico analysis was performed. Two SNPs in NRXN1 and two in CHRNA5 were associated with cigarette consumption, while rs10865246/C (NRXN1) was associated with high nicotine addiction. The in silico analysis revealed that rs1882296/T had a high level of homology with Hsa-miR-6740-5p, which encodes a putative miRNA that targets glutamate receptor subunits (GRIA2, GRID2) and GABA receptor subunits (GABRG1, GABRA4, GABRB2), while rs1882296/C had a high level of homology with Hsa-miR-6866-5p, which encodes a different miRNA that targets GRID2 and GABRB2. In a Mexican Mestizo population, greater consumption of cigarettes was influenced by polymorphisms in the NRXN1 and CHRNA5 genes. We proposed new hypotheses regarding the putative roles of miRNAs that influence the GABAergic and glutamatergic pathways in smoking addiction.
Characterization and simulation of cDNA microarray spots using a novel mathematical model
Kim, Hye Young; Lee, Seo Eun; Kim, Min Jung; Han, Jin Il; Kim, Bo Kyung; Lee, Yong Sung; Lee, Young Seek; Kim, Jin Hyuk
2007-01-01
Background The quality of cDNA microarray data is crucial for expanding its application to other research areas, such as the study of gene regulatory networks. Despite the fact that a number of algorithms have been suggested to increase the accuracy of microarray gene expression data, it is necessary to obtain reliable microarray images by improving wet-lab experiments. As the first step of a cDNA microarray experiment, spotting cDNA probes is critical to determining the quality of spot images. Results We developed a governing equation of cDNA deposition during evaporation of a drop in the microarray spotting process. The governing equation included four parameters: the surface site density on the support, the extrapolated equilibrium constant for the binding of cDNA molecules with surface sites on glass slides, the macromolecular interaction factor, and the volume constant of a drop of cDNA solution. We simulated cDNA deposition from the single model equation by varying the value of the parameters. The morphology of the resulting cDNA deposit can be classified into three types: a doughnut shape, a peak shape, and a volcano shape. The spot morphology can be changed into a flat shape by varying the experimental conditions while considering the parameters of the governing equation of cDNA deposition. The four parameters were estimated by fitting the governing equation to the real microarray images. With the results of the simulation and the parameter estimation, the phenomenon of the formation of cDNA deposits in each type was investigated. Conclusion This study explains how various spot shapes can exist and suggests which parameters are to be adjusted for obtaining a good spot. This system is able to explore the cDNA microarray spotting process in a predictable, manageable and descriptive manner. 
We hope it can provide a way to predict the incidents that can occur during a real cDNA microarray experiment, and produce useful data for several research applications involving cDNA microarrays. PMID:18096047
Mallén, Maria; Díaz-González, María; Bonilla, Diana; Salvador, Juan P; Marco, María P; Baldi, Antoni; Fernández-Sánchez, César
2014-06-17
Low-density protein microarrays are emerging tools in diagnostics whose deployment could be primarily limited by the cost of fluorescence detection schemes. This paper describes an electrical readout system for microarrays comprising an array of gold interdigitated microelectrodes and an array of polydimethylsiloxane microwells, which enabled multiplexed detection of up to thirty-six biological events on the same substrate. Like its fluorescence-readout counterparts, the microarray can be developed on disposable glass slide substrates; unlike them, however, the presented approach is compact and requires simple and inexpensive instrumentation. The system makes use of urease-labeled affinity reagents for developing the microarrays and is based on the detection of conductivity changes taking place when ionic species are generated in solution by the catalytic hydrolysis of urea. The use of a polydimethylsiloxane microwell array facilitates the positioning of the measurement solution on every spot of the microarray. It also ensures liquid tightness and isolation of each well from the surrounding ones during the microarray readout process, thereby avoiding the evaporation and chemical cross-talk effects that were shown to affect the sensitivity and reliability of the system. The performance of the system is demonstrated by carrying out the readout of a microarray for the anabolic androgenic steroid hormone boldenone. Analytical results are comparable to those obtained with fluorescence scanner detection approaches. The estimated detection limit is 4.0 ng mL(-1), below the threshold value set by the World Anti-Doping Agency and the European Community. Copyright © 2014 Elsevier B.V. All rights reserved.
Sevenler, Derin; Daaboul, George G; Ekiz Kanik, Fulya; Ünlü, Neşe Lortlar; Ünlü, M Selim
2018-05-21
DNA and protein microarrays are a high-throughput technology that allow the simultaneous quantification of tens of thousands of different biomolecular species. The mediocre sensitivity and limited dynamic range of traditional fluorescence microarrays compared to other detection techniques have been the technology's Achilles' heel and prevented their adoption for many biomedical and clinical diagnostic applications. Previous work to enhance the sensitivity of microarray readout to the single-molecule ("digital") regime have either required signal amplifying chemistry or sacrificed throughput, nixing the platform's primary advantages. Here, we report the development of a digital microarray which extends both the sensitivity and dynamic range of microarrays by about 3 orders of magnitude. This technique uses functionalized gold nanorods as single-molecule labels and an interferometric scanner which can rapidly enumerate individual nanorods by imaging them with a 10× objective lens. This approach does not require any chemical signal enhancement such as silver deposition and scans arrays with a throughput similar to commercial fluorescence scanners. By combining single-nanoparticle enumeration and ensemble measurements of spots when the particles are very dense, this system achieves a dynamic range of about 6 orders of magnitude directly from a single scan. As a proof-of-concept digital protein microarray assay, we demonstrated detection of hepatitis B virus surface antigen in buffer with a limit of detection of 3.2 pg/mL. More broadly, the technique's simplicity and high-throughput nature make digital microarrays a flexible platform technology with a wide range of potential applications in biomedical research and clinical diagnostics.
Burgarella, Sarah; Cattaneo, Dario; Pinciroli, Francesco; Masseroli, Marco
2005-12-01
Improvements in bio-nano-technologies and biomolecular techniques have led to increasing production of high-throughput experimental data. Spotted cDNA microarray is one of the most widespread technologies, used in single research laboratories and in biotechnology service facilities. Although they are routinely performed, spotted microarray experiments are complex procedures entailing several experimental steps and actors with different technical skills and roles. During an experiment, the actors involved, who may be located at a distance from one another, need to access and share specific experiment information according to their roles. Furthermore, complete information describing all experimental steps must be collected in an orderly fashion to allow subsequent correct interpretation of experimental results. We developed MicroGen, a web system for managing information and workflow in the production pipeline of spotted microarray experiments. It consists of a core multi-database system able to store all data completely characterizing different spotted microarray experiments according to the Minimum Information About Microarray Experiments (MIAME) standard, and of an intuitive and user-friendly web interface able to support the collaborative work required among the multidisciplinary actors and roles involved in spotted microarray experiment production. MicroGen supports six types of user roles: the researcher who designs and requests the experiment, the spotting operator, the hybridisation operator, the image processing operator, the system administrator, and the generic public user who can access the unrestricted part of the system to get information about MicroGen services. MicroGen represents a MIAME-compliant information system that enables managing workflow and supporting collaborative work in spotted microarray experiment production.
DNA Microarray Detection of 18 Important Human Blood Protozoan Species
Chen, Jun-Hu; Feng, Xin-Yu; Chen, Shao-Hong; Cai, Yu-Chun; Lu, Yan; Zhou, Xiao-Nong; Chen, Jia-Xu; Hu, Wei
2016-01-01
Background: Accurate detection of blood protozoa in clinical samples is important for the diagnosis, treatment, and control of related diseases. A rapid, simple, and convenient method for protozoan detection is urgently needed. In this preliminary study, a novel DNA microarray system was assessed for the detection of Plasmodium, Leishmania, Trypanosoma, Toxoplasma gondii, and Babesia in humans, animals, and vectors, in comparison with microscopy and PCR data. Methodology/Principal Findings: The microarray assay simultaneously identified 18 species of common blood protozoa based on differences in their respective target genes. A total of 20 specific primer pairs and 107 microarray probes were selected from conserved regions designed to identify 18 species in 5 blood protozoan genera. The positive detection rate of the microarray assay was 91.78% (402/438). Sensitivity and specificity for blood protozoan detection ranged from 82.4% (95% CI: 65.9%-98.8%) to 100.0% and from 95.1% (95% CI: 93.2%-97.0%) to 100.0%, respectively. Positive predictive value (PPV) and negative predictive value (NPV) ranged from 20.0% (95% CI: 2.5%-37.5%) to 100.0% and from 96.8% (95% CI: 95.0%-98.6%) to 100.0%, respectively. The Youden index varied from 0.82 to 0.98. The detection limit of the DNA microarrays ranged from 200 to 500 copies/reaction, similar to PCR findings. The concordance rate between microarray data and DNA sequencing results was 100%. Conclusions/Significance: Overall, the newly developed microarray platform provides a convenient, highly accurate, and reliable clinical assay for the determination of blood protozoan species. PMID:27911895
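The diagnostic metrics quoted above (sensitivity, specificity, PPV, NPV, Youden index) all derive from a 2x2 confusion matrix. A minimal sketch of how they are computed, using made-up counts rather than the study's data:

```python
# Hypothetical confusion-matrix counts for one probe set (illustrative
# only; not taken from the study's 438 samples).
tp, fn, fp, tn = 28, 2, 4, 404

sensitivity = tp / (tp + fn)            # true-positive rate
specificity = tn / (tn + fp)            # true-negative rate
ppv = tp / (tp + fp)                    # positive predictive value
npv = tn / (tn + fn)                    # negative predictive value
youden = sensitivity + specificity - 1  # Youden's J index

print(f"Sens={sensitivity:.3f} Spec={specificity:.3f} "
      f"PPV={ppv:.3f} NPV={npv:.3f} J={youden:.3f}")
```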
Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.
2016-01-01
Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to estimating genome-wide true and false negatives during the assessment of these assays, or to how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of the metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, capable of performing a high-throughput, low-cost analysis of CMAs that takes genome-wide true and false negatives into consideration. CNV-ROC uses a higher-resolution microarray to confirm calls from a lower-resolution microarray and provides a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis, and receiver operating characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as the log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs: the measurement and use of genome-wide true and false negative data for calculating performance metrics and comparing CNV profiles between different microarray experiments. PMID:25595567
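The per-probe ROC calibration CNV-ROC describes, choosing a log2 ratio threshold that trades off true and false positive rates, can be sketched as follows. The data here are synthetic Gaussian log2 ratios, not CNV-ROC's own inputs, and maximizing Youden's J is one common way to pick an operating point:

```python
import random

random.seed(0)
# Synthetic per-probe log2 ratios: probes inside true CNVs (shifted
# mean) vs. copy-neutral probes. Purely illustrative distributions.
cnv_probes = [random.gauss(0.6, 0.3) for _ in range(200)]      # truth: CNV
neutral_probes = [random.gauss(0.0, 0.3) for _ in range(2000)]  # truth: normal

best_j, best_thr = -1.0, None
for i in range(101):
    thr = i / 100.0                                # candidate log2 threshold
    tpr = sum(r >= thr for r in cnv_probes) / len(cnv_probes)
    fpr = sum(r >= thr for r in neutral_probes) / len(neutral_probes)
    j = tpr - fpr                                  # Youden's J at this threshold
    if j > best_j:
        best_j, best_thr = j, thr

print(f"calibrated log2-ratio threshold ~ {best_thr:.2f} (J = {best_j:.2f})")
```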
Fretting wear behaviors of a dual-cooled nuclear fuel rod under a simulated rod vibration
NASA Astrophysics Data System (ADS)
Lee, Young-Ho; Kim, Hyung-Kyu; Kang, Heung-Seok; Yoon, Kyung-Ho; Kim, Jae-Yong; Lee, Kang-Hee
2012-06-01
Recently, a dual-cooled fuel (i.e., annular fuel) compatible with current operating PWR plants has been proposed in order to realize both a considerable amount of power uprating and an increase in safety margins. Because the design concept must remain compatible with current operating PWR plants, however, the gap between fuel rods is narrower than in current solid nuclear fuel arrays, requiring modification of the spacer grid shapes and positions. In this study, fretting wear tests have been performed to evaluate the wear resistance of a dual-cooled fuel using proposed spacer grid springs and dimples of cantilever and hemispherical shape, respectively. As a result, the wear volume of the spring specimen gradually increases as the contact condition changes from a certain gap, through just-in-contact, to positive force. In the dimple specimen, however, the just-in-contact condition produces a large wear volume. In addition, a circular rod motion at the upper region of the contact surface gradually increases, with its diameter growing as the wear depth increases. Based on the test results, the fretting wear resistance of the proposed spring and dimple is analyzed by comparing the wear measurement results and rod motion in detail.
Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; ...
2017-10-30
We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, Sym-H, AL, and cross polar cap potential (CPCP). We find that the model does an excellent job of predicting the Sym-H index, with an RMSE of 17-18 nT. Kp is predicted well during storm-time conditions but over-predicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT; however, the model reaches the largest negative AL values significantly less often than the observations. The model tended to over-predict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence of strongly negative AL values. The use of the inner magnetosphere component, however, affected results significantly, with all quantities except CPCP improving notably when the inner magnetosphere model was on.
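RMSE, the skill metric used throughout this evaluation, is straightforward to compute from paired observed and modeled series. A toy example with illustrative values (not SWMF output):

```python
import math

# Toy paired series: observed vs. modeled Sym-H in nT. The numbers
# below are invented for illustration, not simulation results.
observed = [-12.0, -35.0, -80.0, -150.0, -60.0, -20.0]
modeled = [-10.0, -40.0, -95.0, -140.0, -55.0, -25.0]

# Root-mean-square error over the paired samples.
rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(modeled, observed))
                 / len(observed))
print(f"RMSE = {rmse:.1f} nT")
```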
Current and Future Environmental Balance of Small-Scale Run-of-River Hydropower.
Gallagher, John; Styles, David; McNabola, Aonghus; Williams, A Prysor
2015-05-19
Globally, the hydropower (HP) sector has significant potential to increase its capacity by 2050. This study quantifies the energy and resource demands of small-scale HP projects and presents methods to reduce the associated environmental impacts based on potential growth in the sector. The environmental burdens of three (50-650 kW) run-of-river HP projects were calculated using life cycle assessment (LCA). The global warming potential (GWP) for the projects to generate electricity ranged from 5.5 to 8.9 g CO2 eq/kWh, compared with 403 g CO2 eq/kWh for UK marginal grid electricity. A sensitivity analysis accounted for alternative manufacturing processes, transportation, ecodesign considerations, and extended project lifespan. These findings were extrapolated to technically viable HP sites in Europe, with the potential to generate 7.35 TWh and offset over 2.96 Mt of CO2 from grid electricity per annum. Incorporation of ecodesign could provide resource savings for these HP projects: avoiding 800 000 tonnes of concrete, 10 000 tonnes of steel, and 65 million vehicle miles. Small additional material and energy contributions can double an HP system's lifespan, providing 39-47% reductions across all environmental impact categories. In a world of finite resources, this paper highlights the importance of HP as a resource-efficient, renewable energy system.
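The reported offset can be sanity-checked with back-of-envelope arithmetic from the paper's own figures (7.35 TWh/yr displacing grid electricity at 403 g CO2 eq/kWh):

```python
# Back-of-envelope check of the reported CO2 offset. Figures come from
# the abstract; the gross calculation ignores HP's own small footprint
# (5.5-8.9 g CO2 eq/kWh), which would trim the net offset slightly.
generation_kwh = 7.35e9        # 7.35 TWh expressed in kWh
grid_intensity = 403.0         # g CO2-eq per kWh, UK marginal grid

offset_g = generation_kwh * grid_intensity
print(f"gross offset ~ {offset_g / 1e12:.2f} Mt CO2-eq per year")
```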
Weather Observation Systems and Efficiency of Fighting Forest Fires
NASA Astrophysics Data System (ADS)
Khabarov, N.; Moltchanova, E.; Obersteiner, M.
2007-12-01
Weather observation is an essential component of modern forest fire management systems. Satellite and in-situ weather observation systems might help to reduce forest loss, human casualties, and destruction of economic capital. In this paper, we develop and apply a methodology to assess the benefits of various weather observation systems in reducing burned area through early fire detection. In particular, we consider a model where the air patrolling schedule is determined by a fire hazard index. The index is computed from gridded daily weather data for an area covering parts of Spain and Portugal. We conduct a number of simulation experiments. First, the resolution of the original data set is artificially reduced. The reduction of the total forest burned area associated with air patrolling based on a finer weather grid indicates the benefit of using higher spatially resolved weather observations. Second, we consider a stochastic model to simulate forest fires and explore the sensitivity of the model with respect to the quality of input data. The analysis of a combination of satellite and ground monitoring reveals potential cost savings due to a "system of systems" effect and a substantial reduction in burned area. Finally, we estimate the marginal improvement schedule for loss of life and economic capital as a function of the improved fire observing system.
NASA Astrophysics Data System (ADS)
Gohl, Karsten; Denk, Astrid; Eagles, Graeme; Wobbe, Florian
2013-02-01
The Amundsen Sea Embayment (ASE), with Pine Island Bay (PIB) in the eastern embayment, is a key location for understanding tectonic processes of the Pacific margin of West Antarctica. PIB has long been suggested to contain the crustal boundary between the Thurston Island block and the Marie Byrd Land block. Plate tectonic reconstructions have shown that the initial rifting and breakup of New Zealand from West Antarctica occurred between Chatham Rise and eastern Marie Byrd Land at the ASE. Recent concepts have discussed the possibility that PIB is the site of one of the eastern branches of the West Antarctic Rift System (WARS). About 30,000 km of aeromagnetic data - collected opportunistically by ship-based helicopter flights - and tracks of ship-borne magnetics were recorded over the ASE shelf during two RV Polarstern expeditions in 2006 and 2010. Grid processing, Euler deconvolution, and 2D modelling were applied for the analysis of magnetic anomaly patterns, identification of structural lineaments, and characterisation of magnetic source bodies. The grid clearly outlines the boundary zone between the inner shelf with outcropping basement rocks and the sedimentary basins of the middle to outer shelf. Distinct zones of anomaly patterns and lineaments can be associated with at least three tectonic phases, from (1) magmatic emplacement zones of Cretaceous rifting and breakup (100-85 Ma), to (2) a southern distributed plate boundary zone of the Bellingshausen Plate (80-61 Ma), and (3) activities of the WARS indicated by NNE-SSW trending lineaments (55-30 Ma?). The analysis and interpretation are also used to constrain the directions of some of the flow paths of past grounded ice streams across the shelf.
Sass, John H.; Walters, Mark A.
1999-01-01
The Basin and Range Province of the Western United States covers most of Nevada and parts of adjoining states. It was formed by east-west tectonic extension that occurred mostly between 50 and 10 Ma, but which still is active in some areas. The northern Basin and Range, also known as the Great Basin, is higher in elevation, has higher regional heat flow and is more tectonically active than the southern Basin and Range which encompasses the Mojave and Sonoran Deserts. The Great Basin terrane contains the largest number of geothermal power plants in the United States, although most electrical production is at The Geysers and in the Salton Trough. Installed capacities of electrical power plants in the Great Basin vary from 1 to 260 MWe. Productivity is limited largely by permeability, relatively small productive reservoir volumes, available water, market conditions and the availability of transmission lines. Accessible, in-place heat is not a limiting condition for geothermal systems in the Great Basin. In many areas, economic temperatures (>120°C) can be found at economically drillable depths making it an appropriate region for implementation of the concept of "Enhanced Geothermal Systems" (EGS). An incremental approach to EGS would involve increasing the productivity and longevity of existing hydrothermal systems. Those geothermal projects that have an existing power plant and transmission facilities are the most attractive EGS candidates. Sites that were not developed owing to marginal size, lack of intrinsic permeability, and distance to existing electrical grid lines are also worthy of consideration for off-grid power production in geographically isolated markets such as ranches, farms, mines, and smelters.
Parthasarathy, Narayanan; DeShazer, David; England, Marilyn; Waag, David M
2006-11-01
A polysaccharide microarray platform was prepared by immobilizing Burkholderia pseudomallei and Burkholderia mallei polysaccharides. The array was tested successfully for detecting B. pseudomallei and B. mallei antibodies in human and animal serum. The advantages of this microarray technology over the current serodiagnosis of these bacterial infections are discussed.
EDRN Biomarker Reference Lab: Pacific Northwest National Laboratory — EDRN Public Portal
The purpose of this project is to develop antibody microarrays incorporating three major improvements compared to previous antibody microarray platforms, and to produce and disseminate these antibody microarray technologies for the Early Detection Research Network (EDRN) and the research community focusing on early detection, and risk assessment of cancer.
Development and Validation of a 2,000-Gene Microarray for the Fathead Minnow, Pimephales promelas
The development of the gene microarray has provided the field of ecotoxicology a new tool to identify modes of action (MOA) of chemicals and chemical mixtures. Herein we describe the development and application of a 2,000 gene oligonucleotide microarray for the fathead minnow (P...
Fabrication of Carbohydrate Microarrays by Boronate Formation.
Adak, Avijit K; Lin, Ting-Wei; Li, Ben-Yuan; Lin, Chun-Cheng
2017-01-01
The interactions between soluble carbohydrates and/or surface-displayed glycans and protein receptors are essential to many biological processes and cellular recognition events. Carbohydrate microarrays provide opportunities for high-throughput quantitative analysis of carbohydrate-protein interactions. Over the past decade, various techniques have been implemented for immobilizing glycans on solid surfaces in a microarray format. Herein, we describe a detailed protocol for fabricating carbohydrate microarrays that capitalizes on the intrinsic reactivity of boronic acid toward carbohydrates to form stable boronate diesters. A large variety of unprotected carbohydrates, ranging in structure from simple disaccharides and trisaccharides to considerably more complex human milk and blood group (oligo)saccharides, have been covalently immobilized in a single step on glass slides derivatized with high-affinity boronic acid ligands. The immobilized ligands in these microarrays retain their receptor-binding activities, including those toward lectins and antibodies, according to the structures of their pendant carbohydrates, allowing rapid analysis of a number of carbohydrate-recognition events within 30 h. This method facilitates the direct construction of otherwise difficult-to-obtain carbohydrate microarrays from underivatized glycans.
Signal amplification by rolling circle amplification on DNA microarrays
Nallur, Girish; Luo, Chenghua; Fang, Linhua; Cooley, Stephanie; Dave, Varshal; Lambert, Jeremy; Kukanskis, Kari; Kingsmore, Stephen; Lasken, Roger; Schweitzer, Barry
2001-01-01
While microarrays hold considerable promise in large-scale biology on account of their massively parallel analytical nature, there is a need for compatible signal amplification procedures to increase sensitivity without loss of multiplexing. Rolling circle amplification (RCA) is a molecular amplification method with the unique property of product localization. This report describes the application of RCA signal amplification for multiplexed, direct detection and quantitation of nucleic acid targets on planar glass and gel-coated microarrays. As few as 150 molecules bound to the surface of microarrays can be detected using RCA. Because of the linear kinetics of RCA, nucleic acid target molecules may be measured with a dynamic range of four orders of magnitude. Consequently, RCA is a promising technology for the direct measurement of nucleic acids on microarrays without the need for a potentially biasing preamplification step. PMID:11726701
Cell-Based Microarrays for In Vitro Toxicology
NASA Astrophysics Data System (ADS)
Wegener, Joachim
2015-07-01
DNA/RNA and protein microarrays have proven their outstanding bioanalytical performance throughout the past decades, given the unprecedented level of parallelization by which molecular recognition assays can be performed and analyzed. Cell microarrays (CMAs) make use of similar construction principles. They are applied to profile a given cell population with respect to the expression of specific molecular markers and also to measure functional cell responses to drugs and chemicals. This review focuses on the use of cell-based microarrays for assessing the cytotoxicity of drugs, toxins, or chemicals in general. It also summarizes CMA construction principles with respect to the cell types that are used for such microarrays, the readout parameters to assess toxicity, and the various formats that have been established and applied. The review ends with a critical comparison of CMAs and well-established microtiter plate (MTP) approaches.
The use of open source bioinformatics tools to dissect transcriptomic data.
Nitsche, Benjamin M; Ram, Arthur F J; Meyer, Vera
2012-01-01
Microarrays are a valuable technology for studying fungal physiology on a transcriptomic level. Various microarray platforms are available, comprising both single- and two-channel arrays. Despite the different technologies, preprocessing of microarray data generally includes quality control, background correction, normalization, and summarization of probe-level data. Subsequently, depending on the experimental design, diverse statistical analyses can be performed, including the identification of differentially expressed genes and the construction of gene coexpression networks. We describe how Bioconductor, a collection of open source and open development packages for the statistical programming language R, can be used for dissecting microarray data. We provide fundamental details that facilitate the process of getting started with R and Bioconductor. Using two publicly available microarray datasets from Aspergillus niger, we give detailed protocols on how to identify differentially expressed genes and how to construct gene coexpression networks.
Zhang, Zhaowei; Li, Peiwu; Hu, Xiaofeng; Zhang, Qi; Ding, Xiaoxia; Zhang, Wen
2012-01-01
Chemical contaminants in food have caused serious health issues in both humans and animals. Microarray technology is an advanced technique suitable for the analysis of chemical contaminants. In particular, the immuno-microarray approach is one of the most promising methods for chemical contaminant analysis. The use of microarrays for the analysis of chemical contaminants is the subject of this review. Fabrication strategies and detection methods for chemical contaminants are discussed in detail. Application to the analysis of mycotoxins, biotoxins, pesticide residues, and pharmaceutical residues is also described. Finally, future challenges and opportunities are discussed.
Enhancing Results of Microarray Hybridizations Through Microagitation
Toegl, Andreas; Kirchner, Roland; Gauer, Christoph; Wixforth, Achim
2003-01-01
Protein and DNA microarrays have become a standard tool in proteomics/genomics research. In order to guarantee fast and reproducible hybridization results, the diffusion limit must be overcome. Surface acoustic wave (SAW) micro-agitation chips efficiently agitate the smallest sample volumes (down to 10 μL and below) without introducing any dead volume. The advantages are reduced reaction time, increased signal-to-noise ratio, improved homogeneity across the microarray, and better slide-to-slide reproducibility. The SAW micromixer chips are the heart of the Advalytix ArrayBooster, which is compatible with all microarrays based on the microscope slide format. PMID:13678150
AFM 4.0: a toolbox for DNA microarray analysis
Breitkreutz, Bobby-Joe; Jorgensen, Paul; Breitkreutz, Ashton; Tyers, Mike
2001-01-01
We have developed a series of programs, collectively packaged as Array File Maker 4.0 (AFM), that manipulate and manage DNA microarray data. AFM 4.0 is simple to use, applicable to any organism or microarray, and operates within the familiar confines of Microsoft Excel. Given a database of expression ratios, AFM 4.0 generates input files for clustering, helps prepare colored figures and Venn diagrams, and can uncover aneuploidy in yeast microarray data. AFM 4.0 should be especially useful to laboratories that do not have access to specialized commercial or in-house software. PMID:11532221
Progress in the application of DNA microarrays.
Lobenhofer, E K; Bushel, P R; Afshari, C A; Hamadeh, H K
2001-01-01
Microarray technology has been applied to a variety of different fields to address fundamental research questions. The use of microarrays, or DNA chips, to study the gene expression profiles of biologic samples began in 1995. Since that time, the fundamental concepts behind the chip, the technology required for making and using these chips, and the multitude of statistical tools for analyzing the data have been extensively reviewed. For this reason, the focus of this review will be not on the technology itself but on the application of microarrays as a research tool and the future challenges of the field. PMID:11673116
Issues in the analysis of oligonucleotide tiling microarrays for transcript mapping
NASA Technical Reports Server (NTRS)
Royce, Thomas E.; Rozowsky, Joel S.; Bertone, Paul; Samanta, Manoj; Stolc, Viktor; Weissman, Sherman; Snyder, Michael; Gerstein, Mark
2005-01-01
Traditional microarrays use probes complementary to known genes to quantitate the differential gene expression between two or more conditions. Genomic tiling microarray experiments differ in that probes that span a genomic region at regular intervals are used to detect the presence or absence of transcription. This difference means the same sets of biases and the methods for addressing them are unlikely to be relevant to both types of experiment. We introduce the informatics challenges arising in the analysis of tiling microarray experiments as open problems to the scientific community and present initial approaches for the analysis of this nascent technology.
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods on many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
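The three-step recipe the abstract describes (select correlated genes, fit by least squares, shrink the coefficients before imputing) can be sketched in a simplified single-predictor form. Everything below, including the gene names, the toy expression matrix, and the 0.9 shrinkage factor, is illustrative rather than the authors' actual estimator:

```python
import random
import statistics

random.seed(1)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Toy expression matrix: 30 genes x 10 arrays; the target gene is
# missing its last value. All names and numbers are made up.
genes = {f"g{i}": [random.gauss(0, 1) for _ in range(10)] for i in range(30)}
target = [0.9 * v + random.gauss(0, 0.2) for v in genes["g0"]]
observed = target[:-1]                     # columns where target is known

# 1) select the most similar gene by |Pearson correlation|
best = max(genes, key=lambda g: abs(pearson(genes[g][:-1], observed)))

# 2) least-squares slope of target on the selected gene, then shrink it
x, y = genes[best][:-1], observed
mx, my = statistics.mean(x), statistics.mean(y)
beta = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
        sum((a - mx) ** 2 for a in x))
beta_shrunk = 0.9 * beta                   # illustrative shrinkage factor

# 3) impute the missing value from the selected gene's known column
imputed = my + beta_shrunk * (genes[best][-1] - mx)
print(f"selected {best}, imputed {imputed:.2f} (true {target[-1]:.2f})")
```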
2004-01-01
of RNA From Peripheral Blood Cells: A Validation Study for Molecular Diagnostics by Microarray and Kinetic RT-PCR Assays. Application in Aerospace Medicine.
Microarray profiling of chemical-induced effects is increasingly being used in medium- and high-throughput formats. In this study, we describe computational methods to identify molecular targets from whole-genome microarray data, using as an example the estrogen receptor α (ERα), ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sastry, Chellury; Pratt, Robert G.; Srivastava, Viraj
2010-12-01
In this report, we present the results of an analytical cost/benefit study of residential smart appliances from a utility/grid perspective in support of a joint stakeholder petition to the ENERGY STAR program within the Environmental Protection Agency (EPA) and Department of Energy (DOE). The goal of the petition is in part to provide appliance manufacturers incentives to hasten the production of smart appliances. The underlying hypothesis is that smart appliances can play a critical role in addressing some of the societal challenges, such as anthropogenic global warming, associated with increased electricity demand, and facilitate increased penetration of renewable sources of power. The appliances we consider include refrigerator/freezers, clothes washers, clothes dryers, room air-conditioners, and dishwashers. The petition requests the recognition that providing an appliance with smart grid capability, i.e., products that meet the definition of a smart appliance, is at least equivalent to a corresponding five percent increase in operational machine efficiencies. It is then expected that, given sufficient incentives and value propositions, and suitable automation capabilities built into smart appliances, residential consumers will adopt these smart appliances and will be willing participants in addressing the aforementioned societal challenges by more effectively managing their home electricity consumption. The analytical model we utilize in our cost/benefit analysis consists of a set of user-definable assumptions such as the definition of on-peak (hours of day, days of week, months of year), the expected percentage of normal consumer electricity consumption (also referred to as appliance loads) that can be shifted from peak hours to off-peak hours, the average power rating of each appliance, etc.
Based on these assumptions, we then formulate what the wholesale grid operating-cost savings, or benefits, would be if the smart capabilities of appliances were invoked, some percentage of appliance loads were shifted away from peak hours to run during off-peak hours, and appliance loads served power-system balancing needs, such as spinning reserves, that would otherwise have to be provided by generators. The rationale is that appliance loads can be curtailed for about ten minutes or less in response to a grid contingency without any diminution in the quality of service to the consumer. We then estimate the wholesale grid operating-cost savings based on historical wholesale-market clearing prices (locational marginal and spinning reserve prices) from major wholesale power markets in the United States. The savings derived from the smart grid capabilities of an appliance are then compared to the savings derived from a five percent increase in traditional operational machine efficiencies, referred to as cost in this report, to determine whether the savings in grid operating costs (benefits) are at least as high as or higher than the operational machine efficiency credit (cost).
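The core of the savings formulation, valuing load shifted from on-peak to off-peak hours at the difference in wholesale clearing prices, reduces to simple arithmetic. The prices and load figures below are hypothetical, not the report's:

```python
# Hypothetical peak-shift savings per appliance, in the spirit of the
# report's cost/benefit formulation. All numbers are invented.
shifted_kwh_per_day = 1.2      # appliance load moved off-peak, kWh/day
on_peak_price = 0.085          # $/kWh, wholesale clearing price on-peak
off_peak_price = 0.035         # $/kWh, wholesale clearing price off-peak
days_per_year = 365

# Annual wholesale operating-cost savings from the price differential.
annual_savings = (shifted_kwh_per_day * days_per_year
                  * (on_peak_price - off_peak_price))
print(f"annual wholesale savings ~ ${annual_savings:.2f} per appliance")
```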
Chromosomal Microarray versus Karyotyping for Prenatal Diagnosis
Wapner, Ronald J.; Martin, Christa Lese; Levy, Brynn; Ballif, Blake C.; Eng, Christine M.; Zachary, Julia M.; Savage, Melissa; Platt, Lawrence D.; Saltzman, Daniel; Grobman, William A.; Klugman, Susan; Scholl, Thomas; Simpson, Joe Leigh; McCall, Kimberly; Aggarwal, Vimla S.; Bunke, Brian; Nahum, Odelia; Patel, Ankita; Lamb, Allen N.; Thom, Elizabeth A.; Beaudet, Arthur L.; Ledbetter, David H.; Shaffer, Lisa G.; Jackson, Laird
2013-01-01
Background Chromosomal microarray analysis has emerged as a primary diagnostic tool for the evaluation of developmental delay and structural malformations in children. We aimed to evaluate the accuracy, efficacy, and incremental yield of chromosomal microarray analysis as compared with karyotyping for routine prenatal diagnosis. Methods Samples from women undergoing prenatal diagnosis at 29 centers were sent to a central karyotyping laboratory. Each sample was split in two; standard karyotyping was performed on one portion and the other was sent to one of four laboratories for chromosomal microarray. Results We enrolled a total of 4406 women. Indications for prenatal diagnosis were advanced maternal age (46.6%), abnormal result on Down’s syndrome screening (18.8%), structural anomalies on ultrasonography (25.2%), and other indications (9.4%). In 4340 (98.8%) of the fetal samples, microarray analysis was successful; 87.9% of samples could be used without tissue culture. Microarray analysis of the 4282 nonmosaic samples identified all the aneuploidies and unbalanced rearrangements identified on karyotyping but did not identify balanced translocations and fetal triploidy. In samples with a normal karyotype, microarray analysis revealed clinically relevant deletions or duplications in 6.0% with a structural anomaly and in 1.7% of those whose indications were advanced maternal age or positive screening results. Conclusions In the context of prenatal diagnostic testing, chromosomal microarray analysis identified additional, clinically significant cytogenetic information as compared with karyotyping and was equally efficacious in identifying aneuploidies and unbalanced rearrangements but did not identify balanced translocations and triploidies. (Funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and others; ClinicalTrials.gov number, NCT01279733.) PMID:23215555
The MGED Ontology: a resource for semantics-based description of microarray experiments.
Whetzel, Patricia L; Parkinson, Helen; Causton, Helen C; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Game, Laurence; Heiskanen, Mervi; Morrison, Norman; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; White, Joseph; Stoeckert, Christian J
2006-04-01
The generation of large amounts of microarray data and the need to share these data bring challenges for both data management and annotation, and highlight the need for standards. MIAME specifies the minimum information needed to describe a microarray experiment, and the Microarray Gene Expression Object Model (MAGE-OM) and resulting MAGE-ML provide a mechanism to standardize data representation for data exchange; however, a common terminology for data annotation is needed to support these standards. Here we describe the MGED Ontology (MO) developed by the Ontology Working Group of the Microarray Gene Expression Data (MGED) Society. The MO provides terms for annotating all aspects of a microarray experiment, from the design of the experiment and array layout, through to the preparation of the biological sample and the protocols used to hybridize the RNA and analyze the data. The MO was developed to provide terms for annotating experiments in line with the MIAME guidelines, i.e. to provide the semantics to describe a microarray experiment according to the concepts specified in MIAME. The MO does not attempt to incorporate terms from existing ontologies, e.g. those that deal with anatomical parts or developmental stage terms, but provides a framework to reference terms in other ontologies and therefore facilitates the use of ontologies in microarray data annotation. The MGED Ontology version 1.2.0 is available as a file in both DAML and OWL formats at http://mged.sourceforge.net/ontologies/index.php. Release notes and annotation examples are provided. The MO is also provided via the NCICB's Enterprise Vocabulary System (http://nciterms.nci.nih.gov/NCIBrowser/Dictionary.do). Stoeckrt@pcbi.upenn.edu Supplementary data are available at Bioinformatics online.
Pine, P S; Boedigheimer, M; Rosenzweig, B A; Turpaz, Y; He, Y D; Delenstarr, G; Ganter, B; Jarnagin, K; Jones, W D; Reid, L H; Thompson, K L
2008-11-01
Effective use of microarray technology in clinical and regulatory settings is contingent on the adoption of standard methods for assessing performance. The MicroArray Quality Control project evaluated the repeatability and comparability of microarray data on the major commercial platforms and laid the groundwork for the application of microarray technology to regulatory assessments. However, methods for assessing performance that are commonly applied to diagnostic assays used in laboratory medicine remain to be developed for microarray assays. A reference system for microarray performance evaluation and process improvement was developed that includes reference samples, metrics and reference datasets. The reference material is composed of two mixes of four different rat tissue RNAs that allow defined target ratios to be assayed using a set of tissue-selective analytes that are distributed along the dynamic range of measurement. The diagnostic accuracy of detected changes in expression ratios, measured as the area under the curve from receiver operating characteristic plots, provides a single commutable value for comparing assay specificity and sensitivity. The utility of this system for assessing overall performance was evaluated for relevant applications like multi-laboratory proficiency testing programs and single-laboratory process drift monitoring. The diagnostic accuracy of detection of a 1.5-fold change in signal level was found to be a sensitive metric for comparing overall performance. This test approaches the technical limit for reliable discrimination of differences between two samples using this technology. We describe a reference system that provides a mechanism for internal and external assessment of laboratory proficiency with microarray technology and is translatable to performance assessments on other whole-genome expression arrays used for basic and clinical research.
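The single commutable metric this record describes, diagnostic accuracy as the area under the ROC curve for detecting a defined ratio change, can be computed directly from the Mann-Whitney rank statistic. A minimal sketch with invented log-ratio scores (not the reference system's data):

```python
# AUC = probability that a randomly chosen true-change analyte scores
# higher than a randomly chosen unchanged one, counting ties as 1/2.

def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via pairwise comparisons."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# invented log-ratio measurements: analytes with a true 1.5-fold
# change in target ratio vs. analytes with no change
changed   = [0.58, 0.44, 0.61, 0.35, 0.52]
unchanged = [0.05, -0.10, 0.20, 0.01, -0.03]
print(roc_auc(changed, unchanged))  # 1.0
```

With these invented scores every changed analyte outranks every unchanged one, so the AUC is 1.0; real assay data fall between 0.5 (chance) and 1.0, which is what makes the value usable for proficiency testing and drift monitoring.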
A meta-data based method for DNA microarray imputation.
Jörnsten, Rebecka; Ouyang, Ming; Wang, Hui-Yu
2007-03-29
DNA microarray experiments are conducted in logical sets, such as time course profiling after a treatment is applied to the samples, or comparisons of the samples under two or more conditions. Due to cost and design constraints of spotted cDNA microarray experiments, each logical set commonly includes only a small number of replicates per condition. Despite the vast improvement of the microarray technology in recent years, missing values are prevalent. Intuitively, imputation of missing values is best done using many replicates within the same logical set. In practice, there are few replicates and thus reliable imputation within logical sets is difficult. However, it is in the case of few replicates that the presence of missing values, and how they are imputed, can have the most profound impact on the outcome of downstream analyses (e.g. significance analysis and clustering). This study explores the feasibility of imputation across logical sets, using the vast amount of publicly available microarray data to improve imputation reliability in the small sample size setting. We download all cDNA microarray data of Saccharomyces cerevisiae, Arabidopsis thaliana, and Caenorhabditis elegans from the Stanford Microarray Database. Through cross-validation and simulation, we find that, for all three species, our proposed imputation using data from public databases is far superior to imputation within a logical set, sometimes to an astonishing degree. Furthermore, the imputation root mean square error for significant genes is generally much lower than that of non-significant ones. Since downstream analysis of significant genes, such as clustering and network analysis, can be very sensitive to small perturbations of estimated gene effects, it is highly recommended that researchers apply reliable data imputation prior to further analysis. Our method can also be applied to cDNA microarray experiments from other species, provided good reference data are available.
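The core idea of borrowing strength from a public compendium can be illustrated with a toy nearest-neighbour imputation: fill a gene's missing value from the most similar reference profiles rather than from the few in-set replicates. The data and the simple KNN-mean rule below are illustrative assumptions, not the authors' actual meta-data method:

```python
import math

def euclid(a, b):
    """Euclidean distance between two equal-length profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_impute(target_profile, missing_idx, reference, k=2):
    """Fill target_profile[missing_idx] with the mean value, at that
    column, of the k reference genes whose profiles (excluding the
    missing column) are closest to the target's observed values."""
    obs = [v for i, v in enumerate(target_profile) if i != missing_idx]
    ranked = sorted(reference,
                    key=lambda r: euclid(obs, [v for i, v in enumerate(r)
                                               if i != missing_idx]))
    return sum(r[missing_idx] for r in ranked[:k]) / k

reference = [                     # rows: gene profiles from public data
    [1.0, 1.1, 0.9, 1.0],
    [1.1, 1.0, 1.0, 1.1],
    [5.0, 4.8, 5.2, 5.1],
]
profile = [1.05, 1.0, None, 1.05]  # value at index 2 is missing
print(knn_impute(profile, 2, reference))  # 0.95
```

The two low-expression reference genes are nearest, so the imputed value averages their column-2 entries; the distant high-expression gene is ignored, which is the point of using many reference profiles.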
Goodman, Corey W; Major, Heather J; Walls, William D; Sheffield, Val C; Casavant, Thomas L; Darbro, Benjamin W
2015-04-01
Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs: the measurement and use of genome-wide true- and false-negative data for the calculation of performance metrics and the comparison of CNV profiles between different microarray experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
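The per-probe calibration idea can be sketched as follows: treat calls from the higher-resolution array as truth, sweep a |log2 ratio| threshold on the lower-resolution array, and pick the threshold that maximizes Youden's J (TPR minus FPR). The data, the search grid, and the use of Youden's J as the selection rule are illustrative assumptions, not CNV-ROC's exact procedure:

```python
def rates(truth, log2, thr):
    """Per-probe TPR and FPR at a given |log2 ratio| threshold."""
    tp = sum(1 for t, x in zip(truth, log2) if t and abs(x) >= thr)
    fp = sum(1 for t, x in zip(truth, log2) if not t and abs(x) >= thr)
    pos = sum(truth)
    neg = len(truth) - pos
    return tp / pos, fp / neg

def best_threshold(truth, log2, grid):
    """Grid point maximizing Youden's J = TPR - FPR."""
    def youden(thr):
        tpr, fpr = rates(truth, log2, thr)
        return tpr - fpr
    return max(grid, key=youden)

truth = [1, 1, 1, 0, 0, 0, 0]                    # high-res array CNV calls
log2  = [0.9, -0.8, 0.7, 0.1, -0.2, 0.05, 0.3]  # low-res array ratios
thr = best_threshold(truth, log2, [0.1, 0.3, 0.5, 0.7])
print(thr)  # 0.5
```

At 0.5 the toy example separates the classes perfectly (TPR 1.0, FPR 0.0); looser thresholds admit false positives from probe noise, which is exactly the trade-off ROC calibration resolves.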
Construction of a cDNA microarray derived from the ascidian Ciona intestinalis.
Azumi, Kaoru; Takahashi, Hiroki; Miki, Yasufumi; Fujie, Manabu; Usami, Takeshi; Ishikawa, Hisayoshi; Kitayama, Atsusi; Satou, Yutaka; Ueno, Naoto; Satoh, Nori
2003-10-01
A cDNA microarray was constructed from a basal chordate, the ascidian Ciona intestinalis. The draft genome of Ciona has been read and inferred to contain approximately 16,000 protein-coding genes, and cDNAs for transcripts of 13,464 genes have been characterized and compiled as the "Ciona intestinalis Gene Collection Release I". In the present study, we constructed a cDNA microarray of these 13,464 Ciona genes. A preliminary experiment with Cy3- and Cy5-labeled probes showed extensive differential gene expression between fertilized eggs and larvae. In addition, there was a good correlation between results obtained by the present microarray analysis and those from previous EST analyses. This first microarray of a large collection of Ciona intestinalis cDNA clones should facilitate the analysis of global gene expression and gene networks during the embryogenesis of basal chordates.
Improvement in the amine glass platform by bubbling method for a DNA microarray
Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo
2015-01-01
A glass platform with high sensitivity for sexually transmitted diseases microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using the bubbling method can be used as a clinical diagnostic tool. PMID:26468293
Prediction of regulatory gene pairs using dynamic time warping and gene ontology.
Yang, Andy C; Hsu, Hui-Huang; Lu, Ming-Da; Tseng, Vincent S; Shih, Timothy K
2014-01-01
Selecting informative genes is the most important task for data analysis on microarray gene expression data. In this work, we aim at identifying regulatory gene pairs from microarray gene expression data. However, microarray data often contain multiple missing expression values. Missing value imputation is thus needed before further processing for regulatory gene pairs becomes possible. We develop a novel approach to first impute missing values in microarray time series data by combining k-Nearest Neighbour (KNN), Dynamic Time Warping (DTW) and Gene Ontology (GO). After missing values are imputed, we then perform gene regulation prediction based on our proposed DTW-GO distance measurement of gene pairs. Experimental results show that our approach is more accurate when compared with existing missing value imputation methods on real microarray data sets. Furthermore, our approach can also discover more regulatory gene pairs that are known in the literature than other methods.
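The distance at the heart of the paper's DTW-GO measure is classic dynamic time warping, which scores time-shifted expression profiles as similar. A minimal sketch of the standard DTW recurrence (the GO-based weighting and KNN imputation layers are omitted; this is not the authors' implementation):

```python
# Classic O(n*m) dynamic time warping with absolute-difference local cost.

def dtw(a, b):
    """DTW distance between two numeric sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# a profile and a one-step-lagged copy: Euclidean distance would be
# large, but DTW aligns the peaks and stays small
x = [0.0, 1.0, 2.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 2.0, 1.0]
print(dtw(x, y))  # 1.0
```

Tolerance to lags is why DTW suits regulator-target pairs, where the target's expression typically trails the regulator's by some delay.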
Draghici, Sorin; Tarca, Adi L; Yu, Longfei; Ethier, Stephen; Romero, Roberto
2008-03-01
The BioArray Software Environment (BASE) is a very popular MIAME-compliant, web-based microarray data repository. However, in BASE, as in most other microarray data repositories, the experiment annotation and raw data uploading can be very time-consuming, especially for large microarray experiments. We developed KUTE (Karmanos Universal daTabase for microarray Experiments) as a plug-in for BASE 2.0 that addresses these issues. KUTE provides an automatic experiment annotation feature and a completely redesigned data work-flow that dramatically reduce the human-computer interaction time. For instance, in BASE 2.0 a typical Affymetrix experiment involving 100 arrays required 4 h 30 min of user interaction time for experiment annotation, and 45 min for data upload/download. In contrast, for the same experiment, KUTE required only 28 min of user interaction time for experiment annotation, and 3.3 min for data upload/download. http://vortex.cs.wayne.edu/kute/index.html.
Temperature Gradient Effect on Gas Discrimination Power of a Metal-Oxide Thin-Film Sensor Microarray
Sysoev, Victor V.; Kiselev, Ilya; Frietsch, Markus; Goschnick, Joachim
2004-01-01
The paper presents results concerning the effect of spatial inhomogeneous operating temperature on the gas discrimination power of a gas-sensor microarray, with the latter based on a thin SnO2 film employed in the KAMINA electronic nose. Three different temperature distributions over the substrate are discussed: a nearly homogeneous one and two temperature gradients, equal to approx. 3.3 °C/mm and 6.7 °C/mm, applied across the sensor elements (segments) of the array. The gas discrimination power of the microarray is judged by using the Mahalanobis distance in the LDA (Linear Discrimination Analysis) coordinate system between the data clusters obtained by the response of the microarray to four target vapors: ethanol, acetone, propanol and ammonia. It is shown that the application of a temperature gradient increases the gas discrimination power of the microarray by up to 35%.
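The separability metric used here, the Mahalanobis distance between class clusters in an LDA-like coordinate system, can be sketched in two dimensions. The vapor clusters and the shared diagonal covariance below are invented for illustration; the real analysis used full LDA on the sensor-segment responses:

```python
import math

def mahalanobis_diag(mu_a, mu_b, var):
    """Distance between two means under a shared diagonal covariance."""
    return math.sqrt(sum((a - b) ** 2 / v
                         for a, b, v in zip(mu_a, mu_b, var)))

def mean(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

# invented 2-D discriminant coordinates for two vapor clusters
ethanol = [[1.0, 0.2], [1.2, 0.1], [1.1, 0.3]]
acetone = [[2.0, 1.0], [2.1, 1.2], [1.9, 1.1]]
var = [0.01, 0.01]  # assumed pooled per-axis variances
d = mahalanobis_diag(mean(ethanol), mean(acetone), var)
print(round(d, 3))  # 12.728
```

Because the distance is scaled by within-cluster variance, a larger value means clusters that are farther apart relative to their spread, which is how a temperature gradient's effect on discrimination power can be quantified.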
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentry, T.; Schadt, C.; Zhou, J.
Microarray technology has the unparalleled potential to simultaneously determine the dynamics and/or activities of most, if not all, of the microbial populations in complex environments such as soils and sediments. Researchers have developed several types of arrays that characterize the microbial populations in these samples based on their phylogenetic relatedness or functional genomic content. Several recent studies have used these microarrays to investigate ecological issues; however, most have only analyzed a limited number of samples, with relatively few experiments utilizing the full high-throughput potential of microarray analysis. This is due in part to the unique analytical challenges that these samples present with regard to sensitivity, specificity, quantitation, and data analysis. This review discusses specific applications of microarrays to microbial ecology research along with some of the latest studies addressing the difficulties encountered during analysis of complex microbial communities within environmental samples. With continued development, microarray technology may ultimately achieve its potential for comprehensive, high-throughput characterization of microbial populations in near real-time.
Chondrocyte channel transcriptomics
Lewis, Rebecca; May, Hannah; Mobasheri, Ali; Barrett-Jolley, Richard
2013-01-01
To date, a range of ion channels have been identified in chondrocytes using a number of different techniques, predominantly electrophysiological and/or biomolecular; each of these has its advantages and disadvantages. Here we aim to compare and contrast the data available from biophysical and microarray experiments. This letter analyses recent transcriptomics datasets from chondrocytes, accessible from the European Bioinformatics Institute (EBI). We discuss whether such bioinformatic analysis of microarray datasets can potentially accelerate identification and discovery of ion channels in chondrocytes. The ion channels which appear most frequently across these microarray datasets are discussed, along with their possible functions. We discuss whether functional or protein data exist which support the microarray data. A microarray experiment comparing gene expression in osteoarthritis and healthy cartilage is also discussed and we verify the differential expression of 2 of these genes, namely the genes encoding large calcium-activated potassium (BK) and aquaporin channels. PMID:23995703
Fisher, D; Markitziu, A; Fishel, D; Brayer, L
1984-07-01
Fifty-four paired, approximal amalgam fillings, extended (E) versus unextended (NE), were placed in forty-three patients and followed up to 4 years. Yearly measurements between the alveolar crest and (a) the apical margin of the fillings (E, NE), and (b) the cemento-enamel junction of the control group, were performed using bite-wing radiographs joined to a translucent grid magnified ten-fold. The rate of alveolar crest resorption was similar for the control (C) and the unextended filling (NE) and reached 0.45 mm after 4 years of follow-up. The resorption of the alveolar crest under the extended (E) filling was significantly higher and reached 0.80 mm after 4 years (P < 0.001).
Cherry, Catherine; Hopfe, Christina; MacGillivray, Brian; Pidgeon, Nick
2015-04-01
Decarbonising housing is a key UK government policy to mitigate climate change. Using discourse analysis, we assess how low carbon housing is portrayed within British broadsheet media. Three distinct storylines were identified. Dominating the discourse, Zero carbon housing promotes new-build, low carbon houses as offering high technology solutions to the climate problem. Retrofitting homes emphasises the need to reduce emissions within existing housing, tackling both climate change and rising fuel prices. A more marginal discourse, Sustainable living, frames low carbon houses as related to individual identities and 'off-grid' or greener lifestyles. Our analysis demonstrates that technical and economic paradigms dominate media discourse on low carbon housing, marginalising social and behavioural aspects. © The Author(s) 2013.
The Fault Block Model: A novel approach for faulted gas reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ursin, J.R.; Moerkeseth, P.O.
1994-12-31
The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. The model presented is efficient and, for highly faulted reservoirs, superior to other models (i.e., grid-based simulators) because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.
Brothers, Laura; Herman, Bruce M.; Hart, Patrick E.; Ruppel, Carolyn D.
2016-01-01
Subsea ice-bearing permafrost (IBPF) and associated gas hydrate in the Arctic have been subject to a warming climate and saline intrusion since the last transgression at the end of the Pleistocene. The consequent degradation of IBPF is potentially associated with significant degassing of dissociating gas hydrate deposits. Previous studies interpreted the distribution of subsea permafrost on the U.S. Beaufort continental shelf based on geographically sparse data sets and modeling of expected thermal history. The most cited work projects subsea permafrost to the shelf edge (∼100 m isobath). This study uses a compilation of stacking velocity analyses from ∼100,000 line-km of industry-collected multichannel seismic reflection data acquired over 57,000 km² of the U.S. Beaufort shelf to delineate continuous subsea IBPF. Gridded average velocities of the uppermost 750 ms two-way travel time range from 1475 to 3110 m/s. The monotonic, cross-shore pattern in velocity distribution suggests that the seaward extent of continuous IBPF is within 37 km of the modern shoreline at water depths < 25 m. These interpretations corroborate recent Beaufort seismic refraction studies and provide the best, margin-scale evidence that continuous subsea IBPF does not currently extend to the northern limits of the continental shelf.
Integrating Demand-Side Resources into the Electric Grid: Economic and Environmental Considerations
NASA Astrophysics Data System (ADS)
Fisher, Michael J.
Demand-side resources are taking an increasingly prominent role in providing essential grid services once provided by thermal power plants. This thesis considers the economic feasibility and environmental effects of integrating demand-side resources into the electric grid with consideration given to the diversity of market and environmental conditions that can affect their behavior. Chapter 2 explores the private economics and system-level carbon dioxide reduction when using demand response for spinning reserve. Steady end uses like lighting are more than twice as profitable as seasonal end uses because spinning reserve is needed year-round. Avoided carbon emission damages from using demand response instead of fossil fuel generation for spinning reserve are sufficient to justify incentives for demand response resources. Chapter 3 quantifies the system-level net emissions rate and private economics of behind-the-meter energy storage. Net emission rates are lower than marginal emission rates for power plants and in-line with estimates of net emission rates from grid-level storage. The economics are favorable for many buildings in regions with high demand charges like California and New York, even without subsidies. Future penetration into regions with average charges like Pennsylvania will depend greatly on installation cost reductions and wholesale prices for ancillary services. Chapter 4 outlines a novel econometric model to quantify potential revenues from energy storage that reduces demand charges. The model is based on a novel predictive metric that is derived from the building's load profile. Normalized revenue estimates are independent of the power capacity of the battery holding other performance characteristics equal, which can be used to calculate the profit-maximizing storage size. Chapter 5 analyzes the economic feasibility of flow batteries in the commercial and industrial market. 
Flow batteries at a 4-hour duration must be less expensive on a dollar per installed kWh basis, often by 20-30%, to break even with shorter duration li-ion or lead-acid despite allowing for deeper depth of discharge and superior cycle life. These results are robust to assumptions of tariff rates, battery round-trip efficiencies, amount of solar generation and whether the battery can participate in the wholesale energy and ancillary services markets.
Evaluation of model-predicted hazardous air pollutants (HAPs) near a mid-sized U.S. airport
NASA Astrophysics Data System (ADS)
Vennam, Lakshmi Pradeepa; Vizuete, William; Arunachalam, Saravanan
2015-10-01
Accurate modeling of aircraft-emitted pollutants in the vicinity of airports is essential to study the impact on local air quality and to answer policy and health-impact related issues. To quantify air quality impacts of airport-related hazardous air pollutants (HAPs), we carried out a fine-scale (4 × 4 km horizontal resolution) Community Multiscale Air Quality model (CMAQ) simulation at the T.F. Green airport in Providence (PVD), Rhode Island. We considered temporally and spatially resolved aircraft emissions from the new Aviation Environmental Design Tool (AEDT). These model predictions were then evaluated with observations from a field campaign focused on assessing HAPs near the PVD airport. The annual normalized mean error (NME) was in the range of 36-70% for all HAPs except for acrolein (>70%). The addition of highly resolved aircraft emissions showed only marginally incremental improvements in performance (1-2% decrease in NME) for some HAPs (formaldehyde, xylene). When compared to a coarser 36 × 36 km grid resolution, the 4 × 4 km grid resolution did improve performance by up to 5-20% NME for formaldehyde and acetaldehyde. The change in power setting (from the traditional International Civil Aviation Organization (ICAO) 7% to the observation-based 4%) doubled the aircraft idling emissions of HAPs, but led to only a 2% decrease in NME. Overall modeled aircraft-attributable contributions are in the range of 0.5-28% near a mid-sized airport grid-cell, with maximum impacts seen only within 4-16 km of the airport grid-cell. Comparison of CMAQ predictions with HAP estimates from EPA's National Air Toxics Assessment (NATA) did show similar annual mean concentrations and equally poor performance. Current estimates of HAPs for PVD are a challenge for modeling systems, and refinements in our ability to simulate aircraft emissions have made only incremental improvements.
Even with unrealistic increases in aviation HAP emissions, the model could not match observed concentrations near the airport's runway site. Our results suggest that other uncertainties in the modeling system, such as meteorology, HAP chemistry, or other emission sources, require increased scrutiny.
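The evaluation statistic quoted throughout this record is the normalized mean error. A common definition in air quality model evaluation, assumed here, is the sum of absolute model-observation differences divided by the sum of observations, expressed as a percentage:

```python
# NME as a percentage: 100 * sum|M_i - O_i| / sum O_i, over paired
# model predictions M and observations O. Values below are invented.

def normalized_mean_error(model, obs):
    """Normalized mean error (%) for paired model/observation series."""
    return 100.0 * sum(abs(m - o) for m, o in zip(model, obs)) / sum(obs)

obs   = [1.0, 2.0, 4.0, 3.0]  # e.g. observed HAP concentrations
model = [1.5, 1.5, 3.0, 4.0]  # paired model predictions
print(normalized_mean_error(model, obs))  # 30.0
```

Because the errors are normalized by the observed total rather than averaged per point, NME is comparable across pollutants with very different concentration scales, which is why a single 36-70% range can summarize performance across HAPs.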
Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf
2012-01-01
Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability. PMID:22553239
Applications of microarray technology in breast cancer research
Cooper, Colin S
2001-01-01
Microarrays provide a versatile platform for utilizing information from the Human Genome Project to benefit human health. This article reviews the ways in which microarray technology may be used in breast cancer research. Its diverse applications include monitoring chromosome gains and losses, tumour classification, drug discovery and development, DNA resequencing, mutation detection and investigating the mechanism of tumour development. PMID:11305951
ERIC Educational Resources Information Center
Chang, Ming-Mei; Briggs, George M.
2007-01-01
DNA microarrays are microscopic arrays on a solid surface, typically a glass slide, on which DNA oligonucleotides are deposited or synthesized in a high-density matrix with a predetermined spatial order. Several types of DNA microarrays have been developed and used for various biological studies. Here, we developed an undergraduate laboratory…
Deciphering the glycosaminoglycan code with the help of microarrays.
de Paz, Jose L; Seeberger, Peter H
2008-07-01
Carbohydrate microarrays have become a powerful tool to elucidate the biological role of complex sugars. Microarrays are particularly useful for the study of glycosaminoglycans (GAGs), a key class of carbohydrates. The high-throughput chip format enables rapid screening of large numbers of potential GAG sequences produced via a complex biosynthesis while consuming very little sample. Here, we briefly highlight the most recent advances involving GAG microarrays built with synthetic or naturally derived oligosaccharides. These chips are powerful tools for characterizing GAG-protein interactions and determining structure-activity relationships for specific sequences. Thereby, they contribute to decoding the information contained in specific GAG sequences.
Walt, David R
2010-01-01
This tutorial review describes how fibre optic microarrays can be used to create a variety of sensing and measurement systems. This review covers the basics of optical fibres and arrays, the different microarray architectures, and describes a multitude of applications. Such arrays enable multiplexed sensing for a variety of analytes including nucleic acids, vapours, and biomolecules. Polymer-coated fibre arrays can be used for measuring microscopic chemical phenomena, such as corrosion and localized release of biochemicals from cells. In addition, these microarrays can serve as a substrate for fundamental studies of single molecules and single cells. The review covers topics of interest to chemists, biologists, materials scientists, and engineers.
A database for the analysis of immunity genes in Drosophila: PADMA database.
Lee, Mark J; Mondal, Ariful; Small, Chiyedza; Paddibhatla, Indira; Kawaguchi, Akira; Govind, Shubha
2011-01-01
While microarray experiments generate voluminous data, discerning trends that support an existing or alternative paradigm is challenging. To synergize hypothesis building and testing, we designed the Pathogen Associated Drosophila MicroArray (PADMA) database for easy retrieval and comparison of microarray results from immunity-related experiments (www.padmadatabase.org). PADMA also allows biologists to upload their microarray results and compare them with datasets housed within PADMA. We tested PADMA using a preliminary dataset from Ganaspis xanthopoda-infected fly larvae, and uncovered unexpected trends in gene expression, reshaping our hypothesis. Thus, the PADMA database will be a useful resource for fly researchers to evaluate, revise, and refine hypotheses.
Schönmann, Susan; Loy, Alexander; Wimmersberger, Céline; Sobek, Jens; Aquino, Catharine; Vandamme, Peter; Frey, Beat; Rehrauer, Hubert; Eberl, Leo
2009-04-01
For cultivation-independent and highly parallel analysis of members of the genus Burkholderia, an oligonucleotide microarray (phylochip) consisting of 131 hierarchically nested 16S rRNA gene-targeted oligonucleotide probes was developed. A novel primer pair was designed for selective amplification of a 1.3 kb 16S rRNA gene fragment of Burkholderia species prior to microarray analysis. The diagnostic performance of the microarray for identification and differentiation of Burkholderia species was tested with 44 reference strains of the genera Burkholderia, Pandoraea, Ralstonia and Limnobacter. Hybridization patterns based on presence/absence of probe signals were interpreted semi-automatically using the novel likelihood-based strategy of the web-tool PhyloDetect. Eighty-eight per cent of the reference strains were correctly identified at the species level. The evaluated microarray was applied to investigate shifts in the Burkholderia community structure in acidic forest soil upon addition of cadmium, a condition that selected for Burkholderia species. The microarray results were in agreement with those obtained from phylogenetic analysis of Burkholderia 16S rRNA gene sequences recovered from the same cadmium-contaminated soil, demonstrating the value of the Burkholderia phylochip for determinative and environmental studies.
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. These parameters cannot be reliably estimated from only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
Ogunnaike, Babatunde A; Gelmi, Claudio A; Edwards, Jeremy S
2010-05-21
Gene expression studies generate large quantities of data with the defining characteristic that the number of genes (whose expression profiles are to be determined) exceeds the number of available replicates by several orders of magnitude. Standard spot-by-spot analysis still seeks to extract useful information for each gene on the basis of the number of available replicates, and thus plays to the weakness of microarrays. On the other hand, because of the data volume, treating the entire data set as an ensemble, and developing theoretical distributions for these ensembles, provides a framework that plays instead to the strength of microarrays. We present theoretical results showing that, under reasonable assumptions, the distribution of microarray intensities follows the Gamma model, with the biological interpretations of the model parameters emerging naturally. We subsequently establish that for each microarray data set, the fractional intensities can be represented as a mixture of Beta densities, and develop a procedure for using these results to draw statistical inference regarding differential gene expression. We illustrate the results with experimental data from gene expression studies on Deinococcus radiodurans following DNA damage using cDNA microarrays. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
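The Gamma/Beta relationship described above can be illustrated with a minimal simulation sketch. This is not the authors' procedure; the shape and scale values are arbitrary illustrative choices, and the data are simulated rather than taken from the Deinococcus radiodurans experiments. If two channel intensities are independent Gamma variables with a common scale, their fractional intensity R/(R+G) follows a Beta distribution, which the fit below recovers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate spot intensities for two hypothetical channels under a Gamma model.
# shape=2.0 and scale=500.0 are arbitrary illustrative parameters.
red = rng.gamma(shape=2.0, scale=500.0, size=5000)
green = rng.gamma(shape=2.0, scale=500.0, size=5000)

# Fit a Gamma distribution to one channel (location fixed at 0).
shape_hat, loc, scale_hat = stats.gamma.fit(red, floc=0)

# Fractional intensity x = R / (R + G); for independent Gamma variables with
# a common scale this follows a Beta(a, b) distribution, here Beta(2, 2).
frac = red / (red + green)
a_hat, b_hat, _, _ = stats.beta.fit(frac, floc=0, fscale=1)
```

The fitted `shape_hat` and `a_hat` should both land near 2.0, consistent with the Gamma-to-Beta construction.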
Development and characterization of a disposable plastic microarray printhead.
Griessner, Matthias; Hartig, Dave; Christmann, Alexander; Pohl, Carsten; Schellhase, Michaela; Ehrentreich-Förster, Eva
2011-06-01
During the last decade microarrays have become a powerful analytical tool. Commonly, microarrays are produced in a non-contact manner using silicone printheads. However, silicone printheads are expensive and cannot be used as disposables. Here, we show the development and functional characterization of 8-channel plastic microarray printheads that overcome both disadvantages of their conventional silicone counterparts. A combination of injection-molding and laser processing allows us to produce a high quantity of cheap, customizable and disposable microarray printheads. The use of plastics (e.g., polystyrene) minimizes the need for surface modifications previously required for proper printing results. Time-consuming regeneration processes, cleaning procedures and contaminations caused by residual samples are avoided. The utilization of plastic printheads for viscous liquids, such as cell suspensions or whole blood, is possible. Furthermore, functional parts (e.g., particle filters) can be included within the plastic printhead. Our printhead is compatible with commercially available TopSpot devices but provides additional economic and technical benefits as compared to conventional TopSpot printheads, while fulfilling all requirements demanded of the latter. All in all, this work describes how the field of traditional microarray spotting can be extended significantly by low-cost plastic printheads.
Optimal Control of Shock Wave Turbulent Boundary Layer Interactions Using Micro-Array Actuation
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Tinapple, Jon; Surber, Lewis
2006-01-01
The intent of this study on micro-array flow control is to demonstrate the viability and economy of Response Surface Methodology (RSM) to determine optimal designs of micro-array actuation for controlling the shock wave turbulent boundary layer interactions within supersonic inlets and to compare these concepts to conventional bleed performance. The term micro-array refers to micro-actuator arrays which have heights of 25 to 40 percent of the undisturbed supersonic boundary layer thickness. This study covers optimal control of shock wave turbulent boundary layer interactions using standard micro-vane, tapered micro-vane, and standard micro-ramp arrays at a free stream Mach number of 2.0. The effectiveness of the three micro-array devices was tested using a shock pressure rise induced by the 10° shock generator, which was sufficiently strong as to separate the turbulent supersonic boundary layer. The overall design purpose of the micro-arrays was to alter the properties of the supersonic boundary layer by introducing a cascade of counter-rotating micro-vortices in the near-wall region. In this manner, the impact of the shock wave boundary layer (SWBL) interaction on the main flow field was minimized without boundary bleed.
Tra, Yolande V; Evans, Irene M
2010-01-01
BIO2010 put forth the goal of improving the mathematical educational background of biology students. The analysis and interpretation of microarray high-dimensional data can be very challenging and is best done by a statistician and a biologist working and teaching in a collaborative manner. We set up such a collaboration and designed a course on microarray data analysis. We started using Genome Consortium for Active Teaching (GCAT) materials and Microarray Genome and Clustering Tool software and added R statistical software along with Bioconductor packages. In response to student feedback, one microarray data set was fully analyzed in class, starting from preprocessing to gene discovery to pathway analysis using the latter software. A class project was to conduct a similar analysis where students analyzed their own data or data from a published journal paper. This exercise showed the impact that filtering, preprocessing, and different normalization methods had on gene inclusion in the final data set. We conclude that this course achieved its goals to equip students with skills to analyze data from a microarray experiment. We offer our insight about collaborative teaching as well as how other faculty might design and implement a similar interdisciplinary course.
Design of microarray experiments for genetical genomics studies.
Bueno Filho, Júlio S S; Gilmour, Steven G; Rosa, Guilherme J M
2006-10-01
Microarray experiments have been used recently in genetical genomics studies, as an additional tool to understand the genetic mechanisms governing variation in complex traits, such as for estimating heritabilities of mRNA transcript abundances, for mapping expression quantitative trait loci, and for inferring regulatory networks controlling gene expression. Several articles on the design of microarray experiments discuss situations in which treatment effects are assumed fixed and without any structure. In the case of two-color microarray platforms, several authors have studied reference and circular designs. Here, we discuss the optimal design of microarray experiments whose goals refer to specific genetic questions. Some examples are used to illustrate the choice of a design for comparing fixed, structured treatments, such as genotypic groups. Experiments targeting single genes or chromosomal regions (such as with transgene research) or multiple epistatic loci (such as within a selective phenotyping context) are discussed. In addition, microarray experiments in which treatments refer to families or to subjects (within family structures or complex pedigrees) are presented. In these cases treatments are more appropriately considered to be random effects, with specific covariance structures, in which the genetic goals relate to the estimation of genetic variances and the heritability of transcriptional abundances.
WebArray: an online platform for microarray data analysis
Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng
2005-01-01
Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform, we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165
Support vector machine and principal component analysis for microarray data classification
NASA Astrophysics Data System (ADS)
Astuti, Widi; Adiwijaya
2018-03-01
Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. By using data mining techniques, microarray data classification can be performed to improve the accuracy of cancer diagnosis compared to traditional techniques. Microarray data are characterized by small sample sizes but huge dimensionality. Hence, the challenge for researchers is to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposed the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM), optimized by kernel functions, as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross-validation, and evaluation and analysis were then conducted in terms of both accuracy and running time. The results showed that the scheme obtained 100% accuracy for the Ovarian and Lung Cancer data when linear and cubic kernel functions were used. In terms of running time, PCA greatly reduced the running time for every data set.
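The PCA-then-SVM pipeline with 5-fold cross-validation described above can be sketched in a few lines of scikit-learn. This is a generic illustration on synthetic data, not the authors' implementation; the sample counts, feature counts, and number of components are arbitrary stand-ins for a "few samples, many features" microarray dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for microarray data: 100 samples, 2000 features.
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)

# PCA for dimension reduction, then an SVM classifier. kernel="linear"
# corresponds to the linear kernel; kernel="poly", degree=3 would
# correspond to the cubic kernel mentioned in the abstract.
model = make_pipeline(PCA(n_components=30), SVC(kernel="linear"))

# 5-fold cross-validation, as in the proposed scheme.
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 3))
```

Fitting PCA inside the pipeline ensures the projection is re-estimated on each training fold, avoiding information leakage into the held-out fold.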
Microarray-based screening of heat shock protein inhibitors.
Schax, Emilia; Walter, Johanna-Gabriela; Märzhäuser, Helene; Stahl, Frank; Scheper, Thomas; Agard, David A; Eichner, Simone; Kirschning, Andreas; Zeilinger, Carsten
2014-06-20
Based on the importance of heat shock proteins (HSPs) in diseases such as cancer, Alzheimer's disease or malaria, inhibitors of these chaperones are needed. Today's state-of-the-art techniques to identify HSP inhibitors are performed in microplate format, requiring large amounts of proteins and potential inhibitors. In contrast, we have developed a miniaturized protein microarray-based assay to identify novel inhibitors, allowing analysis with 300 pmol of protein. The assay is based on competitive binding of fluorescence-labeled ATP and potential inhibitors to the ATP-binding site of HSP. Therefore, the developed microarray enables the parallel analysis of different ATP-binding proteins on a single microarray. We have demonstrated the possibility of multiplexing by immobilizing full-length human HSP90α and HtpG of Helicobacter pylori on microarrays. Fluorescence-labeled ATP was competed by novel geldanamycin/reblastatin derivatives with IC50 values in the range of 0.5 nM to 4 μM and Z*-factors between 0.60 and 0.96. Our results demonstrate the potential of a target-oriented multiplexed protein microarray to identify novel inhibitors for different members of the HSP90 family. Copyright © 2014 Elsevier B.V. All rights reserved.
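The Z*-factor quoted above is a standard screening-assay quality metric; assuming it matches the widely used Z'-factor of Zhang et al., it compares the separation of positive and negative control distributions. A minimal sketch (the control readouts below are invented illustrative values, not data from the HSP90 microarray study):

```python
import numpy as np

def z_prime(positive, negative):
    """Z'-factor: 1 - 3*(sd_p + sd_n) / |mean_p - mean_n|.
    Values above ~0.5 are conventionally taken to indicate an
    excellent screening assay."""
    p = np.asarray(positive, dtype=float)
    n = np.asarray(negative, dtype=float)
    return 1.0 - 3.0 * (p.std(ddof=1) + n.std(ddof=1)) / abs(p.mean() - n.mean())

# Hypothetical fluorescence readouts for positive and negative controls.
pos = [980, 1010, 1005, 995, 1012]
neg = [102, 98, 110, 95, 100]
print(round(z_prime(pos, neg), 2))  # → 0.94
```

Well-separated controls with tight spreads, as here, push the value toward 1; the 0.60-0.96 range reported in the abstract indicates robust assay windows.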
Sun, Xiuhua; Wang, Huaixin; Wang, Yuanyuan; Gui, Taijiang; Wang, Ke; Gao, Changlu
2018-04-15
Nonspecific binding or adsorption of biomolecules presents a major obstacle to higher sensitivity, specificity and reproducibility in microarray technology. We report herein a method to fabricate antifouling microarrays via photopolymerization of biomimetic betaine compounds. In brief, carboxybetaine methacrylate was polymerized as arrays for protein sensing, while sulfobetaine methacrylate was polymerized as background. With the abundant carboxyl groups on array surfaces and zwitterionic polymers on the entire surface, this microarray allows biomolecular immobilization and recognition with low nonspecific interactions due to its antifouling property. Therefore, low concentrations of target molecules can be captured and detected by this microarray. It was proved that a concentration of 10 ng mL⁻¹ bovine serum albumin in a sample matrix of bovine serum can be detected by the microarray derivatized with anti-bovine serum albumin. Moreover, with proper hydrophilic-hydrophobic designs, this approach can be applied to fabricate surface-tension droplet arrays, which allow surface-directed cell adhesion and growth. These light-controllable approaches constitute a clear improvement in the design of antifouling interfaces, which may lead to greater flexibility in the development of interfacial architectures and wider application in blood-contact microdevices. Copyright © 2017 Elsevier B.V. All rights reserved.
EDGE3: A web-based solution for management and analysis of Agilent two color microarray experiments
Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A
2009-01-01
Background The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features, nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE3 was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. Results EDGE3 has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well-defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE3 is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics-based analyses, collaborate between research groups through a user-based security model, and access the raw data files and quality control files generated by the software used to extract the signals from an array image.
Conclusion Here, we present EDGE3, an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE3 provides a means for managing RNA samples and arrays during the hybridization process. EDGE3 is freely available for download at http://edge.oncology.wisc.edu/. PMID:19732451
NASA Astrophysics Data System (ADS)
Afonso Dias, Nuno; Afilhado, Alexandra; Schnürle, Philippe; Gallais, Flora; Soares, José; Fuck, Reinhardt; Cupertino, José; Viana, Adriano; Moulin, Maryline; Aslanian, Daniel; Matias, Luís; Evain, Mikael; Loureiro, Afonso
2017-04-01
The deep crustal structure of the North-East equatorial Brazilian margin was investigated during the MAGIC (Margins of brAzil, Ghana and Ivory Coast) joint project, conducted in 2012. The main goal was to understand the fundamental processes leading to the thinning and, finally, breakup of the continental crust in the context of a pull-apart system with two strike-slip borders. The offshore Barreirinhas Basin was probed by a set of 5 intersecting deep seismic wide-angle profiles, with the deployment of short-period OBSs from IFREMER and land stations from the Brazilian pool. The experiment was devoted to obtaining the 2D structure along the directions of the flow lines, parallel and perpendicular to the margin segmentation, from tomography and forward modeling. The deployed OBSs also recorded lateral shooting along some profiles, allowing a 3D tomographic inversion complementing the results of the 2D modeling. Due to the large variation of the water-column thickness, the heterogeneous crustal structure and the Moho depth, several approaches were tested to generate initial input models and to set the grid parameterization and inversion parameters. The assessment of the 3D model was performed by standard synthetic tests and comparison with the obtained 2D forward models. The results evidence a NW-SE segmentation of the margin, following the opening direction of this pull-apart basin, and a N-S segmentation that marks the passage between Basins II and III. The signature of the segmentation is evident in the tomograms, where the shallowing of the basement from Basin II towards the oceanic domain is well marked by a NW-SE velocity gradient. Both 2D forward modeling and 3D tomographic inversion indicate a N-S segmentation in the proto-oceanic and oceanic domains, at least at the shallow mantle level. In the southern area the mantle is much faster than in the north.
In all profiles crossing Basin II, a deep layer with velocities of 7.4-7.6 km/s generates both refracted and reflected phases from its boundaries, in agreement with the 3D model, which indicates a much more gradual transition from crustal to mantle velocities than in the remaining segments. The intersection of Basins II, III and the proto-oceanic crust is well marked by the absence of seismic energy propagation at deep-crust to mantle levels, with no lateral arrival being recorded. Publication supported by FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
[Oligonucleotide microarray for subtyping avian influenza virus].
Xueqing, Han; Xiangmei, Lin; Yihong, Hou; Shaoqiang, Wu; Jian, Liu; Lin, Mei; Guangle, Jia; Zexiao, Yang
2008-09-01
Avian influenza viruses are important human and animal respiratory pathogens, and rapid diagnosis of novel emerging avian influenza viruses is vital for effective global influenza surveillance. We developed an oligonucleotide microarray-based method for subtyping all avian influenza viruses (16 HA and 9 NA subtypes). In total, 25 pairs of primers specific for different subtypes and 1 pair of universal primers were carefully designed based on the genomic sequences of influenza A viruses retrieved from the GenBank database. Several multiplex RT-PCR methods were then developed, and the target cDNAs of 25 subtype viruses were amplified by RT-PCR or overlapping PCR for evaluating the microarray. A further 52 oligonucleotide probes specific for all 25 subtype viruses were designed according to published gene sequences of avian influenza viruses in the amplified target cDNA domains, and a microarray for subtyping influenza A virus was developed. Its specificity and sensitivity were then validated using different subtype strains and 2653 samples from 49 different areas. The results showed that all subtypes of influenza virus could be identified simultaneously on this microarray with high sensitivity, which reached 2.47 pfu/mL of virus or 2.5 ng of target DNA. Furthermore, there was no cross-reaction with other avian respiratory viruses. An oligonucleotide microarray-based strategy for detection of avian influenza viruses has been developed. Such a diagnostic microarray will be useful in discovering and identifying all subtypes of avian influenza virus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masson, D.G.; Huggett, Q.J.; Weaver, P.P.E.
1991-08-01
Side-scan sonar data, cores, and high-resolution profiles have been used to produce an integrated model of sedimentation for the continental margin west of the Canary Islands. Long-range side-scan sonar (GLORIA) data and a grid of 3.5-kHz profiles, covering some 200,000 km², allow a regional appraisal of sedimentation. More detailed studies of selected areas have been undertaken using a new 30 kHz deep-towed side-scan sonar (TOBI) developed by the U.K. Institute of Oceanographic Sciences. Sediment cores have been used both to calibrate acoustic facies identified on sonographs and for detailed stratigraphic studies. The most recent significant sedimentation event in the area is the Saharan Sediment Slide, which carried material from the upper continental slope off West Africa to the edge of the Madeira Abyssal Plain, a distance of some 1000 km. The authors' data show the downslope evolution of the debris flow. Near the Canaries, it is a 20-m-thick deposit rafting coherent blocks of more than 1 km diameter; side-scan records show a strong flow-parallel fabric on a scale of tens of meters. On the lower slope, the debris flow thins to a few meters, the flow fabric disappears, and the rafted blocks decrease to meters in diameter. Side-scan data from the lower slope show that the Saharan Slide buries an older landscape of turbidity current channels, typically 1 km wide and 50 m deep. Evidence from the Madeira Abyssal Plain indicates a history of large but infrequent turbidity currents, the emplacement of which is related to the effects of sea-level changes on the northwest African margin.
Quaternary evolution of the Fennoscandian Ice Sheet from 3D seismic data
NASA Astrophysics Data System (ADS)
Montelli, A.; Dowdeswell, J. A.; Ottesen, D.; Johansen, S. E.
2016-12-01
The Quaternary seismic stratigraphy and architecture of the mid-Norwegian continental shelf and slope are investigated using extensive grids of marine 2D and 3D seismic reflection data that cover more than 100,000 km² of the continental margin. At least 26 distinct regional palaeo-surfaces have been interpreted within the stratigraphy of the Quaternary Naust Formation on the mid-Norwegian margin. Multiple assemblages of buried glacigenic landforms are preserved within the Naust Formation across most of the study area, facilitating detailed palaeo-glaciological reconstructions. We document a marine-terminating, calving Fennoscandian Ice Sheet (FIS) margin present periodically on the Norwegian shelf since at least the beginning of the Quaternary. Elongate, streamlined landforms interpreted as mega-scale glacial lineations (MSGLs) have been found within the upper part of the Naust sequence N (~1.9-1.6 Ma), suggesting the development of fast-flowing ice streams since that time. Shifts in the location of depocentres and the direction of features indicative of fast ice-flow suggest that several reorganisations in the FIS drainage have occurred since 1.5 Ma. Subglacial landforms reveal a complex and dynamic ice sheet, with converging palaeo-ice streams and several flow-switching events that may reflect major changes in topography and internal ice-sheet structure. Lack of subglacial meltwater channels suggests a largely distributed, low-volume meltwater system that drained the FIS through permeable subglacial till without leaving much erosional evidence. This regional palaeo-environmental examination of the FIS provides a useful framework for ice-sheet modelling and shows that fragmentary preservation of buried surfaces and variability of ice-sheet dynamics should be taken into account when reconstructing glacial history from spatially limited datasets.
Houseknecht, D.W.; Bird, K.J.
2004-01-01
Beaufortian strata (Jurassic-Lower Cretaceous) in the National Petroleum Reserve in Alaska (NPRA) are a focus of exploration since the 1994 discovery of the nearby Alpine oil field (>400 MMBO). These strata include the Kingak Shale, a succession of depositional sequences influenced by rift opening of the Arctic Ocean Basin. Interpretation of sequence stratigraphy and depositional facies from a regional two-dimensional seismic grid and well data allows the definition of four sequence sets that each displays unique stratal geometries and thickness trends across NPRA. A Lower to Middle Jurassic sequence set includes numerous transgressive-regressive sequences that collectively built a clastic shelf in north-central NPRA. Along the south-facing, lobate shelf margin, condensed shales in transgressive systems tracts downlap and coalesce into a basinal condensed section that is likely an important hydrocarbon source rock. An Oxfordian-Kimmeridgian sequence set, deposited during pulses of uplift on the Barrow arch, includes multiple transgressive-regressive sequences that locally contain well-winnowed, shoreface sandstones at the base of transgressive systems tracts. These shoreface sandstones and overlying shales, deposited during maximum flooding, form stratigraphic traps that are the main objective of exploration in the Alpine play in NPRA. A Valanginian sequence set includes at least two transgressive-regressive sequences that display relatively distal characteristics, suggesting high relative sea level. An important exception is the presence of a basal transgressive systems tract that locally contains shoreface sandstones of reservoir quality. A Hauterivian sequence set includes two transgressive-regressive sequences that constitute a shelf-margin wedge developed as the result of tectonic uplift along the Barrow arch during rift opening of the Arctic Ocean Basin. 
This sequence set displays stratal geometries suggesting incision and synsedimentary collapse of the shelf margin. © 2004 The American Association of Petroleum Geologists. All rights reserved.
Contourite drifts on early passive margins as an indicator of established lithospheric breakup
NASA Astrophysics Data System (ADS)
Soares, Duarte M.; Alves, Tiago M.; Terrinha, Pedro
2014-09-01
The Albian-Cenomanian breakup sequence (BS) offshore Northwest Iberia is mapped, described and characterised for the first time in terms of its seismic and depositional facies. The interpreted dataset comprises a large grid of regional (2D) seismic-reflection profiles, complemented by industry and ODP/DSDP borehole data. Within the BS, distinct seismic facies are observed that reflect the presence of: (a) black shales and fine-grained turbidites, (b) mass-transport deposits (MTDs) and coarse-grained turbidites, and (c) contourite drifts. Borehole data show that these depositional systems developed as mixed carbonate-siliciclastic sediments proximally, and as organic-carbon-rich mudstones (black shales) distally on the Northwest Iberia margin. MTDs and turbidites tend to occur on the continental slope, frequently in association with large-scale olistostromes. Distally, these change into interbedded fine-grained turbidites and black shales showing widespread evidence of deep-water current activity towards the top of the BS. Current activity is expressed by intra-BS erosional surfaces and sediment drifts. The results in this paper are important as they demonstrate that contourite drifts are ubiquitous features in the study area after Aptian-Albian lithospheric breakup. Therefore, we interpret the recognition of contourite drifts in Northwest Iberia as having significant palaeogeographic implications. Contourite drifts materialise the onset of important deep-water circulation, marking the establishment of oceanic gateways between two fully separated continental margins. As a corollary, we postulate that the generation of deep-water geostrophic currents had a significant impact on North Atlantic climate and ocean circulation during the Albian-Cenomanian, with the record of such impacts being preserved in the contourite drifts analysed in this work.
NASA Astrophysics Data System (ADS)
Li, Guohui; Bai, Ling; Zhou, Yuanze; Wang, Xiaoran; Cui, Qinghui
2017-11-01
P-wave triplications related to the 410 km discontinuity (the 410) were clearly observed from the vertical component seismograms of three intermediate-depth earthquakes that occurred in the Indo-Burma Subduction Zone (IBSZ) and were recorded by the Chinese Digital Seismic Network (CDSN). By matching the observed P-wave triplications with synthetics through a grid search, we obtained the best-fit models for four azimuthal profiles (I-IV from north to south) to constrain the P-wave velocity structure near the 410 beneath the southeastern margin of the Tibetan Plateau (TP). A ubiquitous low-velocity layer (LVL) resides atop the mantle transition zone (MTZ). The LVL is 25 to 40 km thick, with a P-wave velocity decrement ranging from approximately -5.3% to -3.6% relative to the standard Earth model IASP91. An abrupt transition in the velocity decrement of the LVL was observed between profiles II and III. We postulate that the mantle structure beneath the southeastern margin of the TP is primarily controlled by the southeastern extrusion of the TP to the north combined with the eastward subduction of the Indian plate to the south, and is not affected by the Emeishan mantle plume. We attribute the LVL to partial melting induced by water and/or other volatiles released from the subducted Indian plate and the stagnant Pacific plate, rather than to the upwelling or the remnants of the Emeishan mantle plume. A high-velocity anomaly ranging from approximately 1.0% to 1.5% was also detected at a depth of 542 to 600 km, providing additional evidence for the remnants of the subducted Pacific plate within the MTZ.
Goldman, M.; Gvirtzman, H.; Hurwitz, S.
2004-01-01
An extensive time domain electromagnetic (TDEM) survey covering the Sea of Galilee with a dense grid of points has recently been carried out. A total of 269 offshore and 33 supplementary onshore TDEM soundings were performed along six N-S and ten W-E profiles and at selected points both offshore and onshore along the whole coastline. The interpreted resistivities were calibrated against direct salinity measurements in the Haon-2 borehole and relatively deep (5 m) cores taken from the lake bottom. It was found that resistivities below 1 ohm-m are solely indicative of groundwater salinity exceeding 10,000 mg Cl/l. Such low resistivities (high salinities) were detected at depths greater than 15 m below almost the entire bottom of the lake. At some parts of the lake, particularly in the south, the saline water was detected at shallower depths, sometimes a few meters below the bottom. Relatively high resistivity (fresh groundwater) was found along the margins of the lake down to roughly 100 m, the maximum exploration depth of the system. The detected sharp lateral contrasts at the lake margin between high and low resistivities coincide with the faults separating the carbonate and clastic units, respectively. The geometry of the fresh/saline groundwater interface below the central part of the lake is very similar to the shape of the lake bottom, probably due to diffusive salt transport from the bottom sediments to the lake water. The above geophysical observations suggest different salt transport mechanisms from the sediments to the central part of the lake (diffusion) and from regional aquifers to the margins of the lake (advection). © 2004 Science From Israel/LPP Ltd.
Stevenson, David A; Carey, John C; Cowley, Brett C; Bayrak-Toydemir, Pinar; Mao, Rong; Brothman, Arthur R
2004-12-01
We report a de novo cryptic 11p duplication found by genomic microarray with a cytogenetically detected 4p deletion. Terminal 4p deletions cause Wolf-Hirschhorn syndrome, but the phenotype probably was modified by the paternally derived 11p duplication. This emphasizes the clinical utility of genomic microarray.
DNA microarrays and their use in dermatology.
Mlakar, Vid; Glavac, Damjan
2007-03-01
Multiple different DNA microarray technologies are available on the market today. They can be used for studying either DNA or RNA with the purpose of identifying and explaining the role of genes involved in different processes. This paper reviews different DNA microarray platforms available for such studies and their usage in cases of malignant melanomas, psoriasis, and exposure of keratinocytes and melanocytes to UV illumination.
mRNA-Based Parallel Detection of Active Methanotroph Populations by Use of a Diagnostic Microarray
Bodrossy, Levente; Stralis-Pavese, Nancy; Konrad-Köszler, Marianne; Weilharter, Alexandra; Reichenauer, Thomas G.; Schöfer, David; Sessitsch, Angela
2006-01-01
A method was developed for the mRNA-based application of microbial diagnostic microarrays to detect active microbial populations. DNA- and mRNA-based analyses of environmental samples were compared and confirmed via quantitative PCR. Results indicated that mRNA-based microarray analyses may provide additional information on the composition and functioning of microbial communities. PMID:16461725
DNA Microarray Wet Lab Simulation Brings Genomics into the High School Curriculum
ERIC Educational Resources Information Center
Campbell, A. Malcolm; Zanta, Carolyn A.; Heyer, Laurie J.; Kittinger, Ben; Gabric, Kathleen M.; Adler, Leslie
2006-01-01
We have developed a wet lab DNA microarray simulation as part of a complete DNA microarray module for high school students. The wet lab simulation has been field tested with high school students in Illinois and Maryland as well as in workshops with high school teachers from across the nation. Instead of using DNA, our simulation is based on pH…
Optimization of cDNA microarray procedures using criteria that do not rely on external standards.
Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Laegreid, Astrid
2007-10-18
The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis, it may be difficult to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process, both in laboratory practices and in data processing, using criteria that do not rely on external standards. We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression, termed "high contrasts" (rat cell lines AR42J and NRK52E), compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments, a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction, and weight-based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross-platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards.
Our optimization method is highly applicable both to long oligo-probe microarrays, which have become commonly used for well-characterized organisms such as man, mouse and rat, and to cDNA microarrays, which remain important for organisms with incomplete genome sequence information, such as many bacteria, plants and fish.
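The core of the HCSSM criterion described above, calling genes differentially expressed in the high-contrast comparison while using the self-self hybridization as an empirical null to estimate the FDR, can be sketched as follows. This is a minimal illustration on simulated scores, not the authors' actual pipeline; the function name and data are hypothetical.

```python
import numpy as np

def estimate_fdr(contrast_scores, null_scores, threshold):
    """Empirical FDR at a given |score| cutoff.

    contrast_scores: per-gene statistics from the high-contrast experiment.
    null_scores: per-gene statistics from the self-self hybridization,
                 used as an empirical null distribution.
    """
    called = np.sum(np.abs(contrast_scores) >= threshold)
    if called == 0:
        return 0.0
    # Expected false positives: the fraction of null scores exceeding the
    # cutoff, scaled to the number of genes tested in the contrast.
    expected_false = np.mean(np.abs(null_scores) >= threshold) * len(contrast_scores)
    return min(1.0, expected_false / called)

# Simulated example: 500 of 5000 genes truly differentially expressed.
rng = np.random.default_rng(0)
null = rng.normal(0, 1, 5000)                       # self-self: no true signal
contrast = np.concatenate([rng.normal(0, 1, 4500),
                           rng.normal(3, 1, 500)])  # shifted = truly changed
ok = [t for t in np.linspace(0.5, 5.0, 46) if estimate_fdr(contrast, null, t) <= 0.05]
best_cutoff = min(ok) if ok else None  # smallest cutoff calls the most genes at FDR <= 5%
```

In this sketch the cutoff maximizing the number of called genes at a fixed FDR is the optimization criterion; the published method applies the same count-maximizing idea to compare laboratory protocol variants rather than thresholds.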
Erickson, A; Fisher, M; Furukawa-Stoffer, T; Ambagala, A; Hodko, D; Pasick, J; King, D P; Nfon, C; Ortega Polo, R; Lung, O
2018-04-01
Microarray technology can be useful for pathogen detection as it allows simultaneous interrogation of the presence or absence of a large number of genetic signatures. However, most microarray assays are labour-intensive and time-consuming to perform. This study describes the development and initial evaluation of a multiplex reverse transcription (RT)-PCR and novel accompanying automated electronic microarray assay for simultaneous detection and differentiation of seven important viruses that affect swine (foot-and-mouth disease virus [FMDV], swine vesicular disease virus [SVDV], vesicular exanthema of swine virus [VESV], African swine fever virus [ASFV], classical swine fever virus [CSFV], porcine reproductive and respiratory syndrome virus [PRRSV] and porcine circovirus type 2 [PCV2]). The novel electronic microarray assay utilizes a single, user-friendly instrument that integrates and automates capture probe printing, hybridization, washing and reporting on a disposable electronic microarray cartridge with 400 features. This assay accurately detected and identified a total of 68 isolates of the seven targeted virus species, including 23 samples of FMDV, representing all seven serotypes, and 10 CSFV strains, representing all three genotypes. The assay successfully detected viruses in clinical samples from the field, in experimentally infected animals (as early as 1 day post-infection (dpi) for FMDV and SVDV, 4 dpi for ASFV, and 5 dpi for CSFV), as well as in biological materials that were spiked with target viruses. The limit of detection was 10 copies/μl for ASFV, PCV2 and PRRSV, 100 copies/μl for SVDV, CSFV and VESV, and 1,000 copies/μl for FMDV. The electronic microarray component had reduced analytical sensitivity for several of the target viruses when compared with the multiplex RT-PCR.
The integration of capture probe printing allows custom onsite array printing as needed, while electrophoretically driven hybridization generates results faster than conventional microarrays that rely on passive hybridization. With further refinement, this novel, rapid, highly automated microarray technology has potential applications in multipathogen surveillance of livestock diseases. © 2017 Her Majesty the Queen in Right of Canada • Transboundary and Emerging Diseases.
The T-Reflection and the deep crustal structure of the Vøring Margin offshore Mid-Norway
NASA Astrophysics Data System (ADS)
Abdelmalak, M. M.; Faleide, J. I.; Planke, S.; Gernigon, L.; Zastrozhnov, D.; Shephard, G. E.; Myklebust, R.
2017-12-01
Volcanic passive margins are characterized by the massive occurrence of mafic extrusive and intrusive rocks before and during plate breakup, playing a major role in determining the evolution pattern and the deep structure of magma-rich margins. Deep seismic reflection data frequently provide imaging of strong continuous reflections in the middle/lower crust. In this context, we have completed a detailed 2D seismic interpretation of the deep crustal structure of the Vøring volcanic margin, offshore mid-Norway, where high-quality seismic data allow the identification of high-amplitude reflections, locally referred to as the T-Reflection (TR). Using the dense seismic grid, we have mapped the top of the TR in order to compare it with filtered Bouguer gravity anomalies and seismic refraction data. The TR is identified between 7 and 10 s. In places it consists of a single smooth reflection; frequently, however, it is associated with a set of rough multiple reflections displaying discontinuous segments with varying geometries, amplitudes and contact relationships. The TR appears to be connected to deep sill networks and is locally located at the continuation of basement high structures, or terminates over fractures and faults. The spatial correlation between the filtered positive Bouguer gravity anomalies and the TR indicates that the latter represents a high impedance boundary contrast associated with a high-density/velocity body. Within an uncertainty of ± 2.5 km, the depth of the mapped TR is found to correspond to the depth of the top of the Lower Crustal Body (LCB), characterized by high P-wave velocities (>7 km/s), in 50% of the outer Vøring Margin areas, whereas different depths between the TR and the top LCB are estimated for the remaining areas. We present a tectonic scenario in which a large part of the deep structure could be composed of preserved upper continental basement and middle to lower crustal lenses of inherited and intruded high-grade metamorphic rocks.
Deep intrusions into the faulted crustal blocks are responsible for the rough character of the TR, whereas intrusions into the lower crust and detachment faults are likely responsible for its smoother appearance. Deep magma intrusions can be responsible for metamorphic processes leading to an increased velocity of the lower crust of more than 7 km/s.
Statistical Use of Argonaute Expression and RISC Assembly in microRNA Target Identification
Stanhope, Stephen A.; Sengupta, Srikumar; den Boon, Johan; Ahlquist, Paul; Newton, Michael A.
2009-01-01
MicroRNAs (miRNAs) posttranscriptionally regulate targeted messenger RNAs (mRNAs) by inducing cleavage or otherwise repressing their translation. We address the problem of detecting m/miRNA targeting relationships in Homo sapiens from microarray data by developing statistical models that are motivated by the biological mechanisms used by miRNAs. The focus of our modeling is the construction, activity, and mediation of RNA-induced silencing complexes (RISCs) competent for targeted mRNA cleavage. We demonstrate that regression models accommodating RISC abundance and controlling for other mediating factors fit the expression profiles of known target pairs substantially better than models based on m/miRNA expressions alone, and lead to verifications of computational target pair predictions that are more sensitive than those based on marginal expression levels. Because our models are fully independent of exogenous results from sequence-based computational methods, they are appropriate for use as either a primary or secondary source of information regarding m/miRNA target pair relationships, especially in conjunction with high-throughput expression studies. PMID:19779550
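The abstract's central claim, that regressions including RISC abundance fit target expression better than expression-only models, can be illustrated with a toy simulation. The data-generating model and variable names below are hypothetical assumptions for illustration, not the authors' model.

```python
import numpy as np

def fit_r2(X, y):
    """Ordinary least squares with an intercept; returns the R^2 of the fit."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Toy data: repression of the target mRNA depends on the amount of miRNA
# actually loaded into RISC, i.e. on the product miRNA * RISC abundance.
rng = np.random.default_rng(0)
mirna = rng.uniform(1.0, 2.0, 300)   # miRNA expression
risc = rng.uniform(0.0, 1.0, 300)    # RISC abundance (e.g. Argonaute level)
mrna = 5.0 - 2.0 * mirna * risc + 0.1 * rng.normal(size=300)

r2_expression_only = fit_r2(mirna.reshape(-1, 1), mrna)
r2_with_risc = fit_r2(np.column_stack([mirna, mirna * risc]), mrna)
# The RISC-aware model explains far more of the variance than miRNA expression alone.
```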
NASA Astrophysics Data System (ADS)
Brazhnik, Kristina; Sokolova, Zinaida; Baryshnikova, Maria; Bilan, Regina; Nabiev, Igor; Sukhanova, Alyona
Multiplexed analysis of cancer markers is crucial for early tumor diagnosis and screening. We have designed a lab-on-a-bead microarray for quantitative detection of three breast cancer markers in human serum. Quantum dots were used as bead-bound fluorescent tags for identifying each marker by means of flow cytometry. Antigen-specific beads reliably detected CA 15-3, CEA, and CA 125 in serum samples, providing clear discrimination between the samples with respect to the antigen levels. The novel microarray is advantageous over routine single-analyte assays due to the simultaneous detection of several markers. The developed microarray is therefore a promising tool for serum tumor marker profiling.
Emergent FDA biodefense issues for microarray technology: process analytical technology.
Weinberg, Sandy
2004-11-01
A successful biodefense strategy relies upon any combination of four approaches. A nation can protect its troops and citizenry first by advance mass vaccination; second, by responsive ring vaccination; and third, by post-exposure therapeutic treatment (including vaccine therapies). Finally, protection can be achieved by rapid detection followed by exposure limitation (suits and air filters) or immediate treatment (e.g., antibiotics, rapid vaccines and iodine pills). All of these strategies rely upon or are enhanced by microarray technologies. Microarrays can be used to screen, engineer and test vaccines. They are also used to construct early detection tools. While effective biodefense utilizes a variety of tactical tools, microarray technology is a valuable arrow in that quiver.
NASA Astrophysics Data System (ADS)
Shi, Lei; Chu, Zhenyu; Dong, Xueliang; Jin, Wanqin; Dempsey, Eithne
2013-10-01
Highly oriented growth of a hybrid microarray was realized by a facile template-free method on gold substrates for the first time. The proposed formation mechanism involves an interfacial structure-directing force arising from self-assembled monolayers (SAMs) between gold substrates and hybrid crystals. Different SAMs and variable surface coverage of the assembled molecules play a critical role in the interfacial directing forces and influence the morphologies of hybrid films. A highly oriented hybrid microarray was formed on the highly aligned and vertical SAMs of 1,4-benzenedithiol molecules with rigid backbones, which afforded an intense structure-directing power for the oriented growth of hybrid crystals. Additionally, the density of the microarray could be adjusted by controlling the surface coverage of assembled molecules. Based on the hybrid microarray modified electrode with a large specific area (ca. 10 times its geometrical area), a label-free electrochemical DNA biosensor was constructed for the detection of an oligonucleotide fragment of the avian flu virus H5N1. The DNA biosensor displayed a significantly low detection limit of 5 pM (S/N = 3), a wide linear response from 10 pM to 10 nM, as well as excellent selectivity, good regeneration and high stability. We expect that the proposed template-free method can provide a new reference for the fabrication of a highly oriented hybrid array and the as-prepared microarray modified electrode will be a promising paradigm in constructing highly sensitive and selective biosensors. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr03097k
Wang, Hong; Bi, Yongyi; Tao, Ning; Wang, Chunhong
2005-08-01
To detect the differential expression of cell signal transduction genes associated with benzene poisoning, and to explore the pathogenic mechanisms of blood system damage induced by benzene. The peripheral white blood cell gene expression profiles of 7 benzene poisoning patients, including one with aplastic anemia, were determined by cDNA microarray. Seven chips from normal workers served as controls. Cluster analysis of the gene expression profiles was performed. Among the 4265 target genes, 176 genes associated with cell signal transduction were differentially expressed. 35 up-regulated genes, including PTPRC, STAT4 and IFITM1, were found in at least 6 microarrays; 45 down-regulated genes, including ARHB, PPP3CB and CDC37, were found in at least 5 microarrays. cDNA microarray technology is an effective technique for screening differentially expressed cell signal transduction genes. Disorder in cell signal transduction may play a certain role in the pathogenic mechanism of benzene poisoning.
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
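The relaxation idea described above, selecting features by maximizing discriminative power while penalizing redundancy after relaxing the binary indicator vector to the unit box, might be sketched as follows. This is a generic illustration under assumed relevance/redundancy definitions, not the authors' algorithm; it omits the low-rank approximation and the multi-task extension.

```python
import numpy as np

def select_features(X, y, k, n_iter=300, lr=0.01):
    """Relaxed binary-integer feature selection (sketch).

    Maximizes a'x - 0.5 * x'Qx, where a holds per-feature relevance and Q
    feature-feature redundancy; the binary indicator x is relaxed to [0, 1]
    and optimized by projected gradient ascent, then the k largest weights
    are kept.
    """
    n, d = X.shape
    Xc = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    yc = (y - y.mean()) / (y.std() + 1e-12)
    a = np.abs(Xc.T @ yc) / n   # relevance: |corr(feature, label)|
    Q = np.abs(Xc.T @ Xc) / n   # redundancy: |corr| between features
    x = np.full(d, 0.5)         # relaxed indicator, started mid-box
    for _ in range(n_iter):
        grad = a - Q @ x                       # gradient of the objective
        x = np.clip(x + lr * grad, 0.0, 1.0)   # project back onto the box
    return np.argsort(-x)[:k]

# Synthetic check: only feature 3 is informative for y.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)
selected = select_features(X, y, k=2)
```

Relaxing the integer constraint turns an NP-hard combinatorial search into a box-constrained quadratic problem, which is the efficiency gain the abstract refers to.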
Yamamoto, F; Yamamoto, M
2004-07-01
We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.
Principles of gene microarray data analysis.
Mocellin, Simone; Rossi, Carlo Riccardo
2007-01-01
The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High-throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected by tumors. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amount of data generated with microarray technology.
Trivedi, Prinal; Edwards, Jode W; Wang, Jelai; Gadbury, Gary L; Srinivasasainagendra, Vinodh; Zakharkin, Stanislav O; Kim, Kyoungmi; Mehta, Tapan; Brand, Jacob P L; Patki, Amit; Page, Grier P; Allison, David B
2005-04-06
Many efforts in microarray data analysis are focused on providing tools and methods for the qualitative analysis of microarray data. HDBStat! (High-Dimensional Biology-Statistics) is a software package designed for the analysis of high-dimensional biology data such as microarray data. It was initially developed for the analysis of microarray gene expression data, but it can also be used for some applications in proteomics and other aspects of genomics. HDBStat! provides statisticians and biologists a flexible and easy-to-use interface to analyze complex microarray data using a variety of methods for data preprocessing, quality control analysis and hypothesis testing. Results generated from the data preprocessing, quality control and hypothesis testing methods are output in the form of Excel CSV tables, graphs and an HTML report summarizing the data analysis. HDBStat! is platform-independent software that is freely available to academic institutions and non-profit organizations. It can be downloaded from our website http://www.soph.uab.edu/ssg_content.asp?id=1164.
Palacín, Arantxa; Gómez-Casado, Cristina; Rivas, Luis A.; Aguirre, Jacobo; Tordesillas, Leticia; Bartra, Joan; Blanco, Carlos; Carrillo, Teresa; Cuesta-Herranz, Javier; de Frutos, Consolación; Álvarez-Eire, Genoveva García; Fernández, Francisco J.; Gamboa, Pedro; Muñoz, Rosa; Sánchez-Monge, Rosa; Sirvent, Sofía; Torres, María J.; Varela-Losada, Susana; Rodríguez, Rosalía; Parro, Victor; Blanca, Miguel; Salcedo, Gabriel; Díaz-Perales, Araceli
2012-01-01
The study of cross-reactivity in allergy is key to both understanding the allergic response of many patients and providing them with a rational treatment. In the present study, protein microarrays and a co-sensitization graph approach were used in conjunction with an allergen microarray immunoassay. This enabled us to include a wide number of proteins and a large number of patients, and to study sensitization profiles among members of the LTP family. Fourteen LTPs from the most frequent plant food-induced allergies in the geographical area studied were printed onto a microarray specifically designed for this research. 212 patients with fruit allergy and 117 food-tolerant pollen allergic subjects were recruited from seven regions of Spain with different pollen profiles, and their sera were tested with the allergen microarray. This approach has proven itself to be a good tool to study cross-reactivity between members of the LTP family, and could become a useful strategy to analyze other families of allergens. PMID:23272072
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It remains a significant challenge to develop stochastic models that capture the noise in microarray expression profiles, which has a profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of the stochastic models and the parameters of an error model describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in the hybridization intensity, where the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also establishes a general method for developing stochastic models from experimental information.
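The error model and fitted parameters from the study are not reproduced in the abstract; as a minimal, hypothetical sketch of the core idea, the snippet below simulates the chemical Langevin equation for a birth-death process (production rate `k`, stochastic degradation rate `gamma`) and checks that its stationary variance is approximately equal to its mean, i.e. a monomial of order one in the intensity:

```python
import numpy as np

def simulate_birth_death(k, gamma, t_end=50.0, dt=0.01, seed=1):
    """Euler-Maruyama simulation of dX = (k - gamma*X) dt + sqrt(k + gamma*X) dW,
    the chemical Langevin equation for a birth-death process with
    stochastic degradation."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = k / gamma  # start at the deterministic steady state
    xs = np.empty(n)
    for i in range(n):
        drift = k - gamma * x
        diffusion = np.sqrt(max(k + gamma * x, 0.0))
        x = max(x + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(), 0.0)
        xs[i] = x
    return xs

# Discard a burn-in, then compare the stationary mean and variance:
xs = simulate_birth_death(k=200.0, gamma=1.0)[2000:]
print(round(xs.mean(), 1), round(xs.var(), 1))  # variance ~ mean
```

For this process, a larger production rate `k` raises the mean intensity and the variance in lockstep; other stochastic processes (e.g. noise in transcription rates) would change the order of the monomial.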
Large-scale analysis of gene expression using cDNA microarrays promises the rapid detection of the mode of toxicity for drugs and other chemicals. cDNA microarrays were used to examine chemically-induced alterations of gene expression in HepG2 cells exposed to oxidative ...
Where statistics and molecular biology meet: microarray experiments.
Kelmansky, Diana M
2013-01-01
This review chapter presents a statistical point of view on microarray experiments, with the purpose of understanding the apparent contradictions that often appear in relation to their results. We give a brief introduction to molecular biology for nonspecialists. We describe microarray experiments from their construction and the biological principles on which the experiments rely, to data acquisition and analysis. The role of epidemiological approaches and sample size considerations is also discussed.
The objective of this study is to develop a microarray to test for cyanobacteria and cyanotoxin genes in drinking water reservoirs as an aid to risk assessment and management of water supplies. The microarray will include probes recognizing important freshwater cyanobacterial tax...
Chao, Jie; Li, Zhenhua; Li, Jing; Peng, Hongzhen; Su, Shao; Li, Qian; Zhu, Changfeng; Zuo, Xiaolei; Song, Shiping; Wang, Lianhui; Wang, Lihua
2016-07-15
Microarrays of biomolecules hold great promise in the fields of genomics, proteomics, and clinical assays on account of their remarkably parallel and high-throughput assay capability. However, the fluorescence detection used in most conventional DNA microarrays is still limited by sensitivity. In this study, we have demonstrated a novel universal and highly sensitive platform for fluorescent detection of sequence-specific DNA at the femtomolar level by combining dextran-coated microarrays with hybridization chain reaction (HCR) signal amplification. A three-dimensional dextran matrix was covalently coated on the glass surface as a scaffold to immobilize DNA recognition probes, increasing the surface binding capacity and accessibility. DNA nanowire tentacles were formed on the matrix surface for efficient signal amplification by capturing multiple fluorescent molecules in a highly ordered way. By quantifying microscopic fluorescent signals, the synergetic effects of dextran and HCR greatly improved the sensitivity of the DNA microarrays, with a detection limit of 10 fM (1×10^5 molecules). The detection assay could recognize a one-base mismatch, with fluorescence signals dropping to ~20%. This cost-effective microarray platform also worked well with samples in serum and thus shows great potential for clinical diagnosis.
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding Java application at http://microarray.crump.ucla.edu/focus
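The exact thresholds PRIM learns depend on the data and are not given in the abstract; as a toy sketch (hypothetical cutoff values), a conjoint rule is simply a Boolean conjunction of a fold-induction threshold and a raw-fluorescence threshold:

```python
import numpy as np

def conjoint_rule(fold_change, raw_fluorescence, min_fold=1.8, min_raw=200.0):
    """Flag transcripts whose fold-induction AND raw fluorescence both
    exceed their thresholds (a Boolean conjunction, as in PRIM-style rules).
    The threshold values here are illustrative, not those of the paper."""
    fold_change = np.asarray(fold_change, dtype=float)
    raw_fluorescence = np.asarray(raw_fluorescence, dtype=float)
    return (fold_change >= min_fold) & (raw_fluorescence >= min_raw)

# Toy example: three genes; only the first passes both criteria
flags = conjoint_rule([2.5, 3.0, 1.2], [500.0, 150.0, 900.0])
print(flags.tolist())  # [True, False, False]
```

Conditioning on raw fluorescence guards against calling large fold-changes on signals too dim to be reliable, which is how the conjunction trades a small loss of candidates for control over false positives.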
Nanotechnology: moving from microarrays toward nanoarrays.
Chen, Hua; Li, Jun
2007-01-01
Microarrays are important tools for the high-throughput analysis of biomolecules, and their use for parallel screening of nucleic acid and protein profiles has become an industry standard. Limitations of microarrays include the requirement for relatively large sample volumes, long incubation times, and limited detection sensitivity. In addition, traditional microarrays rely on bulky instrumentation for detection, and sample amplification and labeling are laborious, which increases analysis cost and delays results. These problems keep microarray techniques from point-of-care and field applications. One strategy for overcoming them is to develop nanoarrays, particularly electronics-based nanoarrays. With further miniaturization, higher sensitivity, and simplified sample preparation, nanoarrays could potentially be employed for biomolecular analysis in personal healthcare and monitoring of trace pathogens. This chapter introduces the concept and advantages of nanotechnology and then describes current methods and protocols for novel nanoarrays in three areas: (1) label-free nucleic acid analysis using nanoarrays, (2) nanoarrays for protein detection by conventional optical fluorescence microscopy as well as by novel label-free methods such as atomic force microscopy, and (3) nanoarrays for enzymatic-based assays. These nanoarrays will have significant applications in drug discovery, medical diagnosis, genetic testing, environmental monitoring, and food safety inspection.
Severgnini, Marco; Bicciato, Silvio; Mangano, Eleonora; Scarlatti, Francesca; Mezzelani, Alessandra; Mattioli, Michela; Ghidoni, Riccardo; Peano, Clelia; Bonnal, Raoul; Viti, Federica; Milanesi, Luciano; De Bellis, Gianluca; Battaglia, Cristina
2006-06-01
Meta-analysis of microarray data is increasingly important, considering both the availability of multiple platforms using disparate technologies and the accumulation in public repositories of data sets from different laboratories. We addressed the issue of comparing gene expression profiles from two microarray platforms by devising a standardized investigative strategy. We tested this procedure by studying MDA-MB-231 cells, which undergo apoptosis on treatment with resveratrol. Gene expression profiles were obtained using high-density, short-oligonucleotide, single-color microarray platforms: GeneChip (Affymetrix) and CodeLink (Amersham). Interplatform analyses were carried out on 8414 common transcripts represented on both platforms, as identified by LocusLink ID, representing 70.8% and 88.6% of annotated GeneChip and CodeLink features, respectively. We identified 105 differentially expressed genes (DEGs) on CodeLink and 42 DEGs on GeneChip. Among them, only 9 DEGs were commonly identified by both platforms. Multiple analyses (BLAST alignment of probes with target sequences, gene ontology, literature mining, and quantitative real-time PCR) permitted us to investigate the factors contributing to the generation of platform-dependent results in single-color microarray experiments. An effective approach to cross-platform comparison involves microarrays of similar technologies, samples prepared by identical methods, and a standardized battery of bioinformatic and statistical analyses.
Janse, Ingmar; Bok, Jasper M.; Hamidjaja, Raditijo A.; Hodemaekers, Hennie M.; van Rotterdam, Bart J.
2012-01-01
Microarrays provide a powerful analytical tool for the simultaneous detection of multiple pathogens. We developed diagnostic suspension microarrays for sensitive and specific detection of the biothreat pathogens Bacillus anthracis, Yersinia pestis, Francisella tularensis and Coxiella burnetii. Two assay chemistries for amplification and labeling were developed, one method using direct hybridization and the other using target-specific primer extension, combined with hybridization to universal arrays. Asymmetric PCR products for both assay chemistries were produced by using a multiplex asymmetric PCR amplifying 16 DNA signatures (16-plex). The performances of both assay chemistries were compared and their advantages and disadvantages are discussed. The developed microarrays detected multiple signature sequences and an internal control which made it possible to confidently identify the targeted pathogens and assess their virulence potential. The microarrays were highly specific and detected various strains of the targeted pathogens. Detection limits for the different pathogen signatures were similar or slightly higher compared to real-time PCR. Probit analysis showed that even a few genomic copies could be detected with 95% confidence. The microarrays detected DNA from different pathogens mixed in different ratios and from spiked or naturally contaminated samples. The assays that were developed have a potential for application in surveillance and diagnostics. PMID:22355407
Grenville-Briggs, Laura J; Stansfield, Ian
2011-01-01
This report describes a linked series of Masters-level computer practical workshops. They comprise an advanced functional genomics investigation, based upon analysis of a microarray dataset probing yeast DNA damage responses. The workshops require the students to analyse highly complex transcriptomics datasets, and were designed to stimulate active learning through experience of current research methods in bioinformatics and functional genomics. They seek to closely mimic a realistic research environment, and require the students first to propose research hypotheses, then test those hypotheses using specific sections of the microarray dataset. The complexity of the microarray data provides students with the freedom to propose their own unique hypotheses, tested using appropriate sections of the microarray data. This research latitude was highly regarded by students and is a strength of this practical. In addition, the focus on DNA damage by radiation and mutagenic chemicals allows them to place their results in a human medical context, and successfully sparks broad interest in the subject material. In evaluation, 79% of students scored the practical workshops on a five-point scale as 4 or 5 (totally effective) for student learning. More broadly, the general use of microarray data as a "student research playground" is also discussed.
2010-01-01
Background Recent developments in high-throughput methods of analyzing transcriptomic profiles are promising for many areas of biology, including ecophysiology. However, although commercial microarrays are available for most common laboratory models, transcriptome analysis in non-traditional model species still remains a challenge. Indeed, the signal resulting from heterologous hybridization is low and difficult to interpret because of the weak complementarity between probe and target sequences, especially when no microarray dedicated to a genetically close species is available. Results We show here that transcriptome analysis in a species genetically distant from laboratory models is made possible by using MAXRS, a new method of analyzing heterologous hybridization on microarrays. This method takes advantage of the design of several commercial microarrays, with different probes targeting the same transcript. To illustrate and test this method, we analyzed the transcriptome of king penguin pectoralis muscle hybridized to Affymetrix chicken microarrays, two organisms separated by an evolutionary distance of approximately 100 million years. The differential gene expression observed between different physiological situations computed by MAXRS was confirmed by real-time PCR on 10 genes out of 11 tested. Conclusions MAXRS appears to be an appropriate method for gene expression analysis under heterologous hybridization conditions. PMID:20509979
Microarray platform affords improved product analysis in mammalian cell growth studies
Li, Lingyun; Migliore, Nicole; Schaefer, Eugene; Sharfstein, Susan T.; Dordick, Jonathan S.; Linhardt, Robert J.
2014-01-01
High-throughput (HT) platforms serve as cost-efficient and rapid screening methods for evaluating the effect of cell culture conditions and screening chemicals. The aim of the current study was to develop a high-throughput cell-based microarray platform to assess the effect of culture conditions on Chinese hamster ovary (CHO) cells. Specifically, growth, transgene expression and metabolism of a GS/MSX CHO cell line, which produces a therapeutic monoclonal antibody, were examined using the microarray system in conjunction with a conventional shake-flask platform in a non-proprietary medium. The microarray system consists of 60-nl spots of cells encapsulated in alginate, separated into groups via an 8-well chamber system attached to the chip. Results show that the non-proprietary medium developed allows cell growth, production and normal glycosylation of the recombinant antibody, and metabolism of the recombinant CHO cells in both the microarray and shake-flask platforms. In addition, addition of 10.3 mM glutamate to the defined base medium results in a lactate metabolism shift in the recombinant GS/MSX CHO cells in the shake-flask platform. Ultimately, the results demonstrate that the high-throughput microarray platform has the potential to be utilized for evaluating the impact of media additives on cellular processes such as cell growth, metabolism and productivity. PMID:24227746
Kawaura, Kanako; Mochida, Keiichi; Yamazaki, Yukiko; Ogihara, Yasunari
2006-04-01
In this study, we constructed a 22k wheat oligo-DNA microarray. A total of 148,676 expressed sequence tags of common wheat were collected from the database of the Wheat Genomics Consortium of Japan. These were grouped into 34,064 contigs, which were then used to design an oligonucleotide DNA microarray. Following a multistep selection of the sense strand, 21,939 60-mer oligo-DNA probes were selected for attachment on the microarray slide. This 22k oligo-DNA microarray was used to examine the transcriptional response of wheat to salt stress. More than 95% of the probes gave reproducible hybridization signals when targeted with RNAs extracted from salt-treated wheat shoots and roots. With the microarray, we identified 1,811 genes whose expressions changed more than 2-fold in response to salt. These included genes known to mediate response to salt, as well as unknown genes, and they were classified into 12 major groups by hierarchical clustering. These gene expression patterns were also confirmed by real-time reverse transcription-PCR. Many of the genes with unknown function were clustered together with genes known to be involved in response to salt stress. Thus, analysis of gene expression patterns combined with gene ontology should help identify the function of the unknown genes. Also, functional analysis of these wheat genes should provide new insight into the response to salt stress. Finally, these results indicate that the 22k oligo-DNA microarray is a reliable method for monitoring global gene expression patterns in wheat.
Identification of new autoantigens for primary biliary cirrhosis using human proteome microarrays.
Hu, Chao-Jun; Song, Guang; Huang, Wei; Liu, Guo-Zhen; Deng, Chui-Wen; Zeng, Hai-Pan; Wang, Li; Zhang, Feng-Chun; Zhang, Xuan; Jeong, Jun Seop; Blackshaw, Seth; Jiang, Li-Zhi; Zhu, Heng; Wu, Lin; Li, Yong-Zhe
2012-09-01
Primary biliary cirrhosis (PBC) is a chronic cholestatic liver disease of unknown etiology and is considered to be an autoimmune disease. Autoantibodies are important tools for accurate diagnosis of PBC. Here, we employed serum profiling analysis using a human proteome microarray composed of about 17,000 full-length unique proteins and identified 23 proteins that correlated with PBC. To validate these results, we fabricated a PBC-focused microarray with 21 of these newly identified candidates and nine additional known PBC antigens. By screening the PBC microarrays with additional cohorts of 191 PBC patients and 321 controls (43 autoimmune hepatitis, 55 hepatitis B virus, 31 hepatitis C virus, 48 rheumatoid arthritis, 45 systemic lupus erythematosus, 49 systemic sclerosis, and 50 healthy), six proteins were confirmed as novel PBC autoantigens with high sensitivities and specificities, including hexokinase-1 (isoforms I and II), Kelch-like protein 7, Kelch-like protein 12, zinc finger and BTB domain-containing protein 2, and eukaryotic translation initiation factor 2C, subunit 1. To facilitate clinical diagnosis, we developed ELISAs for Kelch-like protein 12 and zinc finger and BTB domain-containing protein 2 and tested large cohorts (297 PBC and 637 control sera) to confirm the sensitivities and specificities observed in the microarray-based assays. In conclusion, our research showed that a strategy using a high-content protein microarray combined with a smaller but more focused protein microarray can effectively identify and validate novel PBC-specific autoantigens and has the capacity to be translated to clinical diagnosis by means of an ELISA-based method.
Li, Xiang; Harwood, Valerie J.; Nayak, Bina
2016-01-01
Pathogen identification and microbial source tracking (MST) to identify sources of fecal pollution improve evaluation of water quality. They contribute to improved assessment of human health risks and remediation of pollution sources. An MST microarray was used to simultaneously detect genes for multiple pathogens and indicators of fecal pollution in freshwater, marine water, sewage-contaminated freshwater and marine water, and treated wastewater. Dead-end ultrafiltration (DEUF) was used to concentrate organisms from water samples, yielding a recovery efficiency of >95% for Escherichia coli and human polyomavirus. Whole-genome amplification (WGA) increased gene copies from ultrafiltered samples and increased the sensitivity of the microarray. Viruses (adenovirus, bocavirus, hepatitis A virus, and human polyomaviruses) were detected in sewage-contaminated samples. Pathogens such as Legionella pneumophila, Shigella flexneri, and Campylobacter fetus were detected along with genes conferring resistance to aminoglycosides, beta-lactams, and tetracycline. Nonmetric dimensional analysis of MST marker genes grouped sewage-spiked freshwater and marine samples with sewage and apart from other fecal sources. The sensitivity (percent true positives) of the microarray probes for gene targets anticipated in sewage was 51 to 57% and was lower than the specificity (percent true negatives; 79 to 81%). A linear relationship between gene copies determined by quantitative PCR and microarray fluorescence was found, indicating the semiquantitative nature of the MST microarray. These results indicate that ultrafiltration coupled with WGA provides sufficient nucleic acids for detection of viruses, bacteria, protozoa, and antibiotic resistance genes by the microarray in applications ranging from beach monitoring to risk assessment. PMID:26729716
MADGE: scalable distributed data management software for cDNA microarrays.
McIndoe, Richard A; Lanzen, Aaron; Hurtz, Kimberly
2003-01-01
The human genome project and the development of new high-throughput technologies have created unparalleled opportunities to study the mechanism of diseases, monitor the disease progression and evaluate effective therapies. Gene expression profiling is a critical tool to accomplish these goals. The use of nucleic acid microarrays to assess the gene expression of thousands of genes simultaneously has seen phenomenal growth over the past five years. Although commercial sources of microarrays exist, investigators wanting more flexibility in the genes represented on the array will turn to in-house production. The creation and use of cDNA microarrays is a complicated process that generates an enormous amount of information. Effective data management of this information is essential to efficiently access, analyze, troubleshoot and evaluate the microarray experiments. We have developed a distributable software package designed to track and store the various pieces of data generated by a cDNA microarray facility. This includes the clone collection storage data, annotation data, workflow queues, microarray data, data repositories, sample submission information, and project/investigator information. This application was designed using a 3-tier client server model. The data access layer (1st tier) contains the relational database system tuned to support a large number of transactions. The data services layer (2nd tier) is a distributed COM server with full database transaction support. The application layer (3rd tier) is an internet based user interface that contains both client and server side code for dynamic interactions with the user. This software is freely available to academic institutions and non-profit organizations at http://www.genomics.mcg.edu/niddkbtc.
Maslow, Bat-Sheva L; Budinetz, Tara; Sueldo, Carolina; Anspach, Erica; Engmann, Lawrence; Benadiva, Claudio; Nulsen, John C
2015-07-01
To compare the analysis of chromosome number from paraffin-embedded products of conception using single-nucleotide polymorphism (SNP) microarray with the recommended screening for the evaluation of couples presenting with recurrent pregnancy loss who do not have previous fetal cytogenetic data. We performed a retrospective cohort study including all women who presented for a new evaluation of recurrent pregnancy loss over a 2-year period (January 1, 2012, to December 31, 2013). All participants had at least two documented first-trimester losses and both the recommended screening tests and SNP microarray performed on at least one paraffin-embedded products of conception sample. Single-nucleotide polymorphism microarray identifies all 24 chromosomes (22 autosomes, X, and Y). Forty-two women with a total of 178 losses were included in the study. Paraffin-embedded products of conception from 62 losses were sent for SNP microarray. Single-nucleotide polymorphism microarray successfully diagnosed fetal chromosome number in 71% (44/62) of samples, of which 43% (19/44) were euploid and 57% (25/44) were noneuploid. Seven of 42 (17%) participants had abnormalities on recurrent pregnancy loss screening. The per-person detection rate for a cause of pregnancy loss was significantly higher in the SNP microarray (0.50; 95% confidence interval [CI] 0.36-0.64) compared with recurrent pregnancy loss evaluation (0.17; 95% CI 0.08-0.31) (P=.002). Participants with one or more euploid loss identified on paraffin-embedded products of conception were significantly more likely to have an abnormality on recurrent pregnancy loss screening than those with only noneuploid results (P=.028). The significance remained when controlling for age, number of losses, number of samples, and total pregnancies. 
These results suggest that SNP microarray testing of paraffin-embedded products of conception is a valuable tool for the evaluation of recurrent pregnancy loss in patients without prior fetal cytogenetic results. Recommended recurrent pregnancy loss screening was unnecessary in almost half the patients in our study. II.
Zeller, Tanja; Wild, Philipp S.; Truong, Vinh; Trégouët, David-Alexandre; Munzel, Thomas; Ziegler, Andreas; Cambien, François; Blankenberg, Stefan; Tiret, Laurence
2011-01-01
Background The hypothesis of dosage compensation of genes of the X chromosome, supported by previous microarray studies, was recently challenged by RNA-sequencing data. It was suggested that microarray studies were biased toward an over-estimation of X-linked expression levels as a consequence of the filtering of genes below the detection threshold of microarrays. Methodology/Principal Findings To investigate this hypothesis, we used microarray expression data from circulating monocytes in 1,467 individuals. In total, 25,349 and 1,156 probes were unambiguously assigned to autosomes and the X chromosome, respectively. Globally, there was a clear shift of X-linked expressions toward lower levels than autosomes. We compared the ratio of expression levels of X-linked to autosomal transcripts (X∶AA) using two different filtering methods: 1. gene expressions were filtered out using a detection threshold irrespective of gene chromosomal location (the standard method in microarrays); 2. equal proportions of genes were filtered out separately on the X and on autosomes. For a wide range of filtering proportions, the X∶AA ratio estimated with the first method was not significantly different from 1, the value expected if dosage compensation was achieved, whereas it was significantly lower than 1 with the second method, leading to the rejection of the hypothesis of dosage compensation. We further showed in simulated data that the choice of the most appropriate method was dependent on biological assumptions regarding the proportion of actively expressed genes on the X chromosome comparative to the autosomes and the extent of dosage compensation. Conclusion/Significance This study shows that the method used for filtering out lowly expressed genes in microarrays may have a major impact according to the hypothesis investigated. The hypothesis of dosage compensation of X-linked genes cannot be firmly accepted or rejected using microarray-based data. PMID:21912656
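As a toy numerical sketch of the two filtering schemes (hypothetical lognormal intensities; the filtering fraction is illustrative, though the probe counts match those in the abstract): a global detection threshold preferentially removes X-linked probes when X-linked expression is globally lower, pulling the X∶AA ratio toward 1, whereas filtering equal proportions per chromosome preserves the shift:

```python
import numpy as np

def x_aa_ratio(expr_x, expr_auto, keep_fraction=0.6, per_chromosome=True):
    """Median X-linked over median autosomal expression after filtering
    out lowly expressed probes, either per chromosome (equal proportions
    on X and autosomes) or with one global detection threshold."""
    expr_x = np.sort(np.asarray(expr_x, dtype=float))
    expr_auto = np.sort(np.asarray(expr_auto, dtype=float))
    if per_chromosome:
        x_kept = expr_x[int(len(expr_x) * (1 - keep_fraction)):]
        a_kept = expr_auto[int(len(expr_auto) * (1 - keep_fraction)):]
    else:
        pooled = np.sort(np.concatenate([expr_x, expr_auto]))
        cutoff = pooled[int(len(pooled) * (1 - keep_fraction))]
        x_kept = expr_x[expr_x >= cutoff]
        a_kept = expr_auto[expr_auto >= cutoff]
    return float(np.median(x_kept) / np.median(a_kept))

rng = np.random.default_rng(0)
auto = rng.lognormal(7.0, 1.0, 25349)   # autosomal probes
x = rng.lognormal(6.5, 1.0, 1156)       # X-linked probes, shifted lower
print(round(x_aa_ratio(x, auto, per_chromosome=False), 2))  # pulled toward 1
print(round(x_aa_ratio(x, auto, per_chromosome=True), 2))   # clearly below 1
```

Because the global cutoff is dominated by the more numerous autosomal probes, the surviving X-linked probes are only the bright tail of their distribution, which inflates the apparent X-linked median.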
Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.
Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J
2008-06-18
Currently, clustering with some form of correlation coefficient as the gene similarity metric has become a popular method for profiling genomic data. The Pearson correlation coefficient and the standard deviation (SD)-weighted correlation coefficient are the two most widely-used correlations as the similarity metrics in clustering microarray data. However, these two correlations are not optimal for analyzing replicated microarray data generated by most laboratories. An effective correlation coefficient is needed to provide statistically sufficient analysis of replicated microarray data. In this study, we describe a novel correlation coefficient, shrinkage correlation coefficient (SCC), that fully exploits the similarity between the replicated microarray experimental samples. The methodology considers both the number of replicates and the variance within each experimental group in clustering expression data, and provides a robust statistical estimation of the error of replicated microarray data. The value of SCC is revealed by its comparison with two other correlation coefficients that are currently the most widely-used (Pearson correlation coefficient and SD-weighted correlation coefficient) using statistical measures on both synthetic expression data as well as real gene expression data from Saccharomyces cerevisiae. Two leading clustering methods, hierarchical and k-means clustering were applied for the comparison. The comparison indicated that using SCC achieves better clustering performance. Applying SCC-based hierarchical clustering to the replicated microarray data obtained from germinating spores of the fern Ceratopteris richardii, we discovered two clusters of genes with shared expression patterns during spore germination. Functional analysis suggested that some of the genetic mechanisms that control germination in such diverse plant lineages as mosses and angiosperms are also conserved among ferns. 
This study shows that SCC is an alternative to the Pearson correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should be generally useful for proteomic data or other high-throughput analysis methodology.
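The SCC formula itself is not reproduced in the abstract, so the sketch below does not implement it; with toy hypothetical values, it only illustrates the replicate structure the metric is designed to exploit, contrasting Pearson correlation on condition-level replicate means with Pearson correlation on flattened per-replicate values:

```python
import numpy as np

def pearson(a, b):
    """Plain Pearson correlation coefficient of two 1-D arrays."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Two genes measured in 4 conditions with 3 replicates each
# (rows: conditions, columns: replicates; hypothetical toy values)
g1 = np.array([[1.0, 1.2, 0.9],
               [2.0, 2.1, 1.8],
               [3.1, 2.9, 3.0],
               [4.0, 4.2, 3.9]])
g2 = np.array([[0.9, 0.7, 1.1],
               [1.5, 1.8, 1.6],
               [2.6, 2.4, 2.5],
               [3.2, 3.1, 3.4]])

# Correlating condition-level means suppresses within-group replicate noise;
# correlating flattened replicates lets that noise leak into the similarity.
r_mean = pearson(g1.mean(axis=1), g2.mean(axis=1))
r_flat = pearson(g1.ravel(), g2.ravel())
print(round(r_mean, 3), round(r_flat, 3))
```

SCC goes further than simple averaging by also weighting for the number of replicates and the within-group variance, shrinking the estimate when replicates disagree; that weighting is what the plain Pearson and SD-weighted coefficients lack.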
Computational synchronization of microarray data with application to Plasmodium falciparum.
Zhao, Wei; Dauwels, Justin; Niles, Jacquin C; Cao, Jianshu
2012-06-21
Microarrays are widely used to investigate the blood stage of Plasmodium falciparum infection. Starting with synchronized cells, gene expression levels are continually measured over the 48-hour intra-erythrocytic cycle (IDC). However, the cell population gradually loses synchrony during the experiment. As a result, the microarray measurements are blurred. In this paper, we propose a generalized deconvolution approach to reconstruct the intrinsic expression pattern, and apply it to P. falciparum IDC microarray data. We develop a statistical model for the decay of synchrony among cells, and reconstruct the expression pattern through statistical inference. The proposed method can handle microarray measurements with noise and missing data. The original gene expression patterns become more apparent in the reconstructed profiles, making it easier to analyze and interpret the data. We hypothesize that reconstructed gene expression patterns represent better temporally resolved expression profiles that can be probabilistically modeled to match changes in expression level to IDC transitions. In particular, we identify transcriptionally regulated protein kinases putatively involved in regulating the P. falciparum IDC. By analyzing publicly available microarray data sets for the P. falciparum IDC, protein kinases are ranked in terms of their likelihood to be involved in regulating transitions between the ring, trophozoite and schizont developmental stages of the P. falciparum IDC. In our theoretical framework, a few protein kinases have high probability rankings, and could potentially be involved in regulating these developmental transitions. This study proposes a new methodology for extracting intrinsic expression patterns from microarray data. By applying this method to P. falciparum microarray data, several protein kinases are predicted to play a significant role in the P. falciparum IDC. 
Earlier experiments have indeed confirmed that several of these kinases are involved in this process. Overall, these results indicate that further functional analysis of these additional putative protein kinases may reveal new insights into how the P. falciparum IDC is regulated.
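The desynchronization model described above can be sketched numerically. In the illustration below (all numbers and the linear age-spread model are invented for illustration, not the paper's actual model or data), the measured profile is the intrinsic expression pattern blurred by a cell-age spread that widens over the 48-hour cycle, and the intrinsic pattern is recovered by ridge-regularized least-squares deconvolution:

```python
import math

T = 48  # hourly bins across one 48-h IDC

def blur_matrix(sigma0=1.0, rate=0.15):
    """Row-stochastic blur kernel: as synchrony decays, the cell-age
    spread sigma(t) grows linearly over the cycle (illustrative choice)."""
    K = []
    for t in range(T):
        s = sigma0 + rate * t
        row = []
        for u in range(T):
            d = min(abs(t - u), T - abs(t - u))  # circular distance (cyclic IDC)
            row.append(math.exp(-d * d / (2 * s * s)))
        z = sum(row)
        K.append([v / z for v in row])
    return K

def deconvolve(K, y, lam=1e-3, lr=0.1, iters=2000):
    """Ridge-regularized least squares by gradient descent:
    minimize ||K x - y||^2 + lam * ||x||^2."""
    x = list(y)  # start from the blurred (measured) profile
    for _ in range(iters):
        r = [sum(K[t][u] * x[u] for u in range(T)) - y[t] for t in range(T)]
        for u in range(T):
            g = 2 * sum(K[t][u] * r[t] for t in range(T)) + 2 * lam * x[u]
            x[u] -= lr * g
    return x

# Synthetic "intrinsic" pattern: a sharp expression pulse at hour 20.
x_true = [math.exp(-((t - 20) ** 2) / (2 * 2.0 ** 2)) for t in range(T)]
K = blur_matrix()
y = [sum(K[t][u] * x_true[u] for u in range(T)) for t in range(T)]  # "measured"
x_hat = deconvolve(K, y)
```

A sharp pulse blurred by the growing age spread is substantially recovered; the paper's generalized deconvolution additionally handles noise and missing time points through statistical inference.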
Schüler, Susann; Wenz, Ingrid; Wiederanders, B; Slickers, P; Ehricht, R
2006-06-12
Recent developments in DNA microarray technology led to a variety of open and closed devices and systems including high and low density microarrays for high-throughput screening applications as well as microarrays of lower density for specific diagnostic purposes. Besides predefined microarrays for specific applications, manufacturers offer the production of custom-designed microarrays adapted to customers' wishes. Array based assays demand complex procedures including several steps for sample preparation (RNA extraction, amplification and sample labelling), hybridization and detection, thus leading to a high variability between several approaches and resulting in the necessity of extensive standardization and normalization procedures. In the present work a custom-designed human proteinase DNA microarray of lower density in ArrayTube format was established. This highly economic open platform only requires standard laboratory equipment and allows the study of the molecular regulation of cell behaviour by proteinases. We established a procedure for sample preparation and hybridization and verified the array based gene expression profile by quantitative real-time PCR (QRT-PCR). Moreover, we compared the results with the well-established Affymetrix microarray. By application of standard labelling procedures with e.g. Klenow fragment exo-, single primer amplification (SPA) or In Vitro Transcription (IVT) we noticed a loss of signal conservation for some genes. To overcome this problem we developed a protocol in accordance with the SPA protocol, in which we included target specific primers designed individually for each spotted oligomer. Here we present a complete array based assay in which only the specific transcripts of interest are amplified in parallel and in a linear manner. The array represents a proof of principle which can be adapted to other species as well.
As the designed protocol for amplifying mRNA starts from as little as 100 ng total RNA, it presents an alternative method for detecting even low expressed genes by microarray experiments in a highly reproducible and sensitive manner. Preservation of signal integrity is demonstrated by QRT-PCR measurements. The small amounts of total RNA necessary for the analyses make this method applicable for investigations with limited material as in clinical samples from, for example, organ or tumour biopsies. Those are arguments in favour of the high potential of our assay compared to established procedures for amplification within the field of diagnostic expression profiling. Nevertheless, the screening character of microarray data must be mentioned, and independent methods should verify the results.
Plancoulaine, Benoît; Laurinaviciene, Aida; Meskauskas, Raimundas; Baltrusaityte, Indra; Besusparis, Justinas; Herlin, Paulette; Laurinavicius, Arvydas
2014-01-01
Digital image analysis (DIA) enables better reproducibility of immunohistochemistry (IHC) studies. Nevertheless, accuracy of the DIA methods needs to be ensured, demanding production of reference data sets. We have reported on methodology to calibrate DIA for Ki67 IHC in breast cancer tissue based on reference data obtained by stereology grid count. To produce the reference data more efficiently, we propose a digital IHC wizard generating initial cell marks to be verified by experts. Digital images of proliferation marker Ki67 IHC from 158 patients (one tissue microarray spot per patient) with an invasive ductal carcinoma of the breast were used. Manual data (mD) were obtained by marking Ki67-positive and negative tumour cells, using a stereological method for 2D object enumeration. DIA was used as an initial step in stereology grid count to generate the digital data (dD) marks by Aperio Genie and Nuclear algorithms. The dD were collected into XML files from the DIA markup images and overlaid on the original spots along with the stereology grid. The expert correction of the dD marks resulted in corrected data (cD). The percentages of Ki67-positive tumour cells per spot in the mD, dD, and cD sets were compared by single linear regression analysis. Efficiency of cD production was estimated based on manual editing effort. The percentage of Ki67-positive tumor cells was in very good agreement in the mD, dD, and cD sets: regression of cD from dD (R2=0.92) reflects the impact of the expert editing the dD as well as accuracy of the DIA used; regression of the cD from the mD (R2=0.94) represents the consistency of the DIA-assisted ground truth (cD) with the manual procedure. Nevertheless, the accuracy of detection of individual tumour cells was much lower: on average, 18 and 219 marks per spot were edited due to the Genie and Nuclear algorithm errors, respectively. The DIA-assisted cD production in our experiment saved approximately 2/3 of manual marking.
Digital IHC wizard enabled DIA-assisted stereology to produce reference data in a consistent and efficient way. It can provide a quality control measure for appraising the accuracy of the DIA steps.
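The agreement statistic used above is the coefficient of determination from a single linear regression. A minimal sketch (the per-spot Ki67 percentages below are made up for illustration, not the study's data):

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Hypothetical per-spot Ki67-positive percentages from two scoring pipelines
manual = [12.0, 25.5, 40.0, 8.0, 60.0, 33.0]
assisted = [13.1, 24.0, 41.5, 9.2, 58.0, 35.0]
r2 = r_squared(manual, assisted)
```

An R2 near 1 indicates the two scoring pipelines agree on the per-spot percentages, exactly the comparison made between the mD, dD, and cD sets.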
PMID:25565221
Koo, B K; O'Connell, P E
2006-04-01
The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.
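The site-specific optimisation step can be illustrated as a weighted-sum choice among land use alternatives for one grid cell. All alternative names and scores below are invented for illustration; the study derives them from SHETRAN, DRASTIC, and agronomic modelling:

```python
# Hypothetical per-alternative scores for one grid cell:
# (name, nitrate leaching in kg N/ha/yr, gross margin in GBP/ha)
alternatives = [
    ("winter wheat",  45.0, 620.0),
    ("spring barley", 30.0, 480.0),
    ("oilseed rape",  55.0, 650.0),
    ("set-aside",      5.0, 150.0),
    ("grazed grass",  20.0, 300.0),
]

def best_alternative(alts, w_env, w_econ):
    """Weighted-sum compromise: normalise each criterion to [0, 1]
    (leaching is a cost, margin a benefit) and pick the best score."""
    leach = [a[1] for a in alts]
    margin = [a[2] for a in alts]
    lmin, lmax = min(leach), max(leach)
    mmin, mmax = min(margin), max(margin)
    def score(a):
        env = 1 - (a[1] - lmin) / (lmax - lmin)   # less leaching is better
        econ = (a[2] - mmin) / (mmax - mmin)      # higher margin is better
        return w_env * env + w_econ * econ
    return max(alts, key=score)[0]

env_choice = best_alternative(alternatives, 0.8, 0.2)   # environment-weighted
econ_choice = best_alternative(alternatives, 0.2, 0.8)  # economy-weighted
```

Varying the criterion weights per cell reproduces the paper's idea of generating different catchment-scale scenarios from different optimisation schemes.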
Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling
NASA Astrophysics Data System (ADS)
Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom
2018-03-01
Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in spatial and temporal scales of cryospheric, atmospheric and oceanic components. To overcome this difficulty, we apply two different approaches for the incorporation of ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished by resampling, interpolating and assigning to the AISM grid points the annually averaged values of the air surface temperature and precipitation fields generated by the AOGCM. Surface melting, which takes place mainly on the margins of the Antarctic peninsula and on ice shelves fringing the continent, is currently ignored. AISM returns anomalies of surface topography back to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically-driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of surface mass balance and air surface temperature to force the GrISM, and freshwater run-off to force thermohaline circulation in the oceanic block of the AOGCM. Because the coupling procedure for the GrIS is considerably more complex than that for the AIS, the paper mostly focuses on Greenland.
GOCE gravity gradient data for lithospheric modeling and geophysical exploration research
NASA Astrophysics Data System (ADS)
Bouman, Johannes; Ebbing, Jörg; Meekes, Sjef; Lieb, Verena; Fuchs, Martin; Schmidt, Michael; Fattah, Rader Abdul; Gradmann, Sofie; Haagmans, Roger
2013-04-01
GOCE gravity gradient data can improve modeling of the Earth's lithosphere and upper mantle, contributing to a better understanding of the Earth's dynamic processes. We present a method to compute user-friendly GOCE gravity gradient grids at mean satellite altitude, which are easier to use than the original GOCE gradients that are given in a rotating instrument frame. In addition, the GOCE gradients are combined with terrestrial gravity data to obtain high resolution grids of gravity field information close to the Earth's surface. We also present a case study for the North-East Atlantic margin, where we analyze the use of satellite gravity gradients by comparison with a well-constrained 3D density model that provides a detailed picture from the upper mantle to the top basement (base of sediments). We demonstrate how gravity gradients can increase confidence in the modeled structures by calculating the sensitivity of model geometry and applied densities at different observation heights, e.g. satellite height and near surface. Finally, this sensitivity analysis is used as input to study the Rub' al Khali desert in Saudi Arabia. In terms of modeling and data availability this is a frontier area. Here gravity gradient data help especially to set up the regional crustal structure, which in turn allows refinement of the sedimentary thickness estimates and the regional heat-flow pattern. This can have implications for hydrocarbon exploration in the region.
Quantification of effective plant rooting depth: advancing global hydrological modelling
NASA Astrophysics Data System (ADS)
Yang, Y.; Donohue, R. J.; McVicar, T.
2017-12-01
Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.
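The core of the BCP model is Choudhury's form of the Budyko curve, which estimates mean annual actual evapotranspiration E from precipitation P and potential evapotranspiration E0 via a catchment parameter n (into which properties such as rooting depth enter). A minimal sketch with an illustrative n value (the paper's n is derived from its Zr estimates, not assumed):

```python
def choudhury_e(p, e0, n=1.8):
    """Choudhury's form of the Budyko curve: mean annual actual
    evapotranspiration from precipitation p and potential ET e0 (mm/yr).
    The exponent n absorbs catchment properties, rooting depth among them."""
    return p * e0 / (p ** n + e0 ** n) ** (1.0 / n)

# Behaviour at the limits: E -> P when water-limited (e0 >> p),
# and E -> E0 when energy-limited (p >> e0).
e_mid = choudhury_e(800.0, 1200.0)  # intermediate case, mm/yr
```

The estimate always lies below both the water limit P and the energy limit E0, which is what makes the curve a convenient closure for grid-box or catchment water balances.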
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Willse, Alan R.
The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize analysis of sets of microarray images. This tool provides several methods to identify and quantify spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.
High-Throughput Nano-Biofilm Microarray for Antifungal Drug Discovery
2013-06-25
Anand Srinivasan, Kai P. Leung, Jose L. Lopez-Ribot, Anand K. Ramasubramanian; Departments of Biomedical Engineering and Biology and South Texas Center for Emerging Infectious Diseases, The University of Texas at San… …of the opportunistic fungal pathogen Candida albicans on a microarray platform. The microarray consists of 1,200 individual cultures of 30 nl of C…
A Protein Microarray ELISA for the Detection of Botulinum neurotoxin A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varnum, Susan M.
An enzyme-linked immunosorbent assay (ELISA) microarray was developed for the specific and sensitive detection of botulinum neurotoxin A (BoNT/A), using high-affinity recombinant monoclonal antibodies against the receptor binding domain of the heavy chain of BoNT/A. The ELISA microarray assay, because of its sensitivity, offers a screening test with detection limits comparable to the mouse bioassay, with results available in hours instead of days.
Applications of nanotechnology, next generation sequencing and microarrays in biomedical research.
Elingaramil, Sauli; Li, Xiaolong; He, Nongyue
2013-07-01
Next-generation sequencing technologies, microarrays and advances in bionanotechnology have had an enormous impact on research within a short time frame. This impact appears certain to increase further as many biomedical institutions are now acquiring these powerful new technologies. Beyond conventional sampling of genome content, wide-ranging applications are rapidly evolving for next-generation sequencing, microarrays and nanotechnology. To date, these technologies have been applied in a variety of contexts, including whole-genome sequencing, targeted resequencing, discovery of transcription factor binding sites, noncoding RNA expression profiling and molecular diagnostics. This paper thus discusses current applications of nanotechnology, next-generation sequencing technologies and microarrays in biomedical research and highlights the transforming potential these technologies offer.
Polyadenylation state microarray (PASTA) analysis.
Beilharz, Traude H; Preiss, Thomas
2011-01-01
Nearly all eukaryotic mRNAs terminate in a poly(A) tail that serves important roles in mRNA utilization. In the cytoplasm, the poly(A) tail promotes both mRNA stability and translation, and these functions are frequently regulated through changes in tail length. To identify the scope of poly(A) tail length control in a transcriptome, we developed the polyadenylation state microarray (PASTA) method. It involves the purification of mRNA based on poly(A) tail length using thermal elution from poly(U) sepharose, followed by microarray analysis of the resulting fractions. In this chapter we detail our PASTA approach and describe some methods for bulk and mRNA-specific poly(A) tail length measurements of use to monitor the procedure and independently verify the microarray data.
High-throughput screening in two dimensions: binding intensity and off-rate on a peptide microarray.
Greving, Matthew P; Belcher, Paul E; Cox, Conor D; Daniel, Douglas; Diehnelt, Chris W; Woodbury, Neal W
2010-07-01
We report a high-throughput two-dimensional microarray-based screen, incorporating both target binding intensity and off-rate, which can be used to analyze thousands of compounds in a single binding assay. Relative binding intensities and time-resolved dissociation are measured for labeled tumor necrosis factor alpha (TNF-alpha) bound to a peptide microarray. The time-resolved dissociation is fitted to a one-component exponential decay model, from which relative dissociation rates are determined for all peptides with binding intensities above background. We show that most peptides with the slowest off-rates on the microarray also have the slowest off-rates when measured by surface plasmon resonance (SPR).
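The one-component decay model lends itself to a small worked example. In this sketch (simulated intensities, not the study's measurements), the off-rate is recovered by a log-linear least-squares fit, a simplification that assumes the baseline has already been subtracted rather than fitting it as a free parameter:

```python
import math
import random

def fit_off_rate(times, signal):
    """Fit I(t) = I0 * exp(-k_off * t) by least squares on log(I),
    returning (k_off, I0). Assumes background-subtracted intensities."""
    y = [math.log(s) for s in signal]
    n = len(times)
    mx, my = sum(times) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in times)
    sxy = sum((a - mx) * (b - my) for a, b in zip(times, y))
    slope = sxy / sxx
    return -slope, math.exp(my - slope * mx)

# Simulated dissociation trace for one array spot (hypothetical numbers)
random.seed(0)
t = [i * 10.0 for i in range(30)]     # seconds
true_k = 0.01                          # 1/s
i_t = [100.0 * math.exp(-true_k * s) * (1 + random.gauss(0, 0.01)) for s in t]
k_off, i0 = fit_off_rate(t, i_t)
```

Ranking thousands of spots by the fitted k_off is what turns a single time-resolved image stack into the second screening dimension described above.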
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, Meena S., E-mail: meena.moran@yale.edu; Yang Qifeng; Department of Breast Surgery, Qilu Hospital, Shandong University, Jinan, People's Republic of China
2011-12-01
Purpose: Vascular endothelial growth factor (VEGF) is an important protein involved in the process of angiogenesis that has been found to correlate with relapse-free and overall survival in breast cancer, predominantly in locally advanced and metastatic disease. A paucity of data is available on the prognostic implications of VEGF in early-stage breast cancer; specifically, its prognostic value for local relapse after breast-conserving therapy (BCT) is largely unknown. The purpose of our study was to assess VEGF expression in a cohort of early-stage breast cancer patients treated with BCT and to correlate the clinical and pathologic features and outcomes with overexpression of VEGF. Methods and Materials: After obtaining institutional review board approval, the paraffin specimens of 368 patients with early-stage breast cancer treated with BCT between 1975 and 2005 were constructed into tissue microarrays with twofold redundancy. The tissue microarrays were stained for VEGF and read by a trained pathologist, who was unaware of the clinical details, as positive or negative according to the standard guidelines. The clinical and pathologic data, long-term outcomes, and results of VEGF staining were analyzed. Results: The median follow-up for the entire cohort was 6.5 years. VEGF expression was positive in 56 (15%) of the 368 patients. Although VEGF expression did not correlate with age at diagnosis, tumor size, nodal status, histologic type, family history, estrogen receptor/progesterone receptor status, or HER-2 status, a trend was seen toward increased VEGF expression in the black cohort (26% black vs. 13% white, p = .068). Within the margin-negative cohort, VEGF did not predict for local relapse-free survival (RFS) (96% vs. 95%), nodal RFS (100% vs. 100%), distant metastasis-free survival (91% vs. 92%), overall survival (92% vs. 97%), respectively (all p >.05).
Subset analysis revealed that VEGF was highly predictive of local RFS in node-positive, margin-negative patients (86% vs. 100%, p = .029) on univariate analysis, but it did not retain its significance on multivariate analysis (hazard ratio, 2.52; 95% confidence interval, 0.804-7.920, p = .113). No other subgroups were identified in which a correlation was found between VEGF expression and local relapse. Conclusion: To our knowledge, our study is the first to assess the prognostic value of VEGF with the endpoint of local relapse in early-stage breast cancer treated with BCT, an important question given the recent increased use of targeted antiangiogenic agents in early-stage breast cancer. Our study results suggest that VEGF is not an independent predictor of local RFS after BCT, but additional, larger studies specifically analyzing the endpoint of VEGF and local relapse are warranted.
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
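The contrast between grid marginalisation and Monte Carlo sampling can be sketched on a toy two-parameter χ² surface. The cited algorithm is the affine-invariant ensemble sampler (emcee); the stand-in below uses plain Metropolis sampling with the same purpose, and the surface, parameter values, and step sizes are all invented for illustration:

```python
import math
import random

def chi2(theta12, dm2):
    """Toy chi-square surface with a minimum at invented 'true' mixing
    parameters (a stand-in for a real fit to oscillation data)."""
    return ((theta12 - 0.58) / 0.02) ** 2 + ((dm2 - 7.5) / 0.2) ** 2

def metropolis(n_steps, step=(0.01, 0.1), seed=1):
    """Plain Metropolis sampling of the likelihood exp(-chi2 / 2):
    instead of evaluating chi2 on a dense grid, the walker concentrates
    evaluations in the allowed region of parameter space."""
    rng = random.Random(seed)
    x = (0.5, 7.0)  # arbitrary starting point
    samples = []
    for _ in range(n_steps):
        prop = (x[0] + rng.gauss(0, step[0]), x[1] + rng.gauss(0, step[1]))
        # Accept with probability min(1, exp(-(chi2_new - chi2_old) / 2))
        if rng.random() < math.exp(min(0.0, (chi2(*x) - chi2(*prop)) / 2)):
            x = prop
        samples.append(x)
    return samples

chain = metropolis(20000)
burn = chain[5000:]  # discard burn-in
mean_theta = sum(s[0] for s in burn) / len(burn)
mean_dm2 = sum(s[1] for s in burn) / len(burn)
```

The sample means and spreads estimate the marginalised best-fit values and allowed regions; the cost grows far more slowly with the number of parameters than a full grid scan, which is the efficiency argument made above.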
Robust optimization-based DC optimal power flow for managing wind generation uncertainty
NASA Astrophysics Data System (ADS)
Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn
2012-11-01
Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
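The hedging idea can be illustrated on a one-bus toy problem (all prices, limits, and the interval-uncertainty formulation below are invented for illustration; the paper's model is a full network DCOPF producing LMPs): choose the dispatch that minimises cost under the worst wind outcome within a forecast interval.

```python
# One-bus toy: demand D must be met by dispatchable generation g plus
# wind w; imbalance is penalised. A robust choice of g hedges against
# the worst wind outcome in the interval [w_lo, w_hi].
D = 100.0                # MW demand
w_lo, w_hi = 10.0, 40.0  # wind forecast interval, MW
c_gen = 30.0             # $/MWh, dispatchable unit
c_short = 200.0          # $/MWh penalty for unserved load
c_spill = 5.0            # $/MWh penalty for curtailed surplus

def cost(g, w):
    imbalance = D - g - w
    penalty = c_short * imbalance if imbalance > 0 else -c_spill * imbalance
    return c_gen * g + penalty

def robust_dispatch(step=0.5):
    """Minimise the worst case over the wind interval by brute force
    (interval robust optimisation; the worst case of a convex cost over
    an interval occurs at an endpoint, so two evaluations suffice)."""
    grid = [i * step for i in range(int(D / step) + 1)]
    return min(grid, key=lambda g: max(cost(g, w_lo), cost(g, w_hi)))

g_star = robust_dispatch()
```

The robust dispatch lands between the two deterministic answers (dispatch for low wind vs. high wind), trading a little expected cost for protection against the costly shortage outcome.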
Eminaga, Okyaz; Wei, Wei; Hawley, Sarah J; Auman, Heidi; Newcomb, Lisa F; Simko, Jeff; Hurtado-Coll, Antonio; Troyer, Dean A; Carroll, Peter R; Gleave, Martin E; Lin, Daniel W; Nelson, Peter S; Thompson, Ian M; True, Lawrence D; McKenney, Jesse K; Feng, Ziding; Fazli, Ladan; Brooks, James D
2016-01-01
The uncertainties inherent in clinical measures of prostate cancer (CaP) aggressiveness endorse the investigation of clinically validated tissue biomarkers. MUC1 expression has been previously reported to independently predict aggressive localized prostate cancer. We used a large cohort to validate whether MUC1 protein levels measured by immunohistochemistry (IHC) predict aggressive cancer, recurrence and survival outcomes after radical prostatectomy independent of clinical and pathological parameters. MUC1 IHC was performed on a multi-institutional tissue microarray (TMA) resource including 1,326 men with a median follow-up of 5 years. Associations with clinical and pathological parameters were tested by the Chi-square test and the Wilcoxon rank sum test. Relationships with outcome were assessed with univariable and multivariable Cox proportional hazard models and the Log-rank test. The presence of MUC1 expression was significantly associated with extracapsular extension and higher Gleason score, but not with seminal vesicle invasion, age, positive surgical margins or pre-operative serum PSA levels. In univariable analyses, positive MUC1 staining was significantly associated with a worse recurrence free survival (RFS) (HR: 1.24, CI 1.03-1.49, P = 0.02), although not with disease specific survival (DSS, P>0.5). On multivariable analyses, the presence of positive surgical margins, extracapsular extension, seminal vesicle invasion, as well as higher pre-operative PSA and increasing Gleason score were independently associated with RFS, while MUC1 expression was not. Positive MUC1 expression was not independently associated with disease specific survival (DSS), but was weakly associated with overall survival (OS). In our large, rigorously designed validation cohort, MUC1 protein expression was associated with adverse pathological features, although it was not an independent predictor of outcome after radical prostatectomy.
Trujillo, Kristina A.; Heaphy, Christopher M.; Mai, Minh; Vargas, Keith M.; Jones, Anna C.; Vo, Phung; Butler, Kimberly S.; Joste, Nancy E.; Bisoffi, Marco; Griffith, Jeffrey K
2011-01-01
Previous studies have shown that a field of genetically altered but histologically normal tissue extends 1 cm or more from the margins of human breast tumors. The extent, composition and biological significance of this field are only partially understood, but the molecular alterations in affected cells could provide mechanisms for limitless replicative capacity, genomic instability and a microenvironment that supports tumor initiation and progression. We demonstrate by microarray, qRT-PCR and immunohistochemistry a signature of differential gene expression that discriminates between patient-matched, tumor-adjacent histologically normal breast tissues located 1 cm and 5 cm from the margins of breast adenocarcinomas (TAHN-1 and TAHN-5, respectively). The signature includes genes involved in extracellular matrix remodeling, wound healing, fibrosis and epithelial to mesenchymal transition (EMT). Myofibroblasts, which are mediators of wound healing and fibrosis, and intra-lobular fibroblasts expressing MMP2, SPARC, TGF-β3, which are inducers of EMT, were both prevalent in TAHN-1 tissues, sparse in TAHN-5 tissues, and absent in normal tissues from reduction mammoplasty. Accordingly, EMT markers S100A4 and vimentin were elevated in both luminal and myoepithelial cells, and EMT markers α-smooth muscle actin and SNAIL were elevated in luminal epithelial cells of TAHN-1 tissues. These results identify cellular processes that are differentially activated between TAHN-1 and TAHN-5 breast tissues, implicate myofibroblasts as likely mediators of these processes, provide evidence that EMT is occurring in histologically normal tissues within the affected field and identify candidate biomarkers to investigate whether or how field cancerization contributes to the development of primary or recurrent breast tumors. PMID:21105047
Zadka, Łukasz; Kulus, Michał J; Kurnol, Krzysztof; Piotrowska, Aleksandra; Glatzel-Plucińska, Natalia; Jurek, Tomasz; Czuba, Magdalena; Nowak, Aleksandra; Chabowski, Mariusz; Janczak, Dariusz; Dzięgiel, Piotr
2018-05-03
Despite the widely described role of IL10 in immune response regulation during carcinogenesis, there is no established model describing the role of its receptor. The aim of this study is to elucidate the relationship between the subunit alpha of IL10 receptor (IL10RA) in the pathogenesis of colorectal cancer (CRC). The study was conducted on archived paraffin blocks of 125 CRC patients, from which tissue microarrays (TMA) were made. These were subsequently used for immunohistochemistry to assess the expression of IL10RA, IL10, phosphorylated STAT3 (pSTAT3) and the Ki67 proliferation index. The intensity of both reactions was assessed by independent researchers using two approaches: digital image analysis and the Remmele and Stegner score (IRS). To assess the possible correlations between the two investigated markers and the clinical stage of CRC, the Pearson correlation coefficient was calculated. The expression of aforementioned proteins was assessed in tumor samples, healthy surgical margins and healthy control samples, obtained from cadavers during autopsy from the Department of Forensic Medicine. Statistical analysis was conducted using Statistica ver. 13.05 software. The final analysis included 105 CRC patients with complete clinical and pathological data, for whom the expressions of IL10RA, IL10, pSTAT3 and Ki67 were assessed using two independent methods. There was a positive correlation between the IL10RA expression and Ki-67 proliferation index (R = 0.63, p < 0.001) and a negative correlation between the IL10RA expression and the clinical stage of CRC (R = -0.21, p = 0.022). IL10RA correlated positively with pSTAT3 and IL10 in neoplastic tissue and tumor margin (with p < 0.01 for all correlations). We also observed a significantly higher expression of IL10RA in healthy surgical margins when compared to the actual tumor (p = 0.023, the paired t-test). 
The expression of IL10 was significantly higher in tumors than in healthy intestinal endothelium from the control group. The correlations between the expression of IL10RA and the proliferation index or the clinical stage of CRC seem to confirm the importance of IL10RA in the pathogenesis of CRC. The higher expression of IL10RA in healthy surgical margins than in the tumor itself may suggest that IL10RA plays a role in regulating immune response to the neoplasm.
Rapid Microarray Detection of DNA and Proteins in Microliter Volumes with SPR Imaging Measurements
Seefeld, Ting Hu; Zhou, Wen-Juan; Corn, Robert M.
2011-01-01
A four chamber microfluidic biochip is fabricated for the rapid detection of multiple proteins and nucleic acids from microliter volume samples with the technique of surface plasmon resonance imaging (SPRI). The 18 mm × 18 mm biochip consists of four 3 μL microfluidic chambers attached to an SF10 glass substrate, each of which contains three individually addressable SPRI gold thin film microarray elements. The twelve element (4 × 3) SPRI microarray consists of gold thin film spots (1 mm2 area; 45 nm thickness) each in individually addressable 0.5 μL volume microchannels. Microarrays of single-stranded DNA and RNA (ssDNA and ssRNA respectively) are fabricated by either chemical and/or enzymatic attachment reactions in these microchannels; the SPRI microarrays are then used to detect femtomole amounts (nanomolar concentrations) of DNA and proteins (single stranded DNA binding protein and thrombin via aptamer-protein bioaffinity interactions). Microarrays of ssRNA microarray elements were also used for the ultrasensitive detection of zeptomole amounts (femtomolar concentrations) of DNA via the technique of RNase H-amplified SPRI. Enzymatic removal of ssRNA from the surface due to the hybridization adsorption of target ssDNA is detected as a reflectivity decrease in the SPR imaging measurements. The observed reflectivity loss was proportional to the log of the target ssDNA concentration with a detection limit of 10 fM or 30 zeptomoles (18,000 molecules). This enzymatic amplified ssDNA detection method is not limited by diffusion of ssDNA to the interface, and thus is extremely fast, requiring only 200 seconds in the microliter volume format. PMID:21488682
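The stated detection limit is internally consistent, as a quick unit calculation for 10 fM in one 3 μL chamber shows:

```python
# Sanity-check the stated detection limit: 10 fM in a 3 uL chamber.
N_A = 6.022e23            # Avogadro's number, 1/mol
conc = 10e-15             # 10 fM, in mol/L
volume = 3e-6             # 3 uL chamber, in litres
moles = conc * volume     # amount of target DNA in the chamber, mol
molecules = moles * N_A   # number of target molecules
```

This reproduces the abstract's figures: 3e-20 mol is 30 zeptomoles, or roughly 18,000 molecules.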
Ling, Zhi-Qiang; Wang, Yi; Mukaisho, Kenichi; Hattori, Takanori; Tatsuta, Takeshi; Ge, Ming-Hua; Jin, Li; Mao, Wei-Min; Sugihara, Hiroyuki
2010-06-01
Tests of differentially expressed genes (DEGs) from microarray experiments are based on the null hypothesis that genes that are irrelevant to the phenotype/stimulus are expressed equally in the target and control samples. However, this strict hypothesis is not always true, as there can be several transcriptomic background differences between target and control samples, including different cell/tissue types, different cell cycle stages and different biological donors. These differences lead to increased false positives, which have little biological/medical significance. In this article, we propose a statistical framework to identify DEGs between target and control samples from expression microarray data while allowing for transcriptomic background differences between these samples, by introducing a modified null hypothesis that the gene expression background difference is normally distributed. We use an iterative procedure to perform robust estimation of the null hypothesis and identify DEGs as outliers. We evaluated our method using our own triplicate microarray experiment, followed by validations with reverse transcription-polymerase chain reaction (RT-PCR) and on the MicroArray Quality Control dataset. The evaluations suggest that our technique (i) produces fewer false-positive and false-negative results, as measured by the degree of agreement with RT-PCR of the same samples, (ii) can be applied to different microarray platforms and results in better reproducibility as measured by the degree of DEG identification concordance both intra- and inter-platform and (iii) can be applied efficiently with only a few microarray replicates. Based on these evaluations, we propose that this method not only identifies more reliable and biologically/medically significant DEGs, but also reduces the power-cost tradeoff problem in the microarray field. Source code and binaries are freely available for download at http://comonca.org.cn/fdca/resources/softwares/deg.zip.
Carlson, Ruth I; Cattet, Marc R L; Sarauer, Bryan L; Nielsen, Scott E; Boulanger, John; Stenhouse, Gordon B; Janz, David M
2016-01-01
A novel antibody-based protein microarray was developed that simultaneously determines expression of 31 stress-associated proteins in skin samples collected from free-ranging grizzly bears (Ursus arctos) in Alberta, Canada. The microarray determines proteins belonging to four broad functional categories associated with stress physiology: hypothalamic-pituitary-adrenal axis proteins, apoptosis/cell cycle proteins, cellular stress/proteotoxicity proteins and oxidative stress/inflammation proteins. Small skin samples (50-100 mg) were collected from captured bears using biopsy punches. Proteins were isolated and labelled with fluorescent dyes, with labelled protein homogenates loaded onto microarrays to hybridize with antibodies. Relative protein expression was determined by comparison with a pooled standard skin sample. The assay was sensitive, requiring 80 µg of protein per sample to be run in triplicate on the microarray. Intra-array and inter-array coefficients of variation for individual proteins were generally <10 and <15%, respectively. With one exception, there were no significant differences in protein expression among skin samples collected from the neck, forelimb, hindlimb and ear in a subsample of n = 4 bears. This suggests that remotely delivered biopsy darts could be used in future sampling. Using generalized linear mixed models, certain proteins within each functional category demonstrated altered expression with respect to differences in year, season, geographical sampling location within Alberta and bear biological parameters, suggesting that these general variables may influence expression of specific proteins in the microarray. Our goal is to apply the protein microarray as a conservation physiology tool that can detect, evaluate and monitor physiological stress in grizzly bears and other species at risk over time in response to environmental change.
Karyotype versus microarray testing for genetic abnormalities after stillbirth.
Reddy, Uma M; Page, Grier P; Saade, George R; Silver, Robert M; Thorsten, Vanessa R; Parker, Corette B; Pinar, Halit; Willinger, Marian; Stoll, Barbara J; Heim-Hall, Josefine; Varner, Michael W; Goldenberg, Robert L; Bukowski, Radek; Wapner, Ronald J; Drews-Botsch, Carolyn D; O'Brien, Barbara M; Dudley, Donald J; Levy, Brynn
2012-12-06
Genetic abnormalities have been associated with 6 to 13% of stillbirths, but the true prevalence may be higher. Unlike karyotype analysis, microarray analysis does not require live cells, and it detects small deletions and duplications called copy-number variants. The Stillbirth Collaborative Research Network conducted a population-based study of stillbirth in five geographic catchment areas. Standardized postmortem examinations and karyotype analyses were performed. A single-nucleotide polymorphism array was used to detect copy-number variants of at least 500 kb in placental or fetal tissue. Variants that were not identified in any of three databases of apparently unaffected persons were then classified into three groups: probably benign, clinical significance unknown, or pathogenic. We compared the results of karyotype and microarray analyses of samples obtained after delivery. In our analysis of samples from 532 stillbirths, microarray analysis yielded results more often than did karyotype analysis (87.4% vs. 70.5%, P<0.001) and provided better detection of genetic abnormalities (aneuploidy or pathogenic copy-number variants, 8.3% vs. 5.8%; P=0.007). Microarray analysis also identified more genetic abnormalities among 443 antepartum stillbirths (8.8% vs. 6.5%, P=0.02) and 67 stillbirths with congenital anomalies (29.9% vs. 19.4%, P=0.008). As compared with karyotype analysis, microarray analysis provided a relative increase in the diagnosis of genetic abnormalities of 41.9% in all stillbirths, 34.5% in antepartum stillbirths, and 53.8% in stillbirths with anomalies. Microarray analysis is more likely than karyotype analysis to provide a genetic diagnosis, primarily because of its success with nonviable tissue, and is especially valuable in analyses of stillbirths with congenital anomalies or in cases in which karyotype results cannot be obtained. (Funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development.).
Chavan, Shweta S; Bauer, Michael A; Peterson, Erich A; Heuck, Christoph J; Johann, Donald J
2013-01-01
Transcriptome analysis by microarrays has produced important advances in biomedicine. For instance, in multiple myeloma (MM), microarray approaches led to the development of an effective disease subtyping via cluster assignment, and a 70-gene risk score. Both enabled an improved molecular understanding of MM and have provided prognostic information for the purposes of clinical management. Many researchers are now transitioning to Next Generation Sequencing (NGS) approaches, and RNA-seq in particular, due to its discovery-based nature, improved sensitivity, and dynamic range. Additionally, RNA-seq allows for the analysis of gene isoforms, splice variants, and novel gene fusions. Given the voluminous amounts of historical microarray data, there is now a need to associate and integrate microarray and RNA-seq data via advanced bioinformatic approaches. Custom software was developed following a model-view-controller (MVC) approach to integrate Affymetrix probe set IDs and gene annotation information from a variety of sources. The tool employs an assortment of strategies to integrate, cross-reference, and associate microarray and RNA-seq datasets. Output from a variety of transcriptome reconstruction and quantitation tools (e.g., Cufflinks) can be directly integrated and/or associated with Affymetrix probe set data, as well as with the necessary gene identifiers and/or symbols from a diversity of sources. Strategies are employed to maximize the annotation and cross-referencing process. Custom gene sets (e.g., the MM 70-gene risk score (GEP-70)) can be specified, and the tool can be directly assimilated into an RNA-seq pipeline. This novel bioinformatic approach facilitates both annotation and association of historic microarray data in conjunction with richer RNA-seq data, and is now assisting the study of MM cancer biology.
Addressable droplet microarrays for single cell protein analysis.
Salehi-Reyhani, Ali; Burgin, Edward; Ces, Oscar; Willison, Keith R; Klug, David R
2014-11-07
Addressable droplet microarrays are potentially attractive as a way to achieve miniaturised, reduced volume, high sensitivity analyses without the need to fabricate microfluidic devices or small volume chambers. We report a practical method for producing oil-encapsulated addressable droplet microarrays which can be used for such analyses. To demonstrate their utility, we undertake a series of single cell analyses, to determine the variation in copy number of p53 proteins in cells of a human cancer cell line.
The future of microarray technology: networking the genome search.
D'Ambrosio, C; Gatta, L; Bonini, S
2005-10-01
In recent years microarray technology has been increasingly used in both basic and clinical research, providing substantial information for a better understanding of genome-environment interactions responsible for diseases, as well as for their diagnosis and treatment. However, in genomic research using microarray technology there are several unresolved issues, including scientific, ethical and legal issues. Networks of excellence like GA(2)LEN may represent the best approach for teaching, cost reduction, data repositories, and functional studies implementation.
Temperature-controlled microintaglio printing for high-resolution micropatterning of RNA molecules.
Kobayashi, Ryo; Biyani, Manish; Ueno, Shingo; Kumal, Subhashini Raj; Kuramochi, Hiromi; Ichiki, Takanori
2015-05-15
We have developed an advanced microintaglio printing method for fabricating fine and high-density micropatterns and applied it to the microarraying of RNA molecules. The microintaglio printing of RNA reported here is based on the hybridization of RNA with immobilized complementary DNA probes. The hybridization was controlled by switching the RNA conformation via the temperature, and an RNA microarray with a spot diameter of 1.5 µm and a density of 40,000 spots/mm(2) with high contrast was successfully fabricated. Notably, no size effects were observed in the uniformity of patterned signals over a range of microarray feature sizes spanning one order of magnitude. Additionally, we have developed a microintaglio printing method for transcribing RNA microarrays on demand using DNA-immobilized magnetic beads. The beads were arrayed on wells fabricated on a printing mold, and the wells were filled with in vitro transcription reagent and sealed with a DNA-immobilized glass substrate. Subsequently, RNA was synthesized in situ using the bead-immobilized DNA as a template and printed onto the substrate via hybridization. Since the microintaglio printing of RNA using DNA-immobilized beads enables the fabrication of a microarray of spots composed of multiple RNA sequences, it will be possible to screen or analyze RNA functions using an RNA microarray fabricated by temperature-controlled microintaglio printing (TC-µIP). Copyright © 2014 Elsevier B.V. All rights reserved.
Fully automated analysis of multi-resolution four-channel micro-array genotyping data
NASA Astrophysics Data System (ADS)
Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.
2006-03-01
We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron resolution, with sizes up to 80 MB per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward replacing traditional medical treatments with personalized genetic medicine, i.e., individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turnaround timeline compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.
Quantifying protein-protein interactions in high throughput using protein domain microarrays.
Kaushansky, Alexis; Allen, John E; Gordus, Andrew; Stiffler, Michael A; Karp, Ethan S; Chang, Bryan H; MacBeath, Gavin
2010-04-01
Protein microarrays provide an efficient way to identify and quantify protein-protein interactions in high throughput. One drawback of this technique is that proteins show a broad range of physicochemical properties and are often difficult to produce recombinantly. To circumvent these problems, we have focused on families of protein interaction domains. Here we provide protocols for constructing microarrays of protein interaction domains in individual wells of 96-well microtiter plates, and for quantifying domain-peptide interactions in high throughput using fluorescently labeled synthetic peptides. As specific examples, we will describe the construction of microarrays of virtually every human Src homology 2 (SH2) and phosphotyrosine binding (PTB) domain, as well as microarrays of mouse PDZ domains, all produced recombinantly in Escherichia coli. For domains that mediate high-affinity interactions, such as SH2 and PTB domains, equilibrium dissociation constants (K(D)s) for their peptide ligands can be measured directly on arrays by obtaining saturation binding curves. For weaker binding domains, such as PDZ domains, arrays are best used to identify candidate interactions, which are then retested and quantified by fluorescence polarization. Overall, protein domain microarrays provide the ability to rapidly identify and quantify protein-ligand interactions with minimal sample consumption. Because entire domain families can be interrogated simultaneously, they provide a powerful way to assess binding selectivity on a proteome-wide scale and provide an unbiased perspective on the connectivity of protein-protein interaction networks.
Integrative missing value estimation for microarray data.
Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine
2006-10-12
Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking the reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Squares (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
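To illustrate the general idea of neighbor-gene imputation that LLS-style methods build on, here is a minimal sketch. It is a toy nearest-neighbor average within a single expression matrix; the actual iMISS method additionally pools neighbor genes across multiple reference datasets and uses least-squares regression, which this sketch omits:

```python
import numpy as np

def knn_impute(X, k=3):
    """Impute missing values (NaN) in a genes-x-samples matrix by
    averaging the k most similar genes, where similarity is the mean
    squared difference over mutually observed samples. Illustrative
    only: the published iMISS method derives neighbor lists from
    multiple reference datasets and regresses rather than averages."""
    X = np.asarray(X, float)
    filled = X.copy()
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        dists = []
        for j in range(X.shape[0]):
            # skip the gene itself and genes missing at the target positions
            if j == i or np.isnan(X[j][miss]).any():
                continue
            both = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if not both.any():
                continue
            dists.append((np.mean((X[i][both] - X[j][both]) ** 2), j))
        dists.sort()
        neighbors = [j for _, j in dists[:k]]
        if neighbors:
            filled[i, miss] = np.mean(X[neighbors][:, miss], axis=0)
    return filled
```

The reference-dataset extension would simply draw the `neighbors` list from genes whose similarity is consistent across several such matrices before averaging.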
Rode, Tone Mari; Berget, Ingunn; Langsrud, Solveig; Møretrø, Trond; Holck, Askild
2009-07-01
Microorganisms are constantly exposed to new and altered growth conditions, and respond by changing gene expression patterns. Several methods for studying gene expression exist. During the last decade, the analysis of microarrays has been one of the most common approaches applied for large scale gene expression studies. A relatively new method for gene expression analysis is MassARRAY, which combines real competitive-PCR and MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight) mass spectrometry. In contrast to microarray methods, MassARRAY technology is suitable for analysing a larger number of samples, though for a smaller set of genes. In this study we compare the results from MassARRAY with microarrays on gene expression responses of Staphylococcus aureus exposed to acid stress at pH 4.5. RNA isolated from the same stress experiments was analysed using both the MassARRAY and the microarray methods. The MassARRAY and microarray methods showed good correlation. Both MassARRAY and microarray estimated somewhat lower fold changes compared with quantitative real-time PCR (qRT-PCR). The results confirmed the up-regulation of the urease genes in acidic environments, and also indicated the importance of metal ion regulation. This study shows that the MassARRAY technology is suitable for gene expression analysis in prokaryotes, and has advantages when a set of genes is being analysed for an organism exposed to many different environmental conditions.
A fisheye viewer for microarray-based gene expression data
Wu, Min; Thao, Cheng; Mu, Xiangming; Munson, Ethan V
2006-01-01
Background Microarray technology has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists are accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface – an electronic table (E-table) that uses fisheye distortion technology. Results The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, the default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing against compression that maximizes the amount of data on the screen. Conclusion This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview raw microarray-based gene expression data in an E-table. PMID:17038193
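The variable magnification described above is commonly implemented with the classic Sarkar-Brown graphical fisheye transform; a sketch of that transform follows. This is one standard formulation, not necessarily the one used in the paper's E-table code:

```python
def fisheye(x, focus, d=3.0):
    """Sarkar-Brown graphical fisheye transform in one dimension.
    x and focus are normalized coordinates in [0, 1]; d >= 0 is the
    distortion factor (d = 0 leaves coordinates unchanged).
    Positions near the focus are magnified; positions far from it
    are compressed, so the whole range still maps onto [0, 1]."""
    if x >= focus:
        span = 1.0 - focus
        t = (x - focus) / span if span else 0.0
        return focus + span * ((d + 1) * t) / (d * t + 1)
    span = focus
    t = (focus - x) / span if span else 0.0
    return focus - span * ((d + 1) * t) / (d * t + 1)
```

Applying the transform independently to row and column coordinates yields the distorted table layout: the focused cell and its neighbors get wide rows/columns while distant cells shrink, keeping the full table on screen.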
Fiber-optic microarray for simultaneous detection of multiple harmful algal bloom species.
Ahn, Soohyoun; Kulis, David M; Erdner, Deana L; Anderson, Donald M; Walt, David R
2006-09-01
Harmful algal blooms (HABs) are a serious threat to coastal resources, causing a variety of impacts on public health, regional economies, and ecosystems. Plankton analysis is a valuable component of many HAB monitoring and research programs, but the diversity of plankton poses a problem in discriminating toxic from nontoxic species using conventional detection methods. Here we describe a sensitive and specific sandwich hybridization assay that combines fiber-optic microarrays with oligonucleotide probes to detect and enumerate the HAB species Alexandrium fundyense, Alexandrium ostenfeldii, and Pseudo-nitzschia australis. Microarrays were prepared by loading oligonucleotide probe-coupled microspheres (diameter, 3 µm) onto the distal ends of chemically etched imaging fiber bundles. Hybridization of target rRNA from HAB cells to immobilized probes on the microspheres was visualized using Cy3-labeled secondary probes in a sandwich-type assay format. We applied these microarrays to the detection and enumeration of HAB cells in both cultured and field samples. Our study demonstrated a detection limit of approximately 5 cells for all three target organisms within 45 min, without a separate amplification step, in both sample types. We also developed a multiplexed microarray to detect the three HAB species simultaneously, which successfully detected the target organisms, alone and in combination, without cross-reactivity. Our study suggests that fiber-optic microarrays can be used for rapid and sensitive detection and potential enumeration of HAB species in the environment.
Microarray analysis of potential genes in the pathogenesis of recurrent oral ulcer.
Han, Jingying; He, Zhiwei; Li, Kun; Hou, Lu
2015-01-01
Recurrent oral ulcer seriously threatens patients' daily life and health. This study investigated potential genes and pathways that participate in the pathogenesis of recurrent oral ulcer by high-throughput bioinformatic analysis, with RT-PCR and Western blot applied to further verify the effects of the screened interleukins. Recurrent oral ulcer-related genes were collected from online databases and published papers, and additional candidates were identified from Human Genome 280 6.0 microarray data. Pathways of recurrent oral ulcer-related genes were obtained through chip hybridization, and RT-PCR was used to test four of these genes to verify the microarray data. Data transformation, scatter plots, clustering analysis, and expression-pattern analysis were used to analyze expression changes of recurrent oral ulcer-related genes. A recurrent oral ulcer gene microarray was successfully established. The microarray showed that 551 genes were involved in recurrent oral ulcer activity, of which 196 were recurrent oral ulcer-related genes: 76 up-regulated, 62 down-regulated, and 58 both up- and down-regulated. Total expression levels were up-regulated 752 times (60%) and down-regulated 485 times (40%). IL-2 plays an important role in the occurrence, development and recurrence of recurrent oral ulcer at both the mRNA and protein levels. Gene microarrays can be used to identify potential genes and pathways in recurrent oral ulcer, and IL-2 may be involved in its pathogenesis.
Scholten, Johannes C M; Culley, David E; Nie, Lei; Munn, Kyle J; Chow, Lely; Brockman, Fred J; Zhang, Weiwen
2007-06-29
The application of DNA microarray technology to investigate multiple-species microbial communities presents great challenges. In this study, we report the design and quality assessment of four whole-genome oligonucleotide microarrays for two syntrophic bacteria, Desulfovibrio vulgaris and Syntrophobacter fumaroxidans, and two archaeal methanogens, Methanosarcina barkeri and Methanospirillum hungatei, and their application to analyze global gene expression in a four-species microbial community in response to oxidative stress. In order to minimize the possibility of cross-hybridization, cross-genome comparison was performed to ensure that all probes were unique to each genome, so that the microarrays could provide species-level resolution. Microarray quality was validated by the good reproducibility of experimental measurements across multiple biological and analytical replicates. This study showed that S. fumaroxidans and M. hungatei responded to the oxidative stress with up-regulation of several genes known to be involved in reactive oxygen species (ROS) detoxification, such as catalase and rubrerythrin in S. fumaroxidans and thioredoxin and heat shock protein Hsp20 in M. hungatei. However, D. vulgaris seemed to be less sensitive to the oxidative stress as a member of the four-species community, since none of its genes involved in ROS detoxification was up-regulated. Our work demonstrated the successful application of microarrays to a multiple-species microbial community, and our preliminary results indicate that this approach could provide novel insights into the metabolism within microbial communities.
A proposed metric for assessing the measurement quality of individual microarrays
Kim, Kyoungmi; Page, Grier P; Beasley, T Mark; Barnes, Stephen; Scheirer, Katherine E; Allison, David B
2006-01-01
Background High-density microarray technology is increasingly applied to study gene expression levels on a large scale. Microarray experiments rely on several critical steps that may introduce error and uncertainty in analyses. These steps include mRNA sample extraction, amplification and labeling, hybridization, and scanning. In some cases this may be manifested as systematic spatial variation on the surface of the microarray, in which expression measurements within an individual array vary as a function of geographic position on the array surface. Results We hypothesized that an index of the degree of spatiality of gene expression measurements, relative to their physical geographic locations on an array, could summarize the physical reliability of the microarray. We introduced a novel way to formulate this index using a statistical analysis tool. Our approach regressed gene expression intensity measurements on a polynomial response surface of the microarray's Cartesian coordinates. We demonstrated this method using a fixed model and presented results from real and simulated datasets. Conclusion We demonstrated the potential of such a quantitative metric for assessing the reliability of individual arrays. Moreover, we showed that this procedure can be incorporated into laboratory practice as a means to set quality control specifications and as a tool to determine whether an array has sufficient quality, in terms of spatial correlation of gene expression measurements, to be retained. PMID:16430768
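The core idea above, regressing intensities on a polynomial response surface of the array's coordinates and reading the explained variance as a spatiality index, can be sketched as follows. This is an illustration of the approach, not the authors' fixed-model implementation or their formal test:

```python
import numpy as np

def spatial_bias_index(x, y, intensity, degree=2):
    """Fit a polynomial response surface intensity ~ f(x, y) by least
    squares and return R^2, the fraction of intensity variance
    explained by position on the array. A value near 0 means spot
    intensities are unrelated to location; a high value flags
    systematic spatial variation. Assumes non-constant intensities."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    intensity = np.asarray(intensity, float)
    # Build the polynomial design matrix: 1, x, y, x^2, xy, y^2, ...
    terms = [np.ones_like(x)]
    for total in range(1, degree + 1):
        for i in range(total + 1):
            terms.append((x ** (total - i)) * (y ** i))
    A = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    resid = intensity - A @ coef
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((intensity - intensity.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A laboratory could then set a quality-control cutoff on this index and discard arrays whose intensities track position too strongly.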
Tojo, Axel; Malm, Johan; Marko-Varga, György; Lilja, Hans; Laurell, Thomas
2014-01-01
Antibody microarrays have become widespread, but their use for quantitative analyses in clinical samples has not yet been established. We investigated an immunoassay based on nanoporous silicon antibody microarrays for quantification of total prostate-specific antigen (PSA) in 80 clinical plasma samples, and provide quantitative data from a duplex microarray assay that simultaneously quantifies free and total PSA in plasma. To further develop the assay, the porous silicon chips were placed into a standard 96-well microtiter plate for higher-throughput analysis. The samples analyzed by this quantitative microarray were 80 plasma samples obtained from men undergoing clinical PSA testing (dynamic range: 0.14-44 ng/ml, LOD: 0.14 ng/ml). The second dataset, measuring free PSA (dynamic range: 0.40-74.9 ng/ml, LOD: 0.47 ng/ml) and total PSA (dynamic range: 0.87-295 ng/ml, LOD: 0.76 ng/ml), was also obtained from the clinical routine. The reference for the quantification was a commercially available assay, the ProStatus PSA Free/Total DELFIA. In the analysis of the 80 plasma samples, the microarray platform performed well across the range of total PSA levels. This assay might have the potential to substitute for the large-scale microtiter plate format in diagnostic applications. The duplex assay paves the way for a future quantitative multiplex assay that analyses several prostate cancer biomarkers simultaneously. PMID:22921878
Wimmer, Isabella; Tröscher, Anna R; Brunner, Florian; Rubino, Stephen J; Bien, Christian G; Weiner, Howard L; Lassmann, Hans; Bauer, Jan
2018-04-20
Formalin-fixed paraffin-embedded (FFPE) tissues are valuable resources commonly used in pathology. However, formalin fixation modifies nucleic acids, challenging the isolation of high-quality RNA for genetic profiling. Here, we assessed the feasibility and reliability of microarray studies analysing transcriptome data from fresh, fresh-frozen (FF) and FFPE tissues. We show that reproducible microarray data can be generated from only 2 ng of FFPE-derived RNA. For RNA quality assessment, fragment size distribution (DV200) and qPCR proved most suitable. During RNA isolation, extending tissue lysis time to 10 hours reduced high-molecular-weight species, while additional incubation at 70 °C markedly increased RNA yields. Since FF- and FFPE-derived microarrays constitute different data entities, we used indirect measures to investigate gene signal variation and relative gene expression. Whole-genome analyses revealed high concordance rates, while review on a single-gene basis showed higher data variation in FFPE than in FF arrays. Using an experimental model, gene set enrichment analysis (GSEA) of FFPE-derived microarrays and fresh tissue-derived RNA-Seq datasets yielded similarly affected pathways, confirming the applicability of FFPE tissue in global gene expression analysis. Our study provides a workflow comprising RNA isolation, quality assessment and microarray profiling using minimal RNA input, thus enabling hypothesis-generating pathway analyses from limited amounts of precious, pathologically significant FFPE tissues.
Yang, Yunfeng; Zhu, Mengxia; Wu, Liyou; Zhou, Jizhong
2008-09-16
Using genomic DNA as a common reference in microarray experiments has recently been tested by different laboratories, and conflicting results have been reported with regard to the reliability of microarray results obtained with this method. To explain this, we hypothesized that data processing is a critical element that impacts the data quality. Microarray experiments were performed in the gamma-proteobacterium Shewanella oneidensis. Pair-wise comparisons of three experimental conditions were obtained either with two labeled cDNA samples co-hybridized to the same array, or by employing Shewanella genomic DNA as a standard reference. Various data processing techniques were exploited to reduce the amount of inconsistency between the two methods, and the results were assessed. We discovered that data quality was significantly improved by imposing a constraint on the minimal number of replicates, logarithmic transformation, and random error analyses. These findings demonstrate that data processing significantly influences data quality, which provides an explanation for the conflicting evaluations in the literature. This work could serve as a guideline for microarray data analysis using genomic DNA as a standard reference.
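Two of the processing constraints found to help, a minimum-replicate filter and logarithmic transformation, can be illustrated with a small sketch of indirect comparison through a common reference (log2(A/R) - log2(B/R) = log2(A/B)). Gene names and numbers here are invented:

```python
import numpy as np

def indirect_log_ratios(a_vs_ref, b_vs_ref, min_reps=3):
    """Compare conditions A and B through a common genomic-DNA
    reference R. Inputs map gene -> list of replicate A/R or B/R
    ratios. Genes with fewer than `min_reps` replicates in either
    condition are dropped (the replicate constraint), and replicates
    are averaged after log2 transformation, so the result estimates
    log2(A/B) per gene. Illustrative sketch of the idea only."""
    out = {}
    for gene in set(a_vs_ref) & set(b_vs_ref):
        a = np.asarray(a_vs_ref[gene], float)
        b = np.asarray(b_vs_ref[gene], float)
        if len(a) < min_reps or len(b) < min_reps:
            continue  # too few replicates: unreliable, skip
        out[gene] = np.mean(np.log2(a)) - np.mean(np.log2(b))
    return out
```

Because the reference cancels in the subtraction, any two conditions hybridized against the same genomic-DNA standard can be compared without ever being co-hybridized on one array.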
Improved microarray methods for profiling the yeast knockout strain collection
Yuan, Daniel S.; Pan, Xuewen; Ooi, Siew Loon; Peyser, Brian D.; Spencer, Forrest A.; Irizarry, Rafael A.; Boeke, Jef D.
2005-01-01
A remarkable feature of the Yeast Knockout strain collection is the presence of two unique 20mer TAG sequences in almost every strain. In principle, the relative abundances of strains in a complex mixture can be profiled swiftly and quantitatively by amplifying these sequences and hybridizing them to microarrays, but TAG microarrays have not been widely used. Here, we introduce a TAG microarray design with sophisticated controls and describe a robust method for hybridizing high concentrations of dye-labeled TAGs in single-stranded form. We also highlight the importance of avoiding PCR contamination and provide procedures for detection and eradication. Validation experiments using these methods yielded false positive (FP) and false negative (FN) rates for individual TAG detection of 3–6% and 15–18%, respectively. Analysis demonstrated that cross-hybridization was the chief source of FPs, while TAG amplification defects were the main cause of FNs. The materials, protocols, data and associated software described here comprise a suite of experimental resources that should facilitate the use of TAG microarrays for a wide variety of genetic screens. PMID:15994458
An efficient method to identify differentially expressed genes in microarray experiments
Qin, Huaizhen; Feng, Tao; Harding, Scott A.; Tsai, Chung-Jui; Zhang, Shuanglin
2013-01-01
Motivation: Microarray experiments typically analyze thousands to tens of thousands of genes from small numbers of biological replicates. The fact that genes are normally expressed in functionally relevant patterns suggests that gene-expression data can be stratified and clustered into relatively homogeneous groups. Cluster-wise dimensionality reduction should make it feasible to improve screening power while minimizing information loss. Results: We propose a powerful and computationally simple method for finding differentially expressed genes in small microarray experiments. The method incorporates a novel stratification-based tight clustering algorithm, principal component analysis and information pooling. Comprehensive simulations show that our method is substantially more powerful than the popular SAM and eBayes approaches. We applied the method to three real microarray datasets: one from a Populus nitrogen stress experiment with 3 biological replicates; and two from public microarray datasets of human cancers with 10 to 40 biological replicates. In all three analyses, our method proved more robust than the popular alternatives for identification of differentially expressed genes. Availability: The C++ code to implement the proposed method is available upon request for academic use. PMID:18453554
Clustering approaches to identifying gene expression patterns from DNA microarray data.
Do, Jin Hwan; Choi, Dong-Kug
2008-04-30
Analysis is essential for making sense of the large amounts of gene expression data produced by microarrays. In this review we focus on clustering techniques. The biological rationale for this approach is the fact that many co-expressed genes are co-regulated, so identifying co-expressed genes can aid in the functional annotation of novel genes, de novo identification of transcription factor binding sites, and elucidation of complex biological pathways. Co-expressed genes are usually identified in microarray experiments by clustering techniques. There are many such methods, and the results obtained even for the same dataset may vary considerably depending on the algorithms and dissimilarity metrics used, as well as on user-selectable parameters such as the desired number of clusters and initial values. Therefore, biologists who want to interpret microarray data should be aware of the strengths and weaknesses of the clustering methods used. In this review, we survey the basic principles of clustering DNA microarray data, from crisp clustering algorithms such as hierarchical clustering, K-means and self-organizing maps, to complex clustering algorithms like fuzzy clustering.
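Of the crisp algorithms surveyed in this review, K-means is the simplest to state. A minimal stdlib-only Python sketch of Lloyd's iterations on toy expression profiles (the data, k, and iteration count are illustrative):

```python
import random

def kmeans(profiles, k, iters=50, seed=0):
    """Lloyd's k-means on gene expression profiles (lists of floats)."""
    rnd = random.Random(seed)
    centroids = [list(p) for p in rnd.sample(profiles, k)]
    assign = [0] * len(profiles)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(profiles):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        # update step: each centroid becomes the mean of its members
        for c in range(k):
            members = [profiles[i] for i in range(len(profiles)) if assign[i] == c]
            if members:
                centroids[c] = [sum(vals) / len(members) for vals in zip(*members)]
    return assign, centroids
```

As the review notes, the result depends on the seed and on the user-chosen k, which is exactly why the same dataset can cluster differently across runs.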
Lee, SangWook; Kim, Soyoun; Malm, Johan; Jeong, Ok Chan; Lilja, Hans; Laurell, Thomas
2014-01-01
Enriching the surface density of immobilized capture antibodies enhances the detection signal of antibody sandwich microarrays. In this study, we improved the detection sensitivity of our previously developed P-Si (porous silicon) antibody microarray by optimizing concentrations of the capture antibody. We investigated immunoassays using a P-Si microarray at three different capture antibody (PSA - prostate specific antigen) concentrations, analyzing the influence of the antibody density on the assay detection sensitivity. The LOD (limit of detection) for PSA was 2.5 ng mL−1, 80 pg mL−1, and 800 fg mL−1 when arraying the PSA antibody H117 at concentrations of 15 µg mL−1, 35 µg mL−1 and 154 µg mL−1, respectively. We further investigated PSA spiked into human female serum in the range of 800 fg mL−1 to 500 ng mL−1. The microarray showed a LOD of 800 fg mL−1 and a dynamic range of 800 fg mL−1 to 80 ng mL−1 in serum-spiked samples. PMID:24016590
Shin, Hwa Hui; Hwang, Byeong Hee; Seo, Jeong Hyun; Cha, Hyung Joon
2014-01-01
It is important to rapidly and selectively detect and analyze pathogenic Salmonella enterica subsp. enterica in contaminated food to reduce the morbidity and mortality of Salmonella infection and to guarantee food safety. In the present work, we developed an oligonucleotide microarray containing duplicate specific capture probes based on the carB gene, which encodes the carbamoyl phosphate synthetase large subunit, as a competent biomarker evaluated by genetic analysis to selectively and efficiently detect and discriminate three S. enterica subsp. enterica serotypes: Choleraesuis, Enteritidis, and Typhimurium. Using the developed microarray system, three serotype targets were successfully analyzed in a range as low as 1.6 to 3.1 nM and were specifically discriminated from each other without nonspecific signals. In addition, the constructed microarray did not have cross-reactivity with other common pathogenic bacteria and even enabled the clear discrimination of the target Salmonella serotype from a bacterial mixture. Therefore, these results demonstrated that our novel carB-based oligonucleotide microarray can be used as an effective and specific detection system for S. enterica subsp. enterica serotypes. PMID:24185846
Xu, Xiaodan; Li, Yingcong; Zhao, Heng; Wen, Si-yuan; Wang, Sheng-qi; Huang, Jian; Huang, Kun-lun; Luo, Yun-bo
2005-05-18
To devise a rapid and reliable method for the detection and identification of genetically modified (GM) events, we developed a multiplex polymerase chain reaction (PCR) coupled with a DNA microarray system simultaneously aiming at many targets in a single reaction. The system included probes for screening gene, species reference gene, specific gene, construct-specific gene, event-specific gene, and internal and negative control genes. 18S rRNA was combined with species reference genes as internal controls to assess the efficiency of all reactions and to eliminate false negatives. Two sets of the multiplex PCR system were used to amplify four and five targets, respectively. Eight different structure genes could be detected and identified simultaneously for Roundup Ready soybean in a single microarray. The microarray specificity was validated by its ability to discriminate two GM maizes Bt176 and Bt11. The advantages of this method are its high specificity and greatly reduced false-positives and -negatives. The multiplex PCR coupled with microarray technology presented here is a rapid and reliable tool for the simultaneous detection of GM organism ingredients.
A study of metaheuristic algorithms for high dimensional feature selection on microarray data
NASA Astrophysics Data System (ADS)
Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna
2017-11-01
Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, several difficulties can limit the efficiency of machine learning algorithms, including the vast number of gene features in the original data, many of which may be irrelevant to the intended analysis. Feature selection is therefore a necessary pre-processing step. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection on microarray datasets. This study reveals that these algorithms yield interesting results with limited resources, thereby reducing the computational expense of the downstream machine learning algorithms.
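To make the metaheuristic idea concrete, here is a deliberately small stdlib-only Python sketch: stochastic hill climbing over feature subsets with a toy filter-style fitness. The fitness function, penalty weight, and data are all hypothetical stand-ins for the far richer objectives used in the surveyed work:

```python
import random

def fitness(subset, X, y):
    """Toy filter score: sum of absolute class-mean differences over
    the selected features, penalized by subset size."""
    score = 0.0
    for f in subset:
        m0 = [x[f] for x, c in zip(X, y) if c == 0]
        m1 = [x[f] for x, c in zip(X, y) if c == 1]
        score += abs(sum(m1) / len(m1) - sum(m0) / len(m0))
    return score - 0.1 * len(subset)

def hill_climb(X, y, n_features, steps=200, seed=1):
    """Stochastic hill climbing over feature subsets via bit-flip moves."""
    rnd = random.Random(seed)
    current = set(rnd.sample(range(n_features), 1))
    best, best_fit = set(current), fitness(current, X, y)
    for _ in range(steps):
        cand = set(current)
        cand.symmetric_difference_update({rnd.randrange(n_features)})  # flip one feature
        if fitness(cand, X, y) >= fitness(current, X, y):
            current = cand
            if fitness(current, X, y) > best_fit:
                best, best_fit = set(current), fitness(current, X, y)
    return best
```

Genetic algorithms, particle swarms, and the other metaheuristics discussed in the paper differ mainly in how candidate subsets are proposed; the evaluate-and-keep loop is the same shape.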
A DNA microarray-based assay to detect dual infection with two dengue virus serotypes.
Díaz-Badillo, Alvaro; Muñoz, María de Lourdes; Perez-Ramirez, Gerardo; Altuzar, Victor; Burgueño, Juan; Mendoza-Alvarez, Julio G; Martínez-Muñoz, Jorge P; Cisneros, Alejandro; Navarrete-Espinosa, Joel; Sanchez-Sinencio, Feliciano
2014-04-25
Here, we describe and test a microarray-based method for the screening of dengue virus (DENV) serotypes. This DNA microarray assay is specific and sensitive and can detect dual infections with two dengue virus serotypes as well as single-serotype infections. Other methodologies may underestimate samples containing more than one serotype. This technology can be used to discriminate between the four DENV serotypes. Single-stranded DNA targets were covalently attached to glass slides and hybridised with specific labelled probes. DENV isolates and dengue samples were used to evaluate microarray performance. Our results demonstrate that the probes hybridised specifically to DENV serotypes, with no unspecific signals detected. This finding provides evidence that specific probes can effectively identify single and double infections in DENV samples. PMID:24776933
NASA Astrophysics Data System (ADS)
Gao, S. S.; Kong, F.; Wu, J.; Liu, L.; Liu, K. H.
2017-12-01
Seismic azimuthal anisotropy is measured at 83 stations situated at the southeastern margin of the Tibetan Plateau and adjacent regions based on shear-wave splitting analyses. A total of 1701 individual pairs of splitting parameters (fast polarization orientations and splitting delay times) are obtained using the PKS, SKKS, and SKS phases. The splitting parameters from 21 stations exhibit systematic back-azimuthal variations with a 90° periodicity, which is consistent with a two-layer anisotropy model. The resulting upper-layer splitting parameters computed based on a grid-search algorithm are comparable with crustal anisotropy measurements obtained independently based on the sinusoidal moveout of P-to-S conversions from the Moho. The fast orientations of the upper layer anisotropy, which is mostly parallel with major shear zones, are associated with crustal fabrics with a vertical foliation plane. The lower layer anisotropy and the station averaged splitting parameters at stations with azimuthally invariant splitting parameters can be adequately explained by the differential movement between the lithosphere and asthenosphere. The NW-SE fast orientations obtained in the northern part of the study area probably reflect the southeastward extruded mantle flow from central Tibet. In contrast, the NE-SW to E-W fast orientations observed in the southern part of the study area are most likely related to the northeastward to eastward mantle flow induced by the subduction of the Burma microplate.
[Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].
Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong
2013-03-01
Model selection for support vector machines (SVM), i.e., choosing the kernel and margin parameter values, is usually time-consuming and strongly impacts both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and cross-validation, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM) to improve the training efficiency of the SVM model. An experiment classifying an AVIRIS image of the Indian Pines site, USA, was then performed to test the novel CSSVM against a traditional SVM classifier tuned by grid-search cross-validation (GSSVM). Evaluation indexes, including model training time, overall classification accuracy (OA), and the Kappa index, were analyzed quantitatively for both CSSVM and GSSVM. The OA of CSSVM on the test samples and on the whole image is 85.1% and 81.58%, respectively, differing from GSSVM by less than 0.08% in both cases; the Kappa indexes reach 0.8213 and 0.7728, differing from GSSVM by less than 0.001; and the model training time of CSSVM is between 1/6 and 1/10 that of GSSVM. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
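The GSSVM baseline above is ordinary grid search under cross-validation. The scaffold is shown below as a stdlib-only Python sketch; to stay dependency-free it tunes k for a k-NN classifier rather than (C, kernel parameter) for an SVM, so it illustrates the search loop, not the paper's classifier:

```python
def knn_predict(train_X, train_y, x, k):
    """Majority vote among the k nearest training points (squared Euclidean)."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = [train_y[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

def grid_search_loo(X, y, k_grid):
    """Pick the grid point with the best leave-one-out CV accuracy."""
    best_k, best_acc = None, -1.0
    for k in k_grid:
        hits = 0
        for i in range(len(X)):
            train_X = X[:i] + X[i + 1:]
            train_y = y[:i] + y[i + 1:]
            hits += knn_predict(train_X, train_y, X[i], k) == y[i]
        acc = hits / len(X)
        if acc > best_acc:  # keep the first grid point that achieves the best score
            best_k, best_acc = k, acc
    return best_k, best_acc
```

The cost the paper attacks is visible here: every grid point pays a full cross-validation, which is why a guided search such as clonal selection can be 6-10x cheaper.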
Dartnell, P.; Gardner, J.V.
2009-01-01
The seafloor off greater Los Angeles, California, has been extensively studied for the past century. Terrain analysis of recently compiled multibeam bathymetry reveals the detailed seafloor morphology along the Los Angeles Margin and San Pedro Basin. The terrain analysis uses the multibeam bathymetry to calculate two seafloor indices, seafloor slope and a Topographic Position Index. The derived grids, along with depth, are analyzed in a hierarchical, decision-tree classification to delineate six seafloor provinces: high-relief shelf, low-relief shelf, steep-basin slope, gentle-basin slope, gullies and canyons, and basins. Rock outcrops protrude in places above the generally smooth continental shelf. Gullies incise the steep-basin slopes, and some submarine canyons extend from the coastline to the basin floor. San Pedro Basin is separated from the Santa Monica Basin to the north by a ridge consisting of the Redondo Knoll and the Redondo Submarine Canyon delta. An 865-m-deep sill separates the two basins. Water depths of San Pedro Basin are ~100 m deeper than those in the San Diego Trough to the south, and three passes breach a ridge that separates the San Pedro Basin from the San Diego Trough. Information gained from this study can be used as base maps for such future studies as tectonic reconstructions, identifying sedimentary processes, tracking pollution transport, and defining benthic habitats. © 2009 The Geological Society of America.
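A hierarchical decision-tree classification over depth, slope, and Topographic Position Index (TPI) reduces to nested threshold tests. The Python sketch below shows the shape of such a classifier over the six province labels; every threshold is invented for illustration and is not taken from the published study:

```python
def classify_province(depth_m, slope_deg, tpi):
    """Toy hierarchical decision tree over depth, slope, and TPI.
    All thresholds are hypothetical, not the published values."""
    if slope_deg > 20:
        # steep terrain: negative TPI means an incised low (gully/canyon)
        return "gullies and canyons" if tpi < 0 else "steep-basin slope"
    if depth_m < 200:
        # shelf depths: positive TPI flags relief such as rock outcrops
        return "high-relief shelf" if tpi > 5 else "low-relief shelf"
    if slope_deg > 2:
        return "gentle-basin slope"
    return "basins"
```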
Double stranded nucleic acid biochips
Chernov, Boris; Golova, Julia
2006-05-23
This invention describes a new method of constructing double-stranded DNA (dsDNA) microarrays based on the use of pre-synthesized or natural DNA duplexes without a stem-loop structure. The complementary oligonucleotide chains are bonded together by a novel connector that includes a linker for immobilization on a matrix. A non-enzymatic method for synthesizing double-stranded nucleic acids with this novel connector enables the construction of inexpensive and robust dsDNA/dsRNA microarrays. DNA-DNA and DNA-protein interactions are investigated using the microarrays.
Screening Mammalian Cells on a Hydrogel: Functionalized Small Molecule Microarray.
Zhu, Biwei; Jiang, Bo; Na, Zhenkun; Yao, Shao Q
2017-01-01
Mammalian cell-based microarray technology has gained wide attention for its plethora of promising applications. The platform is able to provide simultaneous information on multiple parameters for a given target, or even multiple target proteins, in a complex biological system. Here we describe the preparation of mammalian cell-based microarrays by selectively capturing human prostate cancer cells (PC-3). This platform was then used for controlled drug release and for measuring the associated drug effects on these cancer cells.
Use of DNA Microarrays to Identify Diagnostic Signature Transcription Profiles for Host Responses to...
2004-10-01
Key signature genes will serve as the basis for rapid diagnostic approaches that could be accessed when an outbreak is suspected... (Award Number: DAMD17-01-1-0787).
Workflows for microarray data processing in the Kepler environment.
Stropp, Thomas; McPhillips, Timothy; Ludäscher, Bertram; Bieda, Mark
2012-05-17
Microarray data analysis has been the subject of extensive and ongoing pipeline development due to its complexity, the availability of several options at each analysis step, and the development of new analysis demands, including integration with new data sources. Bioinformatics pipelines are usually custom built for different applications, making them typically difficult to modify, extend and repurpose. Scientific workflow systems are intended to address these issues by providing general-purpose frameworks in which to develop and execute such pipelines. The Kepler workflow environment is a well-established system under continual development that is employed in several areas of scientific research. Kepler provides a flexible graphical interface, featuring clear display of parameter values, for design and modification of workflows. It has capabilities for developing novel computational components in the R, Python, and Java programming languages, all of which are widely used for bioinformatics algorithm development, along with capabilities for invoking external applications and using web services. We developed a series of fully functional bioinformatics pipelines addressing common tasks in microarray processing in the Kepler workflow environment. These pipelines consist of a set of tools for GFF file processing of NimbleGen chromatin immunoprecipitation on microarray (ChIP-chip) datasets and more comprehensive workflows for Affymetrix gene expression microarray bioinformatics and basic primer design for PCR experiments, which are often used to validate microarray results. Although functional in themselves, these workflows can be easily customized, extended, or repurposed to match the needs of specific projects and are designed to be a toolkit and starting point for specific applications. 
These workflows illustrate a workflow programming paradigm focusing on local resources (programs and data) and therefore are close to traditional shell scripting or R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.
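The actor-pipeline style these Kepler workflows embody can be sketched in a few lines of stdlib Python: named steps chained over data, with the step list itself as the editable, repurposable artifact. The steps shown are hypothetical microarray-flavored examples, not the published NimbleGen/Affymetrix workflows:

```python
import math

def run_workflow(data, steps):
    """Chain a list of (name, function) processing steps over `data`,
    recording the executed step names, in the spirit of a workflow system."""
    log = []
    for name, fn in steps:
        data = fn(data)
        log.append(name)
    return data, log

# Illustrative steps on a list of raw intensities (all hypothetical)
steps = [
    ("log2-transform", lambda xs: [math.log2(x) for x in xs]),
    ("filter-low",     lambda xs: [x for x in xs if x > 1.0]),
    ("sort",           sorted),
]
```

Swapping, inserting, or removing a step only edits the `steps` list, which is the customize-and-repurpose property the abstract emphasizes.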
NASA Astrophysics Data System (ADS)
Desa, Maria Ana; Ismaiel, Mohammad; Suresh, Yenne; Krishna, Kolluru Sree
2018-05-01
The ocean floor in the Bay of Bengal has evolved after the breakup of India from Antarctica since the Early Cretaceous. Recent geophysical investigations including updated satellite derived gravity map postulated two phases for the tectonic evolution of the Bay of Bengal, the first phase of spreading occurred in the NW-SE direction forming its Western Basin, while the second phase occurred in the N-S direction resulting in its Eastern Basin. Lack of magnetic data along the spreading direction in the Western Basin prompted us to acquire new magnetic data along four tracks (totaling ∼3000 km) to validate the previously identified magnetic anomaly picks. Comparison of the synthetic seafloor spreading model with the observed magnetic anomalies confirmed the presence of Mesozoic anomalies M12n to M0 in the Western Basin. Further, the model suggests that this spreading between India and Antarctica took place with half-spreading rates of 2.7-4.5 cm/yr. The trend of the fracture zones in the Western Basin with respect to that of the Southeastern Continental Margin of India (SCMI) suggests that SCMI is an oblique transform margin with 37° obliquity. Further, the SCMI consists of two oblique transform segments separated by a small rift segment. The strike-slip motion along the SCMI is bounded by the rift segments of the Northeastern Continental Margin of India and the southern margin of Sri Lanka. The margin configuration and fracture zones inferred in its conjugate Western Enderby Basin, East Antarctica helped in inferring three spreading corridors off the SCMI in the Western Basin of the Bay of Bengal. Detailed grid reconstruction models traced the oblique strike-slip motion off the SCMI since M12n time. The strike-slip motion along the short northern transform segment ended by M11n time. The longer transform segment, found east of Sri Lanka lost its obliquity and became a pure oceanic transform fault by M0 time. 
The eastward propagation of the Africa-Antarctica spreading center initiated the anticlockwise separation of Sri Lanka from India by M12n time. Seafloor spreading south of Sri Lanka due to the India-Antarctica spreading episode and the simultaneously occurring strike-slip motion east of Sri Lanka restricted this separation resulting in a failed rift. Thus Sri Lanka with strike-slip motion to its east, failed rift towards west, continental extension to its north and rifting to its south behaved as a short lived microplate during the Early Cretaceous period and remained attached to India thereafter.
Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, Naresh; Baone, Chaitanya; Veda, Santosh
2014-12-31
Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak or winter peak days. With the widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future, and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real time will become increasingly important. State-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have been rising for more efficient and faster techniques to be implemented in power system simulators.
This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.
RDFBuilder: a tool to automatically build RDF-based interfaces for MAGE-OM microarray data sources.
Anguita, Alberto; Martin, Luis; Garcia-Remesal, Miguel; Maojo, Victor
2013-07-01
This paper presents RDFBuilder, a tool that enables RDF-based access to MAGE-ML-compliant microarray databases. We have developed a system that automatically transforms the MAGE-OM model and microarray data stored in the ArrayExpress database into RDF format. Additionally, the system automatically enables a SPARQL endpoint. This allows users to execute SPARQL queries for retrieving microarray data, either from specific experiments or from more than one experiment at a time. Our system optimizes response times by caching and reusing information from previous queries. In this paper, we describe our methods for achieving this transformation. We show that our approach is complementary to other existing initiatives, such as Bio2RDF, for accessing and retrieving data from the ArrayExpress database. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
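A SPARQL endpoint of the kind RDFBuilder exposes is queried with SELECT statements over the experiment graph. The Python sketch below only builds such a query string; the prefix, predicate names, and graph layout are hypothetical placeholders, not the actual RDFBuilder/MAGE-OM vocabulary:

```python
def experiment_query(accession):
    """Build a SPARQL SELECT query for the measurements of one experiment.
    The ex: vocabulary below is invented for illustration."""
    return f"""
PREFIX ex: <http://example.org/mage#>
SELECT ?gene ?value WHERE {{
  ?exp ex:accession "{accession}" .
  ?exp ex:hasMeasurement ?m .
  ?m ex:gene ?gene ;
     ex:value ?value .
}}
"""
```

In practice the string would be POSTed to the endpoint (or run through a library such as rdflib); caching identical query strings, as the paper describes, is what lets repeated requests skip the transformation step.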
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
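To show the autoregressive idea in its simplest form, here is a stdlib-only Python sketch of AR(1) imputation: fit one coefficient by least squares on consecutive observed pairs, then predict each missing point from its predecessor. ARLSimpute itself uses higher-order models plus local similarity structure; this is only the AR core, on hypothetical data:

```python
def ar1_impute(series):
    """Fit an AR(1) coefficient on observed consecutive pairs and fill
    missing values (None) by one-step prediction from the previous point."""
    # least-squares AR(1): phi = sum(x[t-1]*x[t]) / sum(x[t-1]^2)
    pairs = [(series[t - 1], series[t]) for t in range(1, len(series))
             if series[t - 1] is not None and series[t] is not None]
    num = sum(prev * cur for prev, cur in pairs)
    den = sum(prev * prev for prev, _ in pairs)
    phi = num / den
    filled = list(series)
    for t in range(1, len(filled)):
        if filled[t] is None and filled[t - 1] is not None:
            filled[t] = phi * filled[t - 1]  # one-step AR prediction
    return filled, phi
```

Because the prediction uses only the temporal model, this style of estimator still works when an entire time point (column) is missing, which is the case the abstract singles out.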
Fei, Yiyan; Landry, James P; Sun, Yungshin; Zhu, Xiangdong; Wang, Xiaobing; Luo, Juntao; Wu, Chun-Yi; Lam, Kit S
2010-01-01
We describe a high-throughput scanning optical microscope for detecting small-molecule compound microarrays on functionalized glass slides. It is based on measurements of oblique-incidence reflectivity difference and employs a combination of a y-scan galvanometer mirror and an x-scan translation stage with an effective field of view of 2 cm × 4 cm. Such a field of view can accommodate a printed small-molecule compound microarray with as many as 10,000 to 20,000 targets. The scanning microscope is capable of measuring kinetics as well as endpoints of protein-ligand reactions simultaneously. We present experimental results on solution-phase protein reactions with small-molecule compound microarrays synthesized from one-bead, one-compound combinatorial chemistry and immobilized on a streptavidin-functionalized glass slide. PMID:20210464
Microarray slide hybridization using fluorescently labeled cDNA.
Ares, Manuel
2014-01-01
Microarray hybridization is used to determine the amount and genomic origins of RNA molecules in an experimental sample. Unlabeled probe sequences for each gene or gene region are printed in an array on the surface of a slide, and fluorescently labeled cDNA derived from the RNA target is hybridized to it. This protocol describes a blocking and hybridization protocol for microarray slides. The blocking step is particular to the chemistry of "CodeLink" slides, but it serves to remind us that almost every kind of microarray has a treatment step that occurs after printing but before hybridization. We recommend making sure of the precise treatment necessary for the particular chemistry used in the slides to be hybridized because the attachment chemistries differ significantly. Hybridization is similar to northern or Southern blots, but on a much smaller scale.
Bessonov, Kyrylo; Walkey, Christopher J.; Shelp, Barry J.; van Vuuren, Hennie J. J.; Chiu, David; van der Merwe, George
2013-01-01
Analyzing time-course expression data captured in microarray datasets is a complex undertaking as the vast and complex data space is represented by a relatively low number of samples as compared to thousands of available genes. Here, we developed the Interdependent Correlation Clustering (ICC) method to analyze relationships that exist among genes conditioned on the expression of a specific target gene in microarray data. Based on Correlation Clustering, the ICC method analyzes a large set of correlation values related to gene expression profiles extracted from given microarray datasets. ICC can be applied to any microarray dataset and any target gene. We applied this method to microarray data generated from wine fermentations and selected NSF1, which encodes a C2H2 zinc finger-type transcription factor, as the target gene. The validity of the method was verified by accurate identifications of the previously known functional roles of NSF1. In addition, we identified and verified potential new functions for this gene; specifically, NSF1 is a negative regulator for the expression of sulfur metabolism genes, the nuclear localization of Nsf1 protein (Nsf1p) is controlled in a sulfur-dependent manner, and the transcription of NSF1 is regulated by Met4p, an important transcriptional activator of sulfur metabolism genes. The inter-disciplinary approach adopted here highlighted the accuracy and relevancy of the ICC method in mining for novel gene functions using complex microarray datasets with a limited number of samples. PMID:24130853
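The core operation underlying an analysis like ICC is screening genes by correlation with a chosen target's expression profile. The stdlib-only Python sketch below shows that screening step; the ICC method proper goes further (clustering the correlation structure conditioned on the target), and the gene names and threshold here are hypothetical:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlates_of(target, expression, threshold=0.8):
    """Genes whose profile correlates strongly (|r| >= threshold)
    with the target gene's profile."""
    t = expression[target]
    result = {}
    for gene, profile in expression.items():
        if gene == target:
            continue
        r = pearson(profile, t)
        if abs(r) >= threshold:
            result[gene] = r
    return result
```

Keeping the signed r distinguishes candidate activatees (positive) from candidate repressees (negative), which matters for a regulator such as NSF1.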
On the classification techniques in data mining for microarray data classification
NASA Astrophysics Data System (ADS)
Aydadenta, Husna; Adiwijaya
2018-03-01
Cancer is one of the deadliest diseases; according to WHO data, by 2015 it had caused 8.8 million deaths, a toll that will increase every year if cases are not detected earlier. Microarray data have become one of the most popular resources for cancer-identification studies in the health field, since they capture the expression levels of thousands of genes in a given cell sample simultaneously. Using data mining techniques, microarray samples can be classified as cancerous or not. In this paper we discuss research applying several data mining techniques to microarray data, including Support Vector Machine (SVM), Artificial Neural Network (ANN), Naive Bayes, k-Nearest Neighbor (kNN), and C4.5, together with a simulation of the Random Forest algorithm combined with dimensionality reduction using Relief. The results report the accuracy of each classification algorithm and show that Random Forest achieves higher accuracy than the other classifiers (SVM, ANN, Naive Bayes, kNN, and C4.5). It is hoped that this paper provides useful information about the speed, accuracy, performance and computational cost of each data mining classification technique applied to microarray data.
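As a rough illustration of the dimension-reduction-plus-Random-Forest pipeline the paper simulates: the scikit-learn stack, the synthetic data, and a univariate F-test filter standing in for Relief (which scikit-learn does not ship) are all assumptions here, not the authors' exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic "microarray": 60 samples x 2000 genes, two classes,
# with the first 20 genes carrying a real class signal.
X = rng.normal(size=(60, 2000))
y = np.repeat([0, 1], 30)
X[y == 1, :20] += 1.5

# Reduce dimension first, then classify with a Random Forest.
clf = make_pipeline(SelectKBest(f_classif, k=50),
                    RandomForestClassifier(n_estimators=200, random_state=0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(round(acc, 2))
```

The filter-then-classify structure is the point: with thousands of genes and few samples, reducing the feature space before fitting the ensemble limits over-fitting.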
Jain, K K
2001-02-01
Cambridge Healthtech Institute's Third Annual Conference on Lab-on-a-Chip and Microarray technology covered the latest advances in this technology and applications in life sciences. Highlights of the meetings are reported briefly with emphasis on applications in genomics, drug discovery and molecular diagnostics. There was an emphasis on microfluidics because of the wide applications in laboratory and drug discovery. The lab-on-a-chip provides the facilities of a complete laboratory in a hand-held miniature device. Several microarray systems have been used for hybridisation and detection techniques. Oligonucleotide scanning arrays provide a versatile tool for the analysis of nucleic acid interactions and provide a platform for improving the array-based methods for investigation of antisense therapeutics. A method for analysing combinatorial DNA arrays using oligonucleotide-modified gold nanoparticle probes and a conventional scanner has considerable potential in molecular diagnostics. Various applications of microarray technology for high-throughput screening in drug discovery and single nucleotide polymorphisms (SNP) analysis were discussed. Protein chips have important applications in proteomics. With the considerable amount of data generated by the different technologies using microarrays, it is obvious that the reading of the information and its interpretation and management through the use of bioinformatics is essential. Various techniques for data analysis were presented. Biochip and microarray technology has an essential role to play in the evolving trends in healthcare, which integrate diagnosis with prevention/treatment and emphasise personalised medicines.
BIOPHYSICAL PROPERTIES OF NUCLEIC ACIDS AT SURFACES RELEVANT TO MICROARRAY PERFORMANCE.
Rao, Archana N; Grainger, David W
2014-04-01
Both clinical and analytical metrics produced by microarray-based assay technology have recognized problems in reproducibility, reliability and analytical sensitivity. These issues are often attributed to poor understanding and control of nucleic acid behaviors and properties at solid-liquid interfaces. Nucleic acid hybridization, central to DNA and RNA microarray formats, depends on the properties and behaviors of single strand (ss) nucleic acids (e.g., probe oligomeric DNA) bound to surfaces. ssDNA's persistence length, radius of gyration, electrostatics, conformations on different surfaces and under various assay conditions, its chain flexibility and curvature, charging effects in ionic solutions, and fluorescent labeling all influence its physical chemistry and hybridization under assay conditions. Nucleic acid (e.g., both RNA and DNA) target interactions with immobilized ssDNA strands are highly impacted by these biophysical states. Furthermore, the kinetics, thermodynamics, and enthalpic and entropic contributions to DNA hybridization reflect global probe/target structures and interaction dynamics. Here we review several biophysical issues relevant to oligomeric nucleic acid molecular behaviors at surfaces and their influences on duplex formation that influence microarray assay performance. Correlation of biophysical aspects of single and double-stranded nucleic acids with their complexes in bulk solution is common. Such analysis at surfaces is not commonly reported, despite its importance to microarray assays. We seek to provide further insight into nucleic acid-surface challenges facing microarray diagnostic formats that have hindered their clinical adoption and compromise their research quality and value as genomics tools.
Khan, Haseeb Ahmad
2004-01-01
The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretations of microarray gene expression data. However, a convenient tool for two-group comparison of microarray data is still lacking, and users have to rely on commercial statistical packages that can be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, Pearson test and Mann-Whitney U test, have been reported for comparing microarray data, whereas the Wilcoxon signed-rank test, an appropriate test for two-group comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. Software validation showed similar outputs from ArraySolver and SPSS for large datasets, whereas the former program appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, a convenient report format, accurate statistics and the familiar Excel platform.
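A minimal example of the two-group comparison that ArraySolver automates, using SciPy's implementation of the Wilcoxon signed-rank test (the paired expression values below are illustrative, not from the paper):

```python
from scipy.stats import wilcoxon

# Paired expression values for one gene across matched samples
# (e.g. treated vs control); values are made up for illustration.
treated = [5.1, 6.0, 4.8, 7.2, 5.5, 6.3, 4.9, 5.8]
control = [4.2, 4.9, 5.0, 5.1, 4.4, 5.2, 4.1, 4.6]

stat, p = wilcoxon(treated, control)
print(stat, round(p, 3))
```

The test ranks the absolute paired differences and compares the positive and negative rank sums, which is why it suits small paired samples better than a t-test when normality cannot be assumed.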
Fish and chips: Various methodologies demonstrate utility of a 16,006-gene salmonid microarray
von Schalburg, Kristian R; Rise, Matthew L; Cooper, Glenn A; Brown, Gordon D; Gibbs, A Ross; Nelson, Colleen C; Davidson, William S; Koop, Ben F
2005-01-01
Background We have developed and fabricated a salmonid microarray containing cDNAs representing 16,006 genes. The genes spotted on the array have been stringently selected from Atlantic salmon and rainbow trout expressed sequence tag (EST) databases. The EST databases presently contain over 300,000 sequences from over 175 salmonid cDNA libraries derived from a wide variety of tissues and different developmental stages. In order to evaluate the utility of the microarray, a number of hybridization techniques and screening methods have been developed and tested. Results We have analyzed and evaluated the utility of a microarray containing 16,006 (16K) salmonid cDNAs in a variety of potential experimental settings. We quantified the amount of transcriptome binding that occurred in cross-species, organ complexity and intraspecific variation hybridization studies. We also developed a methodology to rapidly identify and confirm the contents of a bacterial artificial chromosome (BAC) library containing Atlantic salmon genomic DNA. Conclusion We validate and demonstrate the usefulness of the 16K microarray over a wide range of teleosts, even for transcriptome targets from species distantly related to salmonids. We show the potential of the use of the microarray in a variety of experimental settings through hybridization studies that examine the binding of targets derived from different organs and tissues. Intraspecific variation in transcriptome expression is evaluated and discussed. Finally, BAC hybridizations are demonstrated as a rapid and accurate means to identify gene content. PMID:16164747
Wu, Baolin
2006-02-15
Differential gene expression detection and sample classification using microarray data have received much research interest recently. Owing to the large number of genes p and small number of samples n (p > n), microarray data analysis poses big challenges for statistical analysis. An obvious problem in the 'large p, small n' setting is over-fitting: just by chance, we are likely to find some non-differentially expressed genes that can classify the samples very well. The idea of shrinkage is to regularize the model parameters to reduce the effects of noise and produce reliable inferences. Shrinkage has been successfully applied in microarray data analysis. The SAM statistics proposed by Tusher et al. and the 'nearest shrunken centroid' proposed by Tibshirani et al. are ad hoc shrinkage methods. Both methods are simple, intuitive and have proved useful in empirical studies. Recently Wu proposed penalized t/F-statistics with shrinkage by formally using L1-penalized linear regression models for two-class microarray data, showing good performance. In this paper we systematically discuss the use of penalized regression models for analyzing microarray data. We generalize the two-class penalized t/F-statistics proposed by Wu to multi-class microarray data. We formally derive the ad hoc shrunken centroid used by Tibshirani et al. using L1-penalized regression models, and we show that penalized linear regression models provide a rigorous and unified statistical framework for sample classification and differential gene expression detection.
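The nearest-shrunken-centroid classifier discussed above has a readily available implementation in scikit-learn; here is a small sketch on synthetic data (the data, class structure, and threshold value are my assumptions, not the authors'):

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
# Synthetic three-class "microarray": 90 samples x 500 genes;
# only the first 10 genes differ between classes.
X = rng.normal(size=(90, 500))
y = np.repeat([0, 1, 2], 30)
for c in (1, 2):
    X[y == c, :10] += c * 1.5

# Shrinking each class centroid toward the overall centroid zeroes out
# the contribution of noise genes, which is the shrinkage idea at work.
clf = NearestCentroid(shrink_threshold=0.5).fit(X, y)
acc = clf.score(X, y)
print(acc)
```

In practice the shrinkage threshold would be chosen by cross-validation, and the genes whose shrunken centroid deviations remain nonzero form the differentially expressed gene list.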
ArrayWiki: an enabling technology for sharing public microarray data repositories and meta-analyses
Stokes, Todd H; Torrance, JT; Li, Henry; Wang, May D
2008-01-01
Background A survey of microarray databases reveals that most of the repository contents and data models are heterogeneous (i.e., data obtained from different chip manufacturers), and that the repositories provide only basic biological keywords linking to PubMed. As a result, it is difficult to find datasets using research context or analysis-parameter information beyond a few keywords. For example, to reduce the "curse-of-dimension" problem in microarray analysis, the number of samples is often increased by merging array data from different datasets. Knowing chip data parameters such as pre-processing steps (e.g., normalization, artefact removal), and knowing any previous biological validation of the dataset, is essential due to the heterogeneity of the data. However, most of the microarray repositories do not have meta-data information in the first place, and do not have a mechanism to add or insert this information. Thus, there is a critical need to create "intelligent" microarray repositories that (1) enable update of meta-data with the raw array data, and (2) provide standardized archiving protocols to minimize bias from the raw data sources. Results To address the problems discussed, we have developed a community-maintained system called ArrayWiki that unites disparate meta-data of microarray meta-experiments from multiple primary sources with four key features. First, ArrayWiki provides a user-friendly knowledge management interface in addition to a programmable interface using standards developed by Wikipedia. Second, ArrayWiki includes automated quality control processes (caCORRECT) and novel visualization methods (BioPNG, Gel Plots), which provide extra information about data quality unavailable in other microarray repositories. Third, it provides a user-curation capability through the familiar Wiki interface. 
Fourth, ArrayWiki provides users with simple text-based searches across all experiment meta-data, and exposes data to search engine crawlers (Semantic Agents) such as Google to further enhance data discovery. Conclusions Microarray data and meta information in ArrayWiki are distributed and visualized using a novel and compact data storage format, BioPNG. Also, they are open to the research community for curation, modification, and contribution. By making a small investment of time to learn the syntax and structure common to all sites running MediaWiki software, domain scientists and practitioners can all contribute to make better use of microarray technologies in research and medical practices. ArrayWiki is available at . PMID:18541053
D'Arrigo, Stefano; Gavazzi, Francesco; Alfei, Enrico; Zuffardi, Orsetta; Montomoli, Cristina; Corso, Barbara; Buzzi, Erika; Sciacca, Francesca L; Bulgheroni, Sara; Riva, Daria; Pantaleoni, Chiara
2016-05-01
Microarray-based comparative genomic hybridization is a method of molecular analysis that identifies chromosomal anomalies (or copy number variants) that correlate with clinical phenotypes. The aim of the present study was to apply a clinical score previously designed by de Vries to 329 patients with intellectual disability/developmental delay referred to our tertiary center, and to see whether the clinical factors are associated with a positive outcome of aCGH analyses. Another goal was to test the association between a positive microarray-based comparative genomic hybridization result and the severity of intellectual disability/developmental delay. Microarray-based comparative genomic hybridization identified structural chromosomal alterations responsible for the intellectual disability/developmental delay phenotype in 16% of our sample. Our study showed that causative copy number variants are frequently found even in cases of mild intellectual disability (30.77%). We want to emphasize the need to conduct microarray-based comparative genomic hybridization on all individuals with intellectual disability/developmental delay, regardless of severity, because the degree of intellectual disability/developmental delay does not predict the diagnostic yield of microarray-based comparative genomic hybridization. © The Author(s) 2015.
Yu, Hualong; Hong, Shufang; Yang, Xibei; Ni, Jun; Dan, Yuanyuan; Qin, Bin
2013-01-01
DNA microarray technology can measure the activities of tens of thousands of genes simultaneously, which provides an efficient way to diagnose cancer at the molecular level. Although this strategy has attracted significant research attention, most studies neglect an important problem, namely, that most DNA microarray datasets are skewed, which causes traditional learning algorithms to produce inaccurate results. Some studies have considered this problem, yet they merely focus on the binary-class problem. In this paper, we dealt with the multiclass imbalanced classification problem, as encountered in cancer DNA microarray data, by using ensemble learning. We utilized a one-against-all coding strategy to transform the multiclass problem into multiple binary ones, each of them applying feature subspace, an evolving version of random subspace that generates multiple diverse training subsets. Next, we introduced one of two different correction technologies, namely, decision threshold adjustment or random undersampling, into each training subset to alleviate the damage caused by class imbalance. Specifically, support vector machine was used as the base classifier, and a novel voting rule called counter voting was presented for making a final decision. Experimental results on eight skewed multiclass cancer microarray datasets indicate that unlike many traditional classification approaches, our methods are insensitive to class imbalance.
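A rough sketch of the one-against-all decomposition with random undersampling described above. The synthetic data, the use of a linear SVC, and the margin-based decision (standing in for the paper's exact counter-voting rule) are all assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced three-class toy data: 100 / 30 / 10 samples, 50 features,
# with class means well separated so the sketch stays simple.
sizes = {0: 100, 1: 30, 2: 10}
X = np.vstack([rng.normal(loc=c * 2.0, size=(n, 50)) for c, n in sizes.items()])
y = np.concatenate([np.full(n, c) for c, n in sizes.items()])

def train_one_vs_all(X, y, cls):
    """Binary SVM for `cls` vs rest, with both sides undersampled to equal size."""
    pos = np.where(y == cls)[0]
    neg = np.where(y != cls)[0]
    n_each = min(len(pos), len(neg))  # random undersampling step
    pos = rng.choice(pos, size=n_each, replace=False)
    neg = rng.choice(neg, size=n_each, replace=False)
    idx = np.concatenate([pos, neg])
    return SVC(kernel="linear").fit(X[idx], (y[idx] == cls).astype(int))

models = {c: train_one_vs_all(X, y, c) for c in sizes}

def predict(x):
    # Pick the class whose one-vs-all model is most confident
    # (a stand-in for the paper's counter-voting rule).
    scores = {c: m.decision_function(x.reshape(1, -1))[0] for c, m in models.items()}
    return max(scores, key=scores.get)

preds = np.array([predict(x) for x in X])
acc = (preds == y).mean()
print(acc)
```

The key point is that each binary subproblem is rebalanced before training, so the minority classes are not swamped by the majority class.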
Watson, Christopher M.; Crinnion, Laura A.; Gurgel‐Gianetti, Juliana; Harrison, Sally M.; Daly, Catherine; Antanavicuite, Agne; Lascelles, Carolina; Markham, Alexander F.; Pena, Sergio D. J.; Bonthron, David T.
2015-01-01
Autozygosity mapping is a powerful technique for the identification of rare, autosomal recessive, disease‐causing genes. The ease with which this category of disease gene can be identified has greatly increased through the availability of genome‐wide SNP genotyping microarrays and subsequently of exome sequencing. Although these methods have simplified the generation of experimental data, its analysis, particularly when disparate data types must be integrated, remains time consuming. Moreover, the huge volume of sequence variant data generated from next generation sequencing experiments opens up the possibility of using these data instead of microarray genotype data to identify disease loci. To allow these two types of data to be used in an integrated fashion, we have developed AgileVCFMapper, a program that performs both the mapping of disease loci by SNP genotyping and the analysis of potentially deleterious variants using exome sequence variant data, in a single step. This method does not require microarray SNP genotype data, although analysis with a combination of microarray and exome genotype data enables more precise delineation of disease loci, due to superior marker density and distribution. PMID:26037133
Guo, Xi; Geng, Peng; Wang, Quan; Cao, Boyang; Liu, Bin
2014-10-01
Severe acute respiratory syndrome (SARS), a disease that spread widely in the world during late 2002 to 2004, severely threatened public health. Although there have been no reported infections since 2004, the extremely pathogenic SARS coronavirus (SARS-CoV), as the causative agent of SARS, has recently been identified in animals, showing the potential for the re-emergence of this disease. Previous studies showed that 27 single nucleotide polymorphism (SNP) mutations among the spike (S) gene of this virus are correlated closely with the SARS pathogenicity and epidemicity. We have developed a SNP DNA microarray in order to detect and genotype these SNPs, and to obtain related information on the pathogenicity and epidemicity of a given strain. The microarray was hybridized with PCR products amplified from cDNAs obtained from different SARS-CoV strains. We were able to detect 24 SNPs and determine the type of a given strain. The hybridization profile showed that 19 samples were detected and genotyped correctly by using our microarray, with 100% accuracy. Our microarray provides a novel method for the detection and epidemiological surveillance of SARS-CoV.
Automatic Identification and Quantification of Extra-Well Fluorescence in Microarray Images.
Rivera, Robert; Wang, Jie; Yu, Xiaobo; Demirkan, Gokhan; Hopper, Marika; Bian, Xiaofang; Tahsin, Tasnia; Magee, D Mitchell; Qiu, Ji; LaBaer, Joshua; Wallstrom, Garrick
2017-11-03
In recent studies involving NAPPA microarrays, extra-well fluorescence is used as a key measure for identifying disease biomarkers because there is evidence to support that it is better correlated with strong antibody responses than statistical analysis involving intraspot intensity. Because this feature is not well quantified by traditional image analysis software, identification and quantification of extra-well fluorescence is performed manually, which is both time-consuming and highly susceptible to variation between raters. A system that could automate this task efficiently and effectively would greatly improve the process of data acquisition in microarray studies, thereby accelerating the discovery of disease biomarkers. In this study, we experimented with different machine learning methods, as well as novel heuristics, for identifying spots exhibiting extra-well fluorescence (rings) in microarray images and assigning each ring a grade of 1-5 based on its intensity and morphology. The sensitivity of our final system for identifying rings was found to be 72% at 99% specificity and 98% at 92% specificity. Our system performs this task significantly faster than a human, while maintaining high performance, and therefore represents a valuable tool for microarray image analysis.
Development of a low-cost detection method for miRNA microarray.
Li, Wei; Zhao, Botao; Jin, Youxin; Ruan, Kangcheng
2010-04-01
MicroRNA (miRNA) microarray is a powerful tool to explore the expression profiling of miRNA. The current detection method used in miRNA microarray is mainly fluorescence based, which usually requires a costly detection system such as a laser confocal scanner costing tens of thousands of dollars. Recently, we developed a low-cost yet sensitive detection method for miRNA microarray based on an enzyme-linked assay. In this approach, the biotinylated miRNAs were captured by the corresponding oligonucleotide probes immobilized on the microarray slide, and the biotinylated miRNAs would then capture streptavidin-conjugated alkaline phosphatase. A purple-black precipitate on each biotinylated miRNA spot was produced by the enzyme catalytic reaction. It could be easily detected by a charge-coupled device digital camera mounted on a microscope, which lowers the detection cost more than 100-fold compared with the fluorescence method. Our data showed that the signal intensity of each spot correlates well with the biotinylated miRNA concentration; the detection limit for miRNAs is at least 0.4 fmol, and the detection dynamic range spans about 2.5 orders of magnitude, comparable to that of the fluorescence method.
Stephenson, Kathryn E.; Neubauer, George H.; Reimer, Ulf; ...
2014-11-14
An effective vaccine against human immunodeficiency virus type 1 (HIV-1) will have to provide protection against a vast array of different HIV-1 strains. Current methods to measure HIV-1-specific binding antibodies following immunization typically focus on determining the magnitude of antibody responses, but the epitope diversity of antibody responses has remained largely unexplored. Here we describe the development of a global HIV-1 peptide microarray that contains 6564 peptides from across the HIV-1 proteome and covers the majority of HIV-1 sequences in the Los Alamos National Laboratory global HIV-1 sequence database. Using this microarray, we quantified the magnitude, breadth, and depth of IgG binding to linear HIV-1 sequences in HIV-1-infected humans and HIV-1-vaccinated humans, rhesus monkeys and guinea pigs. The microarray measured potentially important differences in antibody epitope diversity, particularly regarding the depth of epitope variants recognized at each binding site. Our data suggest that the global HIV-1 peptide microarray may be a useful tool for both preclinical and clinical HIV-1 research.
Fluorescent labeling of NASBA amplified tmRNA molecules for microarray applications
Scheler, Ott; Glynn, Barry; Parkel, Sven; Palta, Priit; Toome, Kadri; Kaplinski, Lauris; Remm, Maido; Maher, Majella; Kurg, Ants
2009-01-01
Background Here we present a novel, promising microbial diagnostic method that combines the sensitivity of Nucleic Acid Sequence Based Amplification (NASBA) with the high information content of microarray technology for the detection of bacterial tmRNA molecules. The NASBA protocol was modified to include aminoallyl-UTP (aaUTP) molecules that were incorporated into nascent RNA during the NASBA reaction. Post-amplification labeling with fluorescent dye was carried out subsequently, and tmRNA hybridization signal intensities were measured using microarray technology. Significant optimization of the labeled NASBA protocol was required to maintain the required sensitivity of the reactions. Results Two different aaUTP salts were evaluated and optimum final concentrations were identified for both. The final 2 mM concentration of the aaUTP Li-salt in the NASBA reaction resulted in the highest microarray signals overall, being twice as high as the strongest signals with 1 mM aaUTP Na-salt. Conclusion We have successfully demonstrated efficient combination of NASBA amplification technology with microarray-based hybridization detection. The method is applicable to many different areas of microbial diagnostics, including environmental monitoring, biothreat detection, industrial process monitoring and clinical microbiology. PMID:19445684
Shin, Hwa Hui; Seo, Jeong Hyun; Kim, Chang Sup; Hwang, Byeong Hee; Cha, Hyung Joon
2016-05-15
Life-threatening diarrheal cholera is usually caused by water or food contaminated with cholera toxin-producing Vibrio cholerae. For the prevention and surveillance of cholera, it is crucial to rapidly and precisely detect and identify the etiological causes, such as V. cholerae and/or its toxin. In the present work, we propose the use of a hybrid double biomolecular marker (DBM) microarray containing 16S rRNA-based DNA capture probe to genotypically identify V. cholerae and GM1 pentasaccharide capture probe to phenotypically detect cholera toxin. We employed a simple sample preparation method to directly obtain genomic DNA and secreted cholera toxin as target materials from bacterial cells. By utilizing the constructed DBM microarray and prepared samples, V. cholerae and cholera toxin were detected successfully, selectively, and simultaneously; the DBM microarray was able to analyze the pathogenicity of the identified V. cholerae regardless of whether the bacteria produces toxin. Therefore, our proposed DBM microarray is a new effective platform for identifying bacteria and analyzing bacterial pathogenicity simultaneously. Copyright © 2015 Elsevier B.V. All rights reserved.
See what you eat--broad GMO screening with microarrays.
von Götz, Franz
2010-03-01
Despite the controversy over whether genetically modified organisms (GMOs) are beneficial or harmful for humans, animals, and/or ecosystems, the number of cultivated GMOs is increasing every year. Many countries and federations have implemented safety and surveillance systems for GMOs. Potent testing technologies need to be developed and implemented to monitor the increasing number of GMOs. First, these GMO tests need to be comprehensive, i.e., they should detect all, or at least the most important, GMOs on the market. This type of GMO screening requires a high degree of parallel testing, or multiplexing. To date, DNA microarrays offer the highest degree of multiplexing when nucleic acids are analyzed. This trend article focuses on the evolution of DNA microarrays for GMO testing. Over the last 7 years, combinations of multiplex PCR detection and microarray detection have been developed to qualitatively assess the presence of GMOs. One example is the commercially available DualChip GMO (Eppendorf, Germany; http://www.eppendorf-biochip.com), which is the only GMO screening system successfully validated in a multicenter study. With the use of innovative amplification techniques, promising steps have recently been taken to make GMO detection with microarrays quantitative.
Genome Consortium for Active Teaching: Meeting the Goals of BIO2010
Campbell, A. Malcolm; Ledbetter, Mary Lee S.; Hoopes, Laura L.M.; Eckdahl, Todd T.; Heyer, Laurie J.; Rosenwald, Anne; Fowlks, Edison; Tonidandel, Scott; Bucholtz, Brooke; Gottfried, Gail
2007-01-01
The Genome Consortium for Active Teaching (GCAT) facilitates the use of modern genomics methods in undergraduate education. Initially focused on microarray technology, but with an eye toward diversification, GCAT is a community working to improve the education of tomorrow's life science professionals. GCAT participants have access to affordable microarrays, microarray scanners, free software for data analysis, and faculty workshops. Microarrays provided by GCAT have been used by 141 faculty on 134 campuses, including 21 faculty that serve large numbers of underrepresented minority students. An estimated 9480 undergraduates a year will have access to microarrays by 2009 as a direct result of GCAT faculty workshops. Gains for students include significantly improved comprehension of topics in functional genomics and increased interest in research. Faculty reported improved access to new technology and gains in understanding thanks to their involvement with GCAT. GCAT's network of supportive colleagues encourages faculty to explore genomics through student research and to learn a new and complex method with their undergraduates. GCAT is meeting important goals of BIO2010 by making research methods accessible to undergraduates, training faculty in genomics and bioinformatics, integrating mathematics into the biology curriculum, and increasing participation by underrepresented minority students. PMID:17548873
Detection of Multiple Waterborne Pathogens Using Microsequencing Arrays
Aims: A microarray was developed to simultaneously detect Cryptosporidium parvum, Cryptosporidium hominis, Enterococcus faecium, Bacillus anthracis and Francisella tularensis in water. Methods and Results: A DNA microarray was designed to contain probes that specifically dete...
A Novel Continuation Power Flow Method Based on Line Voltage Stability Index
NASA Astrophysics Data System (ADS)
Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan
2018-01-01
A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the parameterized lines, and it is continuously updated as the load on the parameterized lines changes. The calculation stages of the continuation power flow are determined by the angle changes of the direction vector of the prediction equation. An adaptive step-length control strategy is then used to compute the next prediction direction and step size according to the calculation stage. The proposed method has a clear physical interpretation and high computing speed, and it accounts for the local characteristics of voltage instability, revealing the weak nodes and weak areas in a power system. Because the PV curves are traced more completely, the method offers advantages in analysing the voltage stability margin of large-scale power grids.
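The adaptive step-length control described in this abstract can be illustrated with a generic continuation-method heuristic. This sketch is not the authors' exact strategy; the iteration thresholds and step bounds are illustrative assumptions:

```python
def next_step(step, corrector_iterations, fast=3, slow=8,
              s_min=0.01, s_max=0.2):
    """Generic adaptive step-length rule for a continuation power flow:
    enlarge the continuation step when the corrector converges quickly
    (flat part of the PV curve), shrink it when convergence is slow
    (near the nose point)."""
    if corrector_iterations <= fast:
        return min(step * 2.0, s_max)
    if corrector_iterations >= slow:
        return max(step * 0.5, s_min)
    return step
```

For example, a step of 0.05 that converged in 2 corrector iterations would be doubled to 0.1, while one that needed 10 iterations would be halved.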
NASA Technical Reports Server (NTRS)
Bryan, W. B.
1976-01-01
Apollo 15 photographs of the southern parts of Serenitatis and Imbrium were used for a study of the morphology and distribution of wrinkle ridges. Volcanic and structural features along the south margin of Serenitatis were also studied, including the Dawes basalt cinder cones. Volcanic and structural features in crater Aitken were investigated as well. Study of crater Goclenius showed a close relationship between morphology of the impact crater and grabens which tend to parallel directions of the lunar grid. Similar trends were observed in the walls of crater Tsiolkovsky and other linear structures. Small craters of possible volcanic origin were also studied. Possible cinder cones were found associated with the Dawes basalt and in the floor of craters Aitken and Goclenius. Small pit craters were observed in the floors of these craters. Attempts were made to obtain contour maps of specific small features and to compare Orbiter and Apollo photographs to determine short term changes associated with other processes.
Inversion for the driving forces of plate tectonics
NASA Technical Reports Server (NTRS)
Richardson, R. M.
1983-01-01
Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.
Lagrangian Assimilation of Satellite Data for Climate Studies in the Arctic
NASA Technical Reports Server (NTRS)
Lindsay, Ronald W.; Zhang, Jin-Lun; Stern, Harry
2004-01-01
Under this grant we have developed and tested a new Lagrangian model of sea ice. A Lagrangian model keeps track of material parcels as they drift in the model domain. Besides providing a natural framework for the assimilation of Lagrangian data, it has other advantages: 1) a model that follows material elements is well suited for a medium such as sea ice in which an element retains its identity for a long period of time; 2) model cells can be added or dropped as needed, allowing the spatial resolution to be increased in areas of high variability or dense observations; 3) ice from particular regions, such as the marginal seas, can be marked and traced for a long time; and 4) slip lines in the ice motion are accommodated more naturally because there is no internal grid. Our work makes use of these strengths of the Lagrangian formulation.
Strauss, Christian; Endimiani, Andrea; Perreten, Vincent
2015-01-01
A rapid and simple DNA labeling system has been developed for disposable microarrays and has been validated for the detection of 117 antibiotic resistance genes abundant in Gram-positive bacteria. The DNA was fragmented and amplified using phi29 polymerase and random primers with linkers. Labeling and further amplification were then performed by classic PCR amplification using biotinylated primers specific for the linkers. The microarray developed by Perreten et al. (Perreten, V., Vorlet-Fawer, L., Slickers, P., Ehricht, R., Kuhnert, P., Frey, J., 2005. Microarray-based detection of 90 antibiotic resistance genes of gram-positive bacteria. J. Clin. Microbiol. 43, 2291-2302.) was improved by additional oligonucleotides. A total of 244 oligonucleotides (26 to 37 nucleotides in length and with similar melting temperatures) were spotted on the microarray, including genes conferring resistance to clinically important antibiotic classes like β-lactams, macrolides, aminoglycosides, glycopeptides and tetracyclines. Each antibiotic resistance gene is represented by at least 2 oligonucleotides designed from consensus sequences of gene families. The specificity of the oligonucleotides and the quality of the amplification and labeling were verified by analysis of a collection of 65 strains belonging to 24 species. Association between genotype and phenotype was verified for 6 antibiotics using 77 Staphylococcus strains belonging to different species and revealed 95% test specificity and a 93% predictive value of a positive test. The DNA labeling and amplification are independent of the species and of the target genes and could be used for different types of microarrays. This system also has the advantage of detecting several genes within one bacterium at once, as in Staphylococcus aureus strain BM3318, in which up to 15 genes were detected.
This new microarray-based detection system offers great potential for applications in clinical diagnostics, basic research, food safety and surveillance programs for antimicrobial resistance. Copyright © 2014 Elsevier B.V. All rights reserved.
Howat, William J; Blows, Fiona M; Provenzano, Elena; Brook, Mark N; Morris, Lorna; Gazinska, Patrycja; Johnson, Nicola; McDuffus, Leigh‐Anne; Miller, Jodi; Sawyer, Elinor J; Pinder, Sarah; van Deurzen, Carolien H M; Jones, Louise; Sironen, Reijo; Visscher, Daniel; Caldas, Carlos; Daley, Frances; Coulson, Penny; Broeks, Annegien; Sanders, Joyce; Wesseling, Jelle; Nevanlinna, Heli; Fagerholm, Rainer; Blomqvist, Carl; Heikkilä, Päivi; Ali, H Raza; Dawson, Sarah‐Jane; Figueroa, Jonine; Lissowska, Jolanta; Brinton, Louise; Mannermaa, Arto; Kataja, Vesa; Kosma, Veli‐Matti; Cox, Angela; Brock, Ian W; Cross, Simon S; Reed, Malcolm W; Couch, Fergus J; Olson, Janet E; Devillee, Peter; Mesker, Wilma E; Seyaneve, Caroline M; Hollestelle, Antoinette; Benitez, Javier; Perez, Jose Ignacio Arias; Menéndez, Primitiva; Bolla, Manjeet K; Easton, Douglas F; Schmidt, Marjanka K; Pharoah, Paul D; Sherman, Mark E
2014-01-01
Abstract Breast cancer risk factors and clinical outcomes vary by tumour marker expression. However, individual studies often lack the power required to assess these relationships, and large‐scale analyses are limited by the need for high throughput, standardized scoring methods. To address these limitations, we assessed whether automated image analysis of immunohistochemically stained tissue microarrays can permit rapid, standardized scoring of tumour markers from multiple studies. Tissue microarray sections prepared in nine studies containing 20 263 cores from 8267 breast cancers stained for two nuclear (oestrogen receptor, progesterone receptor), two membranous (human epidermal growth factor receptor 2 and epidermal growth factor receptor) and one cytoplasmic (cytokeratin 5/6) marker were scanned as digital images. Automated algorithms were used to score markers in tumour cells using the Ariol system. We compared automated scores against visual reads, and their associations with breast cancer survival. Approximately 65–70% of tissue microarray cores were satisfactory for scoring. Among satisfactory cores, agreement between dichotomous automated and visual scores was highest for oestrogen receptor (Kappa = 0.76), followed by human epidermal growth factor receptor 2 (Kappa = 0.69) and progesterone receptor (Kappa = 0.67). Automated quantitative scores for these markers were associated with hazard ratios for breast cancer mortality in a dose‐response manner. Considering visual scores of epidermal growth factor receptor or cytokeratin 5/6 as the reference, automated scoring achieved excellent negative predictive value (96–98%), but yielded many false positives (positive predictive value = 30–32%). For all markers, we observed substantial heterogeneity in automated scoring performance across tissue microarrays. 
Automated analysis is a potentially useful tool for large‐scale, quantitative scoring of immunohistochemically stained tissue microarrays available in consortia. However, continued optimization, rigorous marker‐specific quality control measures and standardization of tissue microarray designs, staining and scoring protocols are needed to enhance results. PMID:27499890
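For readers unfamiliar with the agreement statistics quoted above, Cohen's kappa and the predictive values can be computed from a 2×2 confusion matrix of automated versus visual calls. A minimal sketch, with illustrative counts rather than the study's actual data:

```python
def agreement_stats(tp, fp, fn, tn):
    """Cohen's kappa plus predictive values for dichotomous
    automated-vs-visual marker scores (visual read as the reference).
    tp/fp/fn/tn are cell counts of the 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # Chance agreement: sum over classes of the product of the marginals.
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    ppv = tp / (tp + fp)  # positive predictive value
    npv = tn / (tn + fn)  # negative predictive value
    return kappa, ppv, npv
```

With symmetric illustrative counts of 40/10/10/40, observed agreement is 0.8, chance agreement is 0.5, and kappa works out to 0.6.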
Khan, Rishi L; Gonye, Gregory E; Gao, Guang; Schwaber, James S
2006-01-01
Background Using microarrays by co-hybridizing two samples labeled with different dyes enables differential gene expression measurements and comparisons across slides while controlling for within-slide variability. Typically one dye produces weaker signal intensities than the other often causing signals to be undetectable. In addition, undetectable spots represent a large problem for two-color microarray designs and most arrays contain at least 40% undetectable spots even when labeled with reference samples such as Stratagene's Universal Reference RNAs™. Results We introduce a novel universal reference sample that produces strong signal for all spots on the array, increasing the average fraction of detectable spots to 97%. Maximizing detectable spots on the reference image channel also decreases the variability of microarray data allowing for reliable detection of smaller differential gene expression changes. The reference sample is derived from sequence contained in the parental EST clone vector pT7T3D-Pac and is called vector RNA (vRNA). We show that vRNA can also be used for quality control of microarray printing and PCR product quality, detection of hybridization anomalies, and simplification of spot finding and segmentation tasks. This reference sample can be made inexpensively in large quantities as a renewable resource that is consistent across experiments. Conclusion Results of this study show that vRNA provides a useful universal reference that yields high signal for almost all spots on a microarray, reduces variation and allows for comparisons between experiments and laboratories. Further, it can be used for quality control of microarray printing and PCR product quality, detection of hybridization anomalies, and simplification of spot finding and segmentation tasks. This type of reference allows for detection of small changes in differential expression while reference designs in general allow for large-scale multivariate experimental designs. 
vRNA in combination with reference designs enables systems biology microarray experiments that probe small, physiologically relevant changes. PMID:16677381
Grubaugh, Nathan D.; McMenamy, Scott S.; Turell, Michael J.; Lee, John S.
2013-01-01
Background Arthropod-borne viruses are important emerging pathogens world-wide. Viruses transmitted by mosquitoes, such as dengue, yellow fever, and Japanese encephalitis viruses, infect hundreds of millions of people and animals each year. Global surveillance of these viruses in mosquito vectors using molecular based assays is critical for prevention and control of the associated diseases. Here, we report an oligonucleotide DNA microarray design, termed ArboChip5.1, for multi-gene detection and identification of mosquito-borne RNA viruses from the genera Flavivirus (family Flaviviridae), Alphavirus (Togaviridae), Orthobunyavirus (Bunyaviridae), and Phlebovirus (Bunyaviridae). Methodology/Principal Findings The assay utilizes targeted PCR amplification of three genes from each virus genus for electrochemical detection on a portable, field-tested microarray platform. Fifty-two viruses propagated in cell-culture were used to evaluate the specificity of the PCR primer sets and the ArboChip5.1 microarray capture probes. The microarray detected all of the tested viruses and differentiated between many closely related viruses such as members of the dengue, Japanese encephalitis, and Semliki Forest virus clades. Laboratory infected mosquitoes were used to simulate field samples and to determine the limits of detection. Additionally, we identified dengue virus type 3, Japanese encephalitis virus, Tembusu virus, Culex flavivirus, and a Quang Binh-like virus from mosquitoes collected in Thailand in 2011 and 2012. Conclusions/Significance We demonstrated that the described assay can be utilized in a comprehensive field surveillance program by the broad-range amplification and specific identification of arboviruses from infected mosquitoes. Furthermore, the microarray platform can be deployed in the field and viral RNA extraction to data analysis can occur in as little as 12 h. 
The information derived from the ArboChip5.1 microarray can help to establish public health priorities, detect disease outbreaks, and evaluate control programs. PMID:23967358
Ai, Lin; Chen, Jun-Hu; Chen, Shao-Hong; Zhang, Yong-Nian; Cai, Yu-Chun; Zhu, Xing-Quan; Zhou, Xiao-Nong
2012-01-01
Background Food-borne helminthiases (FBHs) have become increasingly important due to frequent occurrence and worldwide distribution. There is increasing demand for developing more sensitive, high-throughput techniques for the simultaneous detection of multiple parasitic diseases due to limitations in differential clinical diagnosis of FBHs with similar symptoms. These infections are difficult to diagnose correctly by conventional diagnostic approaches including serological approaches. Methodology/Principal Findings In this study, antigens obtained from 5 parasite species, namely Cysticercus cellulosae, Angiostrongylus cantonensis, Paragonimus westermani, Trichinella spiralis and Spirometra sp., were semi-purified after immunoblotting. Sera from 365 human cases of helminthiasis and 80 healthy individuals were assayed with semi-purified antigens by both a protein microarray and the enzyme-linked immunosorbent assay (ELISA). The sensitivity, specificity and simplicity of each test for the end-user were evaluated. The specificity of the tests ranged from 97.0% (95% confidence interval (CI): 95.3–98.7%) to 100.0% (95% CI: 100.0%) in the protein microarray and from 97.7% (95% CI: 96.2–99.2%) to 100.0% (95% CI: 100.0%) in ELISA. The sensitivity varied from 85.7% (95% CI: 75.1–96.3%) to 92.1% (95% CI: 83.5–100.0%) in the protein microarray, while the corresponding values for ELISA were 82.0% (95% CI: 71.4–92.6%) to 92.1% (95% CI: 83.5–100.0%). Furthermore, the Youden index spanned from 0.83 to 0.92 in the protein microarray and from 0.80 to 0.92 in ELISA. For each parasite, the Youden index from the protein microarray was often slightly higher than the one from ELISA even though the same antigen was used. Conclusions/Significance The protein microarray platform is a convenient, versatile, high-throughput method that can easily be adapted to massive FBH screening. PMID:23209851
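The Youden index reported above combines sensitivity and specificity into a single figure of merit. A minimal sketch; the input values below are illustrative, drawn from the ranges quoted in the abstract:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1.
    J = 0 means the test is no better than chance; J = 1 is a perfect test."""
    return sensitivity + specificity - 1.0

# Illustrative values within the ranges reported for the protein microarray:
j = youden_index(0.921, 0.970)
```

With sensitivity 92.1% and specificity 97.0%, J is about 0.89, consistent with the 0.83 to 0.92 span quoted for the protein microarray.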
A remark on copy number variation detection methods.
Li, Shuo; Dou, Xialiang; Gao, Ruiqi; Ge, Xinzhou; Qian, Minping; Wan, Lin
2018-01-01
Copy number variations (CNVs) are gains and losses of DNA sequence in a genome. High-throughput platforms such as microarrays and next-generation sequencing (NGS) technologies have been applied to the genome-wide detection of copy number losses. Although progress has been made in both approaches, the accuracy and consistency of CNV calling from the two platforms remain in dispute. In this study, we perform a deep analysis of copy number losses in 254 human DNA samples, which have both SNP microarray data and NGS data publicly available from the HapMap Project and the 1000 Genomes Project, respectively. We show that the copy number losses reported by the HapMap Project and the 1000 Genomes Project have only < 30% overlap, even though these reports are required by their corresponding projects to have cross-platform (e.g. PCR, microarray and high-throughput sequencing) experimental support and state-of-the-art calling methods were employed. On the other hand, almost all of the copy number losses found directly from HapMap microarray data by an accurate algorithm, CNVhac, have lower read mapping depth in the NGS data; furthermore, 88% of them are supported by sequences with breakpoints in the NGS data. Our results support the ability of microarrays to call CNVs and suggest that the requirement of additional cross-platform support may introduce false negatives. The inconsistency of the CNV reports from the HapMap Project and the 1000 Genomes Project might result from the inadequate information contained in microarray data, inconsistent detection criteria, or the filtering effect of cross-platform support. Statistical tests on the CNVs called by CNVhac show that microarray data can offer reliable CNV reports, and the majority of CNV candidates can be confirmed by raw sequences.
Therefore, the CNV candidates given by a good caller can be highly reliable without cross-platform support, so additional experimental validation should be applied as needed rather than as a blanket requirement.
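Overlap between two CNV call sets, as in the cross-platform comparison above, is commonly judged with a reciprocal-overlap criterion. A minimal sketch; the 50% threshold is a widespread convention in CNV benchmarking, not necessarily the exact criterion used by these projects:

```python
def reciprocal_overlap(a, b, threshold=0.5):
    """True if genomic intervals a and b, each given as (start, end),
    share at least `threshold` of *both* their lengths with each other."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    shared = max(0, end - start)
    return (shared >= threshold * (a[1] - a[0])
            and shared >= threshold * (b[1] - b[0]))
```

Two calls that overlap by half their length on both sides are counted as the same event; a short call buried inside a much longer one is not, which is one reason call-set concordance figures can look low.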
Equalizer reduces SNP bias in Affymetrix microarrays.
Quigley, David
2015-07-30
Gene expression microarrays measure the levels of messenger ribonucleic acid (mRNA) in a sample using probe sequences that hybridize with transcribed regions. These probe sequences are designed using a reference genome for the relevant species. However, most model organisms and all humans have genomes that deviate from their reference. These variations, which include single nucleotide polymorphisms, insertions of additional nucleotides, and nucleotide deletions, can affect the microarray's performance. Genetic experiments comparing individuals bearing different population-associated single nucleotide polymorphisms that intersect microarray probes are therefore subject to systematic bias, as the reduction in binding efficiency due to a technical artifact is confounded with genetic differences between parental strains. This problem has been recognized for some time, and earlier methods of compensation have attempted to identify probes affected by genome variants using statistical models. These methods may require replicate microarray measurement of gene expression in the relevant tissue in inbred parental samples, which are not always available in model organisms and are never available in humans. By using sequence information for the genomes of organisms under investigation, potentially problematic probes can now be identified a priori. However, there is no published software tool that makes it easy to eliminate these probes from an annotation. I present equalizer, a software package that uses genome variant data to modify annotation files for the commonly used Affymetrix IVT and Gene/Exon platforms. These files can be used by any microarray normalization method for subsequent analysis. I demonstrate how use of equalizer on experiments mapping germline influence on gene expression in a genetic cross between two divergent mouse species and in human samples significantly reduces probe hybridization-induced bias, reducing false positive and false negative findings.
The equalizer package reduces probe hybridization bias from experiments performed on the Affymetrix microarray platform, allowing accurate assessment of germline influence on gene expression.
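The a priori probe-removal idea described in this abstract can be sketched as an interval check of probe coordinates against known variant positions. The data structures here are hypothetical simplifications, not equalizer's actual annotation formats:

```python
import bisect

def drop_variant_probes(probes, variant_positions):
    """Keep only probes whose [start, end) target interval contains no
    known genome variant (SNP or indel position).
    `probes` is a list of (name, start, end) tuples on one chromosome."""
    variants = sorted(variant_positions)
    kept = []
    for name, start, end in probes:
        # First variant at or beyond the probe start; if it falls before
        # the probe end, the probe overlaps a variant and is dropped.
        i = bisect.bisect_left(variants, start)
        if i == len(variants) or variants[i] >= end:
            kept.append(name)
    return kept
```

Sorting the variants once and binary-searching per probe keeps the filter fast even for genome-scale variant lists.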
Evaluation of artificial time series microarray data for dynamic gene regulatory network inference.
Xenitidis, P; Seimenis, I; Kakolyris, S; Adamopoulos, A
2017-08-07
High-throughput technology like microarrays is widely used in the inference of gene regulatory networks (GRNs). We focused on time series data since we are interested in the dynamics of GRNs and the identification of dynamic networks. We evaluated the amount of information that exists in artificial time series microarray data and the ability of an inference process to produce accurate models based on them. We used dynamic artificial gene regulatory networks in order to create artificial microarray data. Key features that characterize microarray data such as the time separation of directly triggered genes, the percentage of directly triggered genes and the triggering function type were altered in order to reveal the limits that are imposed by the nature of microarray data on the inference process. We examined the effect of various factors on the inference performance such as the network size, the presence of noise in microarray data, and the network sparseness. We used a system theory approach and examined the relationship between the pole placement of the inferred system and the inference performance. We examined the relationship between the inference performance in the time domain and the true system parameter identification. Simulation results indicated that time separation and the percentage of directly triggered genes are crucial factors. Also, network sparseness, the triggering function type and noise in input data affect the inference performance. When two factors were simultaneously varied, it was found that variation of one parameter significantly affects the dynamic response of the other. Crucial factors were also examined using a real GRN and acquired results confirmed simulation findings with artificial data. Different initial conditions were also used as an alternative triggering approach. Relevant results confirmed that the number of datasets constitutes the most significant parameter with regard to the inference performance. 
Copyright © 2017 Elsevier Ltd. All rights reserved.
Contact printing of protein microarrays.
Austin, John; Holway, Antonia H
2011-01-01
A review is provided of contact-printing technologies for the fabrication of planar protein microarrays. The key printing performance parameters for creating protein arrays are reviewed. Solid pin and quill pin technologies are described and their strengths and weaknesses compared.
Microfluidic extraction and microarray detection of biomarkers from cancer tissue slides
NASA Astrophysics Data System (ADS)
Nguyen, H. T.; Dupont, L. N.; Jean, A. M.; Géhin, T.; Chevolot, Y.; Laurenceau, E.; Gijs, M. A. M.
2018-03-01
We report here a new microfluidic method allowing for the quantification of human epidermal growth factor receptor 2 (HER2) expression levels from formalin-fixed breast cancer tissues. After partial extraction of proteins from the tissue slide, the extract is routed to an antibody (Ab) microarray for HER2 titration by fluorescence. Then the HER2-expressing cell area is evaluated by immunofluorescence (IF) staining of the tissue slide and used to normalize the fluorescent HER2 signal measured from the Ab microarray. The number of HER2 gene copies measured by fluorescence in situ hybridization (FISH) on an adjacent tissue slide is concordant with the normalized HER2 expression signal. This work is the first study implementing biomarker extraction and detection from cancer tissue slides using microfluidics in combination with a microarray system, paving the way for further developments towards multiplex and precise quantification of cancer biomarkers.
Glycan microarray screening assay for glycosyltransferase specificities.
Peng, Wenjie; Nycholat, Corwin M; Razi, Nahid
2013-01-01
Glycan microarrays represent a high-throughput approach to determining the specificity of glycan-binding proteins against a large set of glycans in a single format. This chapter describes the use of a glycan microarray platform for evaluating the activity and substrate specificity of glycosyltransferases (GTs). The methodology allows simultaneous screening of hundreds of immobilized glycan acceptor substrates by in situ incubation of a GT and its appropriate donor substrate on the microarray surface. Using biotin-conjugated donor substrate enables direct detection of the incorporated sugar residues on acceptor substrates on the array. In addition, the feasibility of the method has been validated using label-free donor substrate combined with lectin-based detection of product to assess enzyme activity. Here, we describe the application of both procedures to assess the specificity of a recombinant human α2-6 sialyltransferase. This technique is readily adaptable to studying other glycosyltransferases.
The Use of Atomic Force Microscopy for 3D Analysis of Nucleic Acid Hybridization on Microarrays.
Dubrovin, E V; Presnova, G V; Rubtsova, M Yu; Egorov, A M; Grigorenko, V G; Yaminsky, I V
2015-01-01
Oligonucleotide microarrays are considered today to be one of the most efficient methods of gene diagnostics. The capability of atomic force microscopy (AFM) to characterize the three-dimensional morphology of single molecules on a surface allows one to use it as an effective tool for the 3D analysis of a microarray for the detection of nucleic acids. The high resolution of AFM offers ways to decrease the detection threshold of target DNA and increase the signal-to-noise ratio. In this work, we suggest an approach to the evaluation of the results of hybridization of gold nanoparticle-labeled nucleic acids on silicon microarrays based on an AFM analysis of the surface, both in air and in liquid, which takes into account their three-dimensional structure. We suggest a quantitative measure of the hybridization results which is based on the fraction of the surface area occupied by the nanoparticles.
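The quantitative measure proposed above, the fraction of surface area occupied by nanoparticles, reduces to a simple pixel count once the AFM topograph has been thresholded into a binary mask. A sketch under that assumption; segmenting the AFM image into nanoparticle and background pixels is the hard part and is not shown:

```python
def nanoparticle_area_fraction(mask):
    """Fraction of pixels flagged as nanoparticle in a binarized AFM image.
    `mask` is a 2-D list of 0/1 values obtained by height thresholding."""
    pixels = [px for row in mask for px in row]
    return sum(pixels) / len(pixels)
```

For a 2×2 mask with one nanoparticle pixel, the occupied fraction is 0.25.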
A Versatile Microarray Platform for Capturing Rare Cells
NASA Astrophysics Data System (ADS)
Brinkmann, Falko; Hirtz, Michael; Haller, Anna; Gorges, Tobias M.; Vellekoop, Michael J.; Riethdorf, Sabine; Müller, Volkmar; Pantel, Klaus; Fuchs, Harald
2015-10-01
Analyses of rare events occurring at extremely low frequencies in body fluids are still challenging. We established a versatile microarray-based platform able to capture single target cells from large background populations. As a use case we chose the challenging application of detecting circulating tumor cells (CTCs) - about one cell in a billion normal blood cells. After incubation with an antibody cocktail, targeted cells are extracted on a microarray in a microfluidic chip. The accessibility of our platform allows for subsequent recovery of targets for further analysis. The microarray facilitates exclusion of false positive capture events by co-localization, allowing for detection without fluorescent labelling. Analyzing blood samples from cancer patients with our platform matched and partly exceeded gold-standard performance, demonstrating feasibility for clinical application. The clinical researcher's free choice of antibody cocktail, without any need for altered chip manufacturing or incubation protocols, allows virtually arbitrary targeting of capture species and therefore widespread applications in the biomedical sciences.
Mining microarrays for metabolic meaning: nutritional regulation of hypothalamic gene expression.
Mobbs, Charles V; Yen, Kelvin; Mastaitis, Jason; Nguyen, Ha; Watson, Elizabeth; Wurmbach, Elisa; Sealfon, Stuart C; Brooks, Andrew; Salton, Stephen R J
2004-06-01
DNA microarray analysis has been used to investigate relative changes in the level of gene expression in the CNS, including changes that are associated with disease, injury, psychiatric disorders, drug exposure or withdrawal, and memory formation. We have used oligonucleotide microarrays to identify hypothalamic genes that respond to nutritional manipulation. In addition to commonly used microarray analysis based on criteria such as fold-regulation, we have also found that simply carrying out multiple t tests then sorting by P value constitutes a highly reliable method to detect true regulation, as assessed by real-time polymerase chain reaction (PCR), even for relatively low abundance genes or relatively low magnitude of regulation. Such analyses directly suggested novel mechanisms that mediate effects of nutritional state on neuroendocrine function and are being used to identify regulated gene products that may elucidate the metabolic pathology of obese ob/ob, lean Vgf-/Vgf-, and other models with profound metabolic impairments.
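The "multiple t tests, then sort by P value" strategy described above can be sketched as follows. To stay dependency-free, this version ranks genes by |t| rather than by P value, which yields the same ordering when the degrees of freedom are equal across genes; the gene names and expression values are illustrative:

```python
from statistics import mean, variance

def t_stat(a, b):
    """Two-sample pooled-variance t statistic for expression values
    a and b measured under two nutritional conditions."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

def rank_genes(cond_a, cond_b):
    """Rank genes by |t|, most significant first. With equal group sizes
    (equal df) this matches ranking by ascending P value."""
    return sorted(cond_a, key=lambda g: -abs(t_stat(cond_a[g], cond_b[g])))
```

A gene with a consistent shift between conditions lands at the top of the list even when the fold-change is modest, which is the point the abstract makes about detecting low-magnitude regulation.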
He, Xianmin; Wei, Qing; Sun, Meiqian; Fu, Xuping; Fan, Sichang; Li, Yao
2006-05-01
Biological techniques such as array-based comparative genomic hybridization (array-CGH), fluorescence in situ hybridization (FISH), and Affymetrix single nucleotide polymorphism (SNP) arrays have been used to detect cytogenetic aberrations. On a genomic scale, however, these techniques are labor intensive and time consuming. Comparative genomic microarray analysis (CGMA) has been used to identify cytogenetic changes in hepatocellular carcinoma (HCC) from gene expression microarray data. However, the CGMA algorithm cannot precisely localize aberrations, fails to identify small cytogenetic changes, and yields false negatives and false positives. Locally un-weighted smoothing cytogenetic aberrations prediction (LS-CAP), based on local smoothing and the binomial distribution, can be expected to address these problems. The LS-CAP algorithm was built and applied to HCC microarray profiles. Eighteen cytogenetic abnormalities were identified; among them, 5 had been reported previously and 12 were confirmed by CGH studies. LS-CAP effectively reduced false negatives and positives, and precisely located small fragments carrying cytogenetic aberrations.
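The core idea of combining local smoothing with a binomial test can be sketched as below. This is a minimal illustration, not the published LS-CAP implementation: the window size, thresholds, and simulated data are all assumptions. Under the null hypothesis of no aberration, each gene in a chromosomal window is up- or down-regulated with probability 0.5, so an excess of up-regulated neighbors signals a candidate gain:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical log-ratios of 200 genes ordered by chromosomal position;
# genes 80-119 sit in a simulated gained region (shifted upward).
log_ratio = rng.normal(0.0, 0.5, size=200)
log_ratio[80:120] += 1.0

WINDOW = 15  # genes on each side; window size is an assumption

def gain_pvalue(ratios, i, w):
    """P(observing >= this many up-regulated neighbors | Binomial(n, 0.5))."""
    lo, hi = max(0, i - w), min(len(ratios), i + w + 1)
    neighbors = ratios[lo:hi]
    ups = int((neighbors > 0).sum())
    # Survival function of the binomial gives the one-sided tail probability.
    return stats.binom.sf(ups - 1, len(neighbors), 0.5)

pvals = np.array([gain_pvalue(log_ratio, i, WINDOW) for i in range(len(log_ratio))])
called = np.flatnonzero(pvals < 1e-5)  # genes inside a candidate gained region
print(called.min(), called.max())
```

Because each call aggregates evidence over a whole window rather than relying on a single gene's ratio, isolated noisy genes outside the aberrant region are unlikely to be flagged, which is how local smoothing suppresses false positives.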
DNA microarrays: a powerful genomic tool for biomedical and clinical research
Trevino, Victor; Falciani, Francesco; Barrera-Saldaña, Hugo A.
2007-01-01
Among the many benefits of the Human Genome Project are new and powerful tools such as the genome-wide hybridization devices referred to as microarrays. Initially designed to measure gene transcription levels, microarray technologies are now used to compare other genome features among individuals and their tissues and cells. The results provide valuable information on disease subcategories, disease prognosis, and treatment outcome. Likewise, they reveal differences in genetic makeup, regulatory mechanisms, and subtle variations, bringing us closer to the era of personalized medicine. To explain this powerful tool, its versatility, and how it is dramatically changing the molecular approach to biomedical and clinical research, this review describes the technology, its applications, a didactic step-by-step review of a typical microarray protocol, and a real experiment. Finally, it calls on the medical community to assemble multidisciplinary teams to take advantage of this technology and its expanding applications, which in a single slide reveal our genetic inheritance and destiny. PMID:17660860
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsia, Chu Chieh; Chizhikov, Vladimir E.; Yang, Amy X.
Hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus type-1 (HIV-1) are transfusion-transmitted human pathogens that have a major impact on blood safety and public health worldwide. We developed a microarray multiplex assay for the simultaneous detection and discrimination of these three viruses. The microarray consists of 16 oligonucleotide probes immobilized on a silylated glass slide. Amplicons from multiplex PCR were labeled with Cy-5 and hybridized to the microarray. The assay detected 1 International Unit (IU) of HBV, 10 IU of HCV, and 20 IU of HIV-1 in a single multiplex reaction. The assay also detected and discriminated the presence of two or three of these viruses in a single sample. Our data represent a proof of concept for the possible use of this highly sensitive multiplex microarray assay to screen for and confirm the presence of these viruses in blood donors and patients.
Selective recognition of DNA from olive leaves and olive oil by PNA and modified-PNA microarrays
Rossi, Stefano; Calabretta, Alessandro; Tedeschi, Tullia; Sforza, Stefano; Arcioni, Sergio; Baldoni, Luciana; Corradini, Roberto; Marchelli, Rosangela
2012-01-01
PNA probes for the specific detection of DNA from olive oil samples by microarray technology were developed. The presence of as little as 5% refined hazelnut (Corylus avellana) oil in extra-virgin olive oil (Olea europaea L.) could be detected by using a PNA microarray. A set of two single nucleotide polymorphisms (SNPs) from the actin gene of olive was chosen as a model for evaluating the ability of PNA probes to discriminate olive cultivars. Both unmodified and C2-modified PNAs bearing an arginine side-chain were used, the latter showing higher sequence specificity. DNA extracted from leaves of three different cultivars (Ogliarola leccese, Canino and Frantoio) could be easily discriminated using a microarray with unmodified PNA probes, whereas discrimination of DNA from oil samples was more challenging, and could be obtained only by using chiral PNA probes. PMID:22772038
[Research progress of probe design software of oligonucleotide microarrays].
Chen, Xi; Wu, Zaoquan; Liu, Zhengchun
2014-02-01
DNA microarrays have become an essential tool in medical genetic diagnostics owing to their high throughput, miniaturization, and automation. The design and selection of oligonucleotide probes are critical for preparing high-quality gene chips. Several probe design software packages have been developed and are now available to perform this work. Each package addresses different target sequences and shows different advantages and limitations. In this article, the research and development of these packages are reviewed against three main criteria: specificity, sensitivity, and melting temperature (Tm). In addition, based on experimental results reported in the literature, the packages are classified according to their applications. This review will help users choose appropriate probe design software. It should also reduce the cost of microarrays, improve their application efficiency, and promote both the research and development (R&D) and the commercialization of high-performance probe design software.
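Of the three criteria above, melting temperature is the simplest to compute. A rough estimate for short oligonucleotide probes is the classic Wallace rule, Tm ≈ 2·(A+T) + 4·(G+C); the sketch below uses that rule and a hypothetical 16-mer probe, and is not the formula used by any specific package reviewed:

```python
# Wallace rule: a standard rough Tm estimate for short oligos (< ~20 nt).
def wallace_tm(probe: str) -> int:
    probe = probe.upper()
    at = probe.count("A") + probe.count("T")
    gc = probe.count("G") + probe.count("C")
    return 2 * at + 4 * gc

def gc_content(probe: str) -> float:
    probe = probe.upper()
    return (probe.count("G") + probe.count("C")) / len(probe)

probe = "ATGCGTACGTTAGCCA"  # hypothetical 16-mer probe
print(wallace_tm(probe), round(gc_content(probe), 2))  # prints: 48 0.5
```

Probe design software typically refines such estimates with nearest-neighbor thermodynamic models and balances Tm across all probes on a chip so that a single hybridization temperature works for the whole array.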