A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence is then simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
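The sampling-without-replacement scheme described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' FORTRAN code: the lognormal parent population and the single size-bias exponent `beta` are simplifying assumptions (the paper uses a three-parameter log gamma distribution and a two-parameter discovery model), and all function names are ours.

```python
import random

def simulate_discovery_sequence(field_sizes, beta, rng):
    """Sample fields without replacement, with the probability of being
    discovered next proportional to (field size)**beta: larger fields
    tend to be found earlier when beta > 0."""
    remaining = list(field_sizes)
    sequence = []
    while remaining:
        weights = [s ** beta for s in remaining]
        pick = rng.choices(range(len(remaining)), weights=weights, k=1)[0]
        sequence.append(remaining.pop(pick))
    return sequence

def chi_squared(observed, expected):
    """Chi-squared deviation between an actual and a synthetic discovery
    sequence, compared position by position in the discovery order."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

# Hypothetical parent population: 20 field sizes drawn from a lognormal
# (a stand-in for the paper's three-parameter log gamma distribution).
rng = random.Random(1)
parent = [rng.lognormvariate(3.0, 1.0) for _ in range(20)]
synthetic = simulate_discovery_sequence(parent, beta=1.0, rng=rng)
```

In a full run, `chi_squared` would be evaluated over a grid of `beta`, population size, and distributional parameters to find the best-fitting scenario.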
Building Cognition: The Construction of Computational Representations for Scientific Discovery.
Chandrasekharan, Sanjay; Nersessian, Nancy J
2015-11-01
Novel computational representations, such as simulation models of complex systems and video games for scientific discovery (Foldit, EteRNA etc.), are dramatically changing the way discoveries emerge in science and engineering. The cognitive roles played by such computational representations in discovery are not well understood. We present a theoretical analysis of the cognitive roles such representations play, based on an ethnographic study of the building of computational models in a systems biology laboratory. Specifically, we focus on a case of model-building by an engineer that led to a remarkable discovery in basic bioscience. Accounting for such discoveries requires a distributed cognition (DC) analysis, as DC focuses on the roles played by external representations in cognitive processes. However, DC analyses by and large have not examined scientific discovery, and they mostly focus on memory offloading, particularly how the use of existing external representations changes the nature of cognitive tasks. In contrast, we study discovery processes and argue that discoveries emerge from the processes of building the computational representation. The building process integrates manipulations in imagination and in the representation, creating a coupled cognitive system of model and modeler, where the model is incorporated into the modeler's imagination. This account extends DC significantly, and we present some of the theoretical and application implications of this extended account. Copyright © 2014 Cognitive Science Society, Inc.
Girardi, Dominic; Küng, Josef; Kleiser, Raimund; Sonnberger, Michael; Csillag, Doris; Trenkler, Johannes; Holzinger, Andreas
2016-09-01
Established process models for knowledge discovery find the domain-expert in a customer-like and supervising role. In the field of biomedical research, it is necessary to move the domain-experts into the center of this process with far-reaching consequences for both their research output and the process itself. In this paper, we revise the established process models for knowledge discovery and propose a new process model for domain-expert-driven interactive knowledge discovery. Furthermore, we present a research infrastructure which is adapted to this new process model and demonstrate how the domain-expert can be deeply integrated even into the highly complex data-mining process and data-exploration tasks. We evaluated this approach in the medical domain for the case of cerebral aneurysms research.
Petroleum-resource appraisal and discovery rate forecasting in partially explored regions
Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.
1980-01-01
PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period. PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation. 
PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2½ years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of specific size deposits. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans. Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and discovery process model to predict the additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values.
The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.
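The Part B efficiency parameter — the ratio of the per-well discovery probability of actual exploration to that of random drilling — can be illustrated with a toy calculation. The function and parameter names below are ours, and the uniform-target assumption is a simplification; the paper estimates these quantities from the locations of past wells and discoveries.

```python
def exploration_efficiency(wells_drilled, discoveries, effective_basin_area,
                           undrilled_target_area):
    """Toy illustration of the exploration-efficiency parameter: the ratio
    of the observed per-well discovery probability to the probability that
    a randomly sited well hits a remaining target."""
    p_actual = discoveries / wells_drilled        # observed hit rate
    p_random = undrilled_target_area / effective_basin_area  # random hit rate
    return p_actual / p_random

# 100 wells found 10 fields in a 1000 km^2 effective basin in which the
# remaining targets cover 50 km^2: drillers did twice as well as random.
eff = exploration_efficiency(100, 10, 1000.0, 50.0)
```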
Yu, Helen W H
2016-02-01
The current drug discovery and development process is stalling the translation of basic science into lifesaving products. Known as the 'Valley of Death', the traditional technology transfer model fails to bridge the gap between early-stage discoveries and preclinical research to advance innovations beyond the discovery phase. In addition, the stigma associated with 'commercialization' detracts from the importance of efficient translation of basic research. Here, I introduce a drug discovery model whereby the respective expertise of academia and industry are brought together to take promising discoveries through to proof of concept as a way to derisk the drug discovery and development process. Known as the 'integrated drug discovery model', I examine here the extent to which existing legal frameworks support this model. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of latency performance of bluetooth low energy (BLE) networks.
Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun
2014-12-23
Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE fundamentally changes the design of the discovery mechanism, including the use of three advertising channels. Several recent works have analyzed BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range of parameter settings gives BLE devices considerable latitude to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper focuses on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance of the discovery process.
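The interaction between advertising intervals and scan windows that drives BLE discovery latency can be sketched with a toy single-channel simulation. This is a simplification of the mechanism the abstract analyzes, not the authors' model: real BLE cycles three advertising channels and requires the scanner to be on the matching channel, which this sketch ignores, and the function name is ours.

```python
import random

def discovery_latency_ms(adv_interval_ms, scan_interval_ms, scan_window_ms,
                         max_time_ms=60_000.0, rng=None):
    """Toy single-channel sketch of BLE discovery: the scanner listens
    during the first scan_window_ms of each scan_interval_ms, and an
    advertising packet is heard if it arrives inside a scan window."""
    rng = rng or random.Random(0)
    t = 0.0
    while t < max_time_ms:
        # Each advertising event is pushed back by a random advDelay in
        # [0, 10] ms, as in the BLE specification.
        t += adv_interval_ms + rng.uniform(0.0, 10.0)
        if t % scan_interval_ms < scan_window_ms:
            return t  # time of the first successful discovery
    return None  # not discovered within max_time_ms

# Continuous scanning (window == interval) hears the first advertising event.
latency = discovery_latency_ms(100.0, 50.0, 50.0)
```

With discontinuous scanning (window shorter than the interval), the same function shows how latency grows as packets fall into the scanner's idle gaps.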
Proposal and Evaluation of BLE Discovery Process Based on New Features of Bluetooth 5.0.
Hernández-Solana, Ángela; Perez-Diaz-de-Cerio, David; Valdovinos, Antonio; Valenzuela, Jose Luis
2017-08-30
The device discovery process is one of the most crucial aspects in real deployments of sensor networks. Recently, several works have analyzed the topic of Bluetooth Low Energy (BLE) device discovery through analytical or simulation models limited to version 4.x. Non-connectable and non-scannable undirected advertising has been shown to be a reliable alternative for discovering a high number of devices in a relatively short time period. However, new features of Bluetooth 5.0 allow us to define a variant on the device discovery process, based on BLE scannable undirected advertising events, which results in higher discovering capacities and also lower power consumption. In order to characterize this new device discovery process, we experimentally model the real device behavior of BLE scannable undirected advertising events. Non-detection packet probability, discovery probability, and discovery latency for a varying number of devices and parameters are compared by simulations and experimental measurements. We demonstrate that our proposal outperforms previous works, diminishing the discovery time and increasing the potential user device density. A mathematical model is also developed in order to easily obtain a measure of the potential capacity in high density scenarios.
NASA Astrophysics Data System (ADS)
Schenck, Natalya A.; Horvath, Philip A.; Sinha, Amit K.
2018-02-01
While the literature on the price discovery process and information flow between dominant and satellite markets is extensive, most studies have applied an approach that can be traced back to Hasbrouck (1995) or Gonzalo and Granger (1995). In this paper, however, we propose a Generalized Langevin process with an asymmetric double-well potential function, with co-integrated time series and interconnected diffusion processes, to model the information flow and price discovery process in two interconnected markets, a dominant and a satellite. A simulated illustration of the model is also provided.
Science of the science, drug discovery and artificial neural networks.
Patel, Jigneshkumar
2013-03-01
The drug discovery process often encounters complex problems that may be difficult to solve by human intelligence alone. Artificial Neural Networks (ANNs) are one of the Artificial Intelligence (AI) technologies used for solving such complex problems. ANNs are widely used for primary virtual screening of compounds, quantitative structure-activity relationship studies, receptor modeling, formulation development, pharmacokinetics, and other processes involving complex mathematical modeling. Despite such advanced technologies and a good understanding of biological systems, drug discovery remains a lengthy, expensive, difficult, and inefficient process with a low rate of successful new therapeutic discovery. In this paper, the author discusses drug discovery science and ANNs from first principles, which may help readers understand how ANNs can be applied to improve the efficiency of drug discovery.
Service-based analysis of biological pathways
Zheng, George; Bouguettaya, Athman
2009-01-01
Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation verifying the user's hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using the simulation algorithm also described in this paper. PMID:19796403
ERIC Educational Resources Information Center
Weeber, Marc; Klein, Henny; de Jong-van den Berg, Lolkje T. W.; Vos, Rein
2001-01-01
Proposes a two-step model of discovery in which new scientific hypotheses can be generated and subsequently tested. Applying advanced natural language processing techniques to find biomedical concepts in text, the model is implemented in a versatile interactive discovery support tool. This tool is used to successfully simulate Don R. Swanson's…
Pandey, Udai Bhan
2011-01-01
The common fruit fly, Drosophila melanogaster, is a well studied and highly tractable genetic model organism for understanding molecular mechanisms of human diseases. Many basic biological, physiological, and neurological properties are conserved between mammals and D. melanogaster, and nearly 75% of human disease-causing genes are believed to have a functional homolog in the fly. In the discovery process for therapeutics, traditional approaches employ high-throughput screening for small molecules that is based primarily on in vitro cell culture, enzymatic assays, or receptor binding assays. The majority of positive hits identified through these types of in vitro screens, unfortunately, are found to be ineffective and/or toxic in subsequent validation experiments in whole-animal models. New tools and platforms are needed in the discovery arena to overcome these limitations. The incorporation of D. melanogaster into the therapeutic discovery process holds tremendous promise for an enhanced rate of discovery of higher quality leads. D. melanogaster models of human diseases provide several unique features such as powerful genetics, highly conserved disease pathways, and very low comparative costs. The fly can effectively be used for low- to high-throughput drug screens as well as in target discovery. Here, we review the basic biology of the fly and discuss models of human diseases and opportunities for therapeutic discovery for central nervous system disorders, inflammatory disorders, cardiovascular disease, cancer, and diabetes. We also provide information and resources for those interested in pursuing fly models of human disease, as well as those interested in using D. melanogaster in the drug discovery process. PMID:21415126
Novel opportunities for computational biology and sociology in drug discovery
Yao, Lixia
2009-01-01
Drug discovery today is impossible without sophisticated modeling and computation. In this review we touch on previous advances in computational biology and by tracing the steps involved in pharmaceutical development, we explore a range of novel, high value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy-industry ties for scientific and human benefit. Attention to these opportunities could promise punctuated advance, and will complement the well-established computational work on which drug discovery currently relies. PMID:19674801
Computational methods for a three-dimensional model of the petroleum-discovery process
Schuenemeyer, J.H.; Bawiec, W.J.; Drew, L.J.
1980-01-01
A discovery-process model devised by Drew, Schuenemeyer, and Root can be used to predict the amount of petroleum to be discovered in a basin from some future level of exploratory effort: the predictions are based on historical drilling and discovery data. Because marginal costs of discovery and production are a function of field size, the model can be used to make estimates of future discoveries within deposit size classes. The modeling approach is a geometric one in which the area searched is a function of the size and shape of the targets being sought. A high correlation is assumed between the surface-projection area of the fields and the volume of petroleum. To predict how much oil remains to be found, the area searched must be computed, and the basin size and discovery efficiency must be estimated. The basin is assumed to be explored randomly rather than by pattern drilling. The model may be used to compute independent estimates of future oil at different depth intervals for a play involving multiple producing horizons. We have written FORTRAN computer programs that are used with Drew, Schuenemeyer, and Root's model to merge the discovery and drilling information and perform the necessary computations to estimate undiscovered petroleum. These programs may be modified easily for the estimation of remaining quantities of commodities other than petroleum. © 1980.
Bhardwaj, Anshu; Scaria, Vinod; Raghava, Gajendra Pal Singh; Lynn, Andrew Michael; Chandra, Nagasuma; Banerjee, Sulagna; Raghunandanan, Muthukurussi V; Pandey, Vikas; Taneja, Bhupesh; Yadav, Jyoti; Dash, Debasis; Bhattacharya, Jaijit; Misra, Amit; Kumar, Anil; Ramachandran, Srinivasan; Thomas, Zakir; Brahmachari, Samir K
2011-09-01
It is being realized that the traditional closed-door and market-driven approaches for drug discovery may not be the best suited model for the diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for patients suffering from these diseases, it is necessary to formulate an alternate paradigm of drug discovery process. The current model, constrained by limitations for collaboration and for sharing of resources with confidentiality, hampers the opportunities for bringing expertise from diverse fields. These limitations hinder the possibilities of lowering the cost of drug discovery. The Open Source Drug Discovery project initiated by the Council of Scientific and Industrial Research, India, has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, taking up multi-faceted approaches and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to self-sustain continuous development by generating a storehouse of alternatives towards continued pursuit for new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trial in a non-exclusive manner by participation of multiple companies with majority funding from Open Source Drug Discovery. This will ensure availability of drugs through a lower cost community driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what Linux and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. Copyright © 2011 Elsevier Ltd. All rights reserved.
A quantum causal discovery algorithm
NASA Astrophysics Data System (ADS)
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Namouchi, Amine; Cimino, Mena; Favre-Rochex, Sandrine; Charles, Patricia; Gicquel, Brigitte
2017-07-13
Tuberculosis (TB) is caused by Mycobacterium tuberculosis and represents one of the major challenges facing drug discovery initiatives worldwide. The considerable rise in bacterial drug resistance in recent years has led to the need for new drugs and drug regimens. Model systems are regularly used to speed up the drug discovery process and circumvent the biosafety issues associated with manipulating M. tuberculosis. These include the use of strains such as Mycobacterium smegmatis and Mycobacterium marinum that can be handled in biosafety level 2 facilities, making high-throughput screening feasible. However, each of these model species has its own limitations. We report and describe the first complete genome sequence of Mycobacterium aurum ATCC23366, an environmental mycobacterium that can also grow in the gut of humans and animals as part of the microbiota. This species shows a resistance profile comparable to that of M. tuberculosis for several anti-TB drugs. The aims of this study were to (i) determine the drug resistance profile of a recently proposed model species for anti-TB drug discovery, Mycobacterium aurum strain ATCC23366, as well as Mycobacterium smegmatis and Mycobacterium marinum; (ii) sequence and annotate the complete genome of this species, obtained using Pacific Biosciences technology; (iii) perform comparative genomics analyses of the various surrogate strains with M. tuberculosis; and (iv) discuss how the choice of the surrogate model used for drug screening can affect the drug discovery process. We describe the complete genome sequence of M. aurum, a surrogate model for anti-tuberculosis drug discovery. Most of the genes already reported to be associated with drug resistance are shared between all the surrogate strains and M. tuberculosis. We consider that M. aurum might be used in high-throughput screening for tuberculosis drug discovery. We also highly recommend the use of different model species during the drug discovery screening process.
Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio
2017-01-01
The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have been addressed to investigate the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Peng; Gong, Jianya; Di, Liping
A geospatial catalogue service provides a network-based meta-information repository and interface for advertising and discovering shared geospatial data and services. Descriptive information (i.e., metadata) for geospatial data and services is structured and organized in catalogue services. The approaches currently available for searching and using that information are often inadequate. Semantic Web technologies show promise for better discovery methods by exploiting the underlying semantics. Such development needs special attention from the Cyberinfrastructure perspective, so that the traditional focus on discovery of and access to geospatial data can be expanded to support the increased demand for processing of geospatial information and discovery of knowledge. Semantic descriptions for geospatial data, services, and geoprocessing service chains are structured, organized, and registered through extending elements in the ebXML Registry Information Model (ebRIM) of a geospatial catalogue service, which follows the interface specifications of the Open Geospatial Consortium (OGC) Catalogue Services for the Web (CSW). The process models for geoprocessing service chains, as a type of geospatial knowledge, are captured, registered, and discoverable. Semantics-enhanced discovery for geospatial data, services/service chains, and process models is described. Semantic search middleware that can support virtual data product materialization is developed for the geospatial catalogue service. The creation of such a semantics-enhanced geospatial catalogue service is important in meeting the demands for geospatial information discovery and analysis in Cyberinfrastructure.
Simple animal models for amyotrophic lateral sclerosis drug discovery.
Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre
2016-08-01
Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.
Comparison: Discovery on WSMOLX and miAamics/jABC
NASA Astrophysics Data System (ADS)
Kubczak, Christian; Vitvar, Tomas; Winkler, Christian; Zaharia, Raluca; Zaremba, Maciej
This chapter compares the solutions to the SWS-Challenge discovery problems provided by DERI Galway and the joint solution from the Technical University of Dortmund and University of Potsdam. The two approaches are described in depth in Chapters 10 and 13. The discovery scenario raises problems associated with making service discovery an automated process. It requires fine-grained specifications of search requests and service functionality, including support for fetching dynamic information during the discovery process (e.g., shipment price). Both teams utilize semantics to describe services, service requests and data models in order to enable search at the required fine-grained level of detail.
A model for the prediction of latent errors using data obtained during the development process
NASA Technical Reports Server (NTRS)
Gaffney, J. E., Jr.; Martello, S. J.
1984-01-01
A model, implemented in a program that runs on the IBM PC, for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user is presented. The model employs the count of errors discovered at one or more of the error discovery processes during development, such as a design inspection, as the input data for a process which provides estimates of the total lifetime (injected) error content and of the latent (or post-ship) error content--the errors remaining at delivery. The model presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
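As a hedged illustration of this kind of estimate (not Gaffney and Martello's actual model; the per-activity effectiveness figures below are invented), one can assume each discovery activity catches a fixed fraction of the errors still present, and back out the injected and latent totals from the observed counts:

```python
def estimate_latent(discovered, effectiveness):
    """Given error counts found at each discovery activity and an assumed
    detection effectiveness (fraction of remaining errors caught) per
    activity, estimate total injected errors and the latent residue."""
    assert len(discovered) == len(effectiveness)
    # Fraction of the original total caught at activity i:
    #   f_i = p_i * prod_{j<i} (1 - p_j)
    fracs, surviving = [], 1.0
    for p in effectiveness:
        fracs.append(p * surviving)
        surviving *= (1 - p)
    # Least-squares estimate of total injected errors E from d_i ~ E * f_i
    total = (sum(d * f for d, f in zip(discovered, fracs))
             / sum(f * f for f in fracs))
    latent = total * surviving  # errors remaining at delivery
    return total, latent

# 50 errors found at inspection, 30 in unit test, 12 in system test,
# with assumed effectiveness 0.5, 0.6, 0.6 respectively.
total, latent = estimate_latent([50, 30, 12], [0.5, 0.6, 0.6])
```

With these (made-up) inputs the estimate is 100 injected errors, 8 of them latent; the value of such a model is that the latent figure is available before release, from in-process counts alone.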
Novel opportunities for computational biology and sociology in drug discovery☆
Yao, Lixia; Evans, James A.; Rzhetsky, Andrey
2013-01-01
Current drug discovery is impossible without sophisticated modeling and computation. In this review we outline previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy–industry links for scientific and human benefit. Attention to these opportunities could promise punctuated advance and will complement the well-established computational work on which drug discovery currently relies. PMID:20349528
Giri, Shibashish; Bader, Augustinus
2015-01-01
Knockout, knock-in and conditional mutant gene-targeted mice are routinely used for disease modeling in the drug discovery process, but the human response is often difficult to predict from these models. It is believed that patient-derived induced pluripotent stem cells (iPSCs) could replace millions of animals currently sacrificed in preclinical testing and provide a route to new, safer pharmaceutical products. In this review, we discuss the use of iPSCs in the drug discovery process. We highlight how they can be used to assess the toxicity and clinical efficacy of drug candidates before the latter are moved into costly and lengthy preclinical and clinical trials. Copyright © 2014 Elsevier Ltd. All rights reserved.
Analyzing Student Inquiry Data Using Process Discovery and Sequence Classification
ERIC Educational Resources Information Center
Emond, Bruno; Buffett, Scott
2015-01-01
This paper reports on results of applying process discovery mining and sequence classification mining techniques to a data set of semi-structured learning activities. The main research objective is to advance educational data mining to model and support self-regulated learning in heterogeneous environments of learning content, activities, and…
Information Fusion for Natural and Man-Made Disasters
2007-01-31
comprehensively large, and metaphysically accurate model of situations, through which specific tasks such as situation assessment, knowledge discovery, or the... significance" is always context specific. Event discovery is a very important element of the HLF process, which can lead to knowledge discovery about... expected, given the current state of knowledge. Examples of such behavior may include discovery of a new aggregate or situation, a specific pattern of
SemaTyP: a knowledge graph based literature mining method for drug discovery.
Sang, Shengtian; Yang, Zhihao; Wang, Lei; Liu, Xiaoxia; Lin, Hongfei; Wang, Jian
2018-05-30
Drug discovery is the process through which potential new medicines are identified. High-throughput screening and computer-aided drug discovery/design are the two main drug discovery methods for now, which have successfully discovered a series of drugs. However, development of new drugs is still an extremely time-consuming and expensive process. Biomedical literature contains important clues for the identification of potential treatments. It could support experts in biomedicine on their way towards new discoveries. Here, we propose a biomedical knowledge graph-based drug discovery method called SemaTyP, which discovers candidate drugs for diseases by mining published biomedical literature. We first construct a biomedical knowledge graph with the relations extracted from biomedical abstracts, then a logistic regression model is trained by learning the semantic types of paths of known drug therapies' existing in the biomedical knowledge graph, finally the learned model is used to discover drug therapies for new diseases. The experimental results show that our method could not only effectively discover new drug therapies for new diseases, but also could provide the potential mechanism of action of the candidate drugs. In this paper we propose a novel knowledge graph based literature mining method for drug discovery. It could be a supplementary method for current drug discovery methods.
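A minimal sketch of the path-feature idea behind such a method (the triples, semantic types, and entity names below are invented for illustration; SemaTyP's actual relations are extracted from biomedical abstracts and fed to a trained logistic regression model):

```python
from collections import Counter

# Toy knowledge graph: (subject, relation, object) triples, with a
# hypothetical semantic type assigned to each entity.
TRIPLES = [
    ("aspirin", "INHIBITS", "COX2"),
    ("COX2", "ASSOCIATED_WITH", "inflammation"),
    ("aspirin", "TREATS", "pain"),
]
TYPES = {"aspirin": "Drug", "COX2": "Gene",
         "inflammation": "Disease", "pain": "Disease"}

def paths(src, dst, max_len=3):
    """Depth-first enumeration of relation paths from src to dst,
    recorded as sequences of (subject type, relation, object type)."""
    out = []
    def walk(node, path, seen):
        if node == dst and path:
            out.append(tuple(path))
            return
        if len(path) >= max_len:
            return
        for s, r, o in TRIPLES:
            if s == node and o not in seen:
                walk(o, path + [(TYPES[s], r, TYPES[o])], seen | {o})
    walk(src, [], {src})
    return out

def featurize(src, dst):
    """Bag of typed edges over all drug-disease paths: the kind of
    semantic-type features a classifier could then be trained on."""
    return Counter(edge for p in paths(src, dst) for edge in p)

feats = featurize("aspirin", "inflammation")
```

In the real setting, feature vectors like `feats` for known drug-disease therapy pairs serve as positive training examples, and the fitted model scores candidate pairs; the contributing paths also suggest a mechanism of action.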
Predicting Error Bars for QSAR Models
NASA Astrophysics Data System (ADS)
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-09-01
Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
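Gaussian Process regression yields a predictive variance alongside each prediction, which is where such per-compound error bars come from. A generic sketch on synthetic one-dimensional data (not the paper's descriptors, compounds, or kernel choices):

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, X_star, noise=0.01):
    """Exact GP regression: predictive mean and standard deviation.
    The std is the 'error bar' attached to each prediction."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_s = rbf(X, X_star)
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf(X_star, X_star)) - (v ** 2).sum(0) + noise
    return mean, np.sqrt(var)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))          # stand-in "descriptors"
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=60)
X_star = np.linspace(-5, 5, 50).reshape(-1, 1)
mean, std = gp_predict(X, y, X_star)
```

The error bars widen for queries far from the training data (here, |x| > 3), which is exactly the behavior one wants when flagging predictions for compounds outside the model's applicability domain.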
Liu, Jun-Jun; Xiang, Yu
2011-01-01
WRKY transcription factors are key regulators of numerous biological processes in plant growth and development, as well as plant responses to abiotic and biotic stresses. Research on biological functions of plant WRKY genes has focused in the past on model plant species or species with largely characterized transcriptomes. However, a variety of non-model plants, such as forest conifers, are essential as feed, biofuel, and wood or for sustainable ecosystems. Identification of WRKY genes in these non-model plants is equally important for understanding the evolutionary and function-adaptive processes of this transcription factor family. Because of limited genomic information, the rarity of regulatory gene mRNAs in transcriptomes, and the sequence divergence to model organism genes, identification of transcription factors in non-model plants using methods similar to those generally used for model plants is difficult. This chapter describes a gene family discovery strategy for identification of WRKY transcription factors in conifers by a combination of in silico-based prediction and PCR-based experimental approaches. Compared to traditional cDNA library screening or EST sequencing at transcriptome scales, this integrated gene discovery strategy provides fast, simple, reliable, and specific methods to unveil the WRKY gene family at both genome and transcriptome levels in non-model plants.
Application of PBPK modelling in drug discovery and development at Pfizer.
Jones, Hannah M; Dickins, Maurice; Youdim, Kuresh; Gosset, James R; Attkins, Neil J; Hay, Tanya L; Gurrell, Ian K; Logan, Y Raj; Bungay, Peter J; Jones, Barry C; Gardner, Iain B
2012-01-01
Early prediction of human pharmacokinetics (PK) and drug-drug interactions (DDI) in drug discovery and development allows for more informed decision making. Physiologically based pharmacokinetic (PBPK) modelling can be used to answer a number of questions throughout the process of drug discovery and development and is thus becoming a very popular tool. PBPK models provide the opportunity to integrate key input parameters from different sources to not only estimate PK parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. Using examples from the literature and our own company, we have shown how PBPK techniques can be utilized through the stages of drug discovery and development to increase efficiency, reduce the need for animal studies, replace clinical trials and to increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however, some limitations need to be addressed to realize its application and utility more broadly.
Improving Mathematics Achievement of Indonesian 5th Grade Students through Guided Discovery Learning
ERIC Educational Resources Information Center
Yurniwati; Hanum, Latipa
2017-01-01
This research aims to find information about the improvement of mathematics achievement of grade five student through guided discovery learning. This research method is classroom action research using Kemmis and Taggart model consists of three cycles. Data used in this study is learning process and learning results. Learning process data is…
ERIC Educational Resources Information Center
Teichert, Melonie A.; Tien, Lydia T.; Dysleski, Lisa; Rickey, Dawn
2017-01-01
This study investigated relationships between the thinking processes that 28 undergraduate chemistry students engaged in during guided discovery and their subsequent success at reasoning through a transfer problem during an end-of-semester interview. During a guided-discovery laboratory module, students were prompted to use words, pictures, and…
NASA Astrophysics Data System (ADS)
Paulraj, D.; Swamynathan, S.; Madhaiyan, M.
2012-11-01
Web Service composition has become indispensable as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest to support business-to-business (B2B) or enterprise application integration. An important component of the service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using service profile of Ontology Web Languages for Services (OWL-S). The profile of the service is a derived and concise description but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first to prove that the process model is a better choice than the service profile for service discovery. Second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method which uses process ontology. The proposed service composition approach uses an algorithm which performs a fine grained match at the level of atomic process rather than at the level of the entire service in a composite semantic web service. Many works carried out in this area have proposed solutions only for the composition of atomic services and this article proposes a solution for the composition of composite semantic web services.
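The fine-grained, atomic-process-level matching the article argues for can be contrasted with profile-level matching using a toy registry (process names and concept labels below are invented; a real OWL-S matchmaker would use ontological subsumption rather than exact set containment):

```python
# Hypothetical atomic-process descriptions: each process is matched on
# its input and output concepts, not on a whole-service profile.
REGISTRY = {
    "GeocodeAddress":   {"in": {"Address"},     "out": {"Coordinates"}},
    "RouteByCoords":    {"in": {"Coordinates"}, "out": {"Route"}},
    "WholeTripService": {"in": {"Address"},     "out": {"Invoice"}},
}

def match_atomic(request_inputs, wanted_outputs):
    """A process matches if the request supplies all of its inputs and
    it produces everything the request wants (exact match; a plug-in
    match would relax this via concept subsumption)."""
    return [name for name, p in REGISTRY.items()
            if p["in"] <= request_inputs and wanted_outputs <= p["out"]]

hits = match_atomic({"Address"}, {"Coordinates"})
```

Matching at this granularity lets a composer chain `GeocodeAddress` into `RouteByCoords`, whereas a profile-level match against the composite service's overall inputs and outputs would miss the usable intermediate step.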
From basic to clinical neuropharmacology: targetophilia or pharmacodynamics?
Green, A Richard; Aronson, Jeffrey K
2012-06-01
Historically, much drug discovery and development in psychopharmacology tended to be empirical. However, over the last 20 years it has primarily been target oriented, with synthesis and selection of compounds designed to act at a specific neurochemical site. Such compounds are then examined in functional animal models of disease. There is little evidence that this approach (which we call 'targetophilia') has enhanced the discovery process and some indications that it may have retarded it. A major problem is the weakness of many animal models in mimicking the disease and the lack of appropriate biochemical markers of drug action in animals and patients. In this review we argue that preclinical studies should be conducted as if they were clinical studies in design, analysis, and reporting, and that clinical pharmacologists should be involved at the earliest stages, to help ensure that animal models reflect as closely as possible the clinical disease. In addition, their familiarity with pharmacokinetic-pharmacodynamic integration (PK-PD) would help ensure that appropriate dosing and drug measurement techniques are applied to the discovery process, thereby producing results with relevance to therapeutics. Better integration of experimental and clinical pharmacologists early in the discovery process would allow observations in animals and patients to be quickly exchanged between the two disciplines. This non-linear approach to discovery used to be the way research proceeded, and it resulted in productivity that has never been bettered. It also follows that occasionally 'look-see' studies, a proven technique for drug discovery, deserve to be reintroduced. © 2012 The Authors. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.
Predicting Error Bars for QSAR Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeter, Timon; Technische Universitaet Berlin, Department of Computer Science, Franklinstrasse 28/29, 10587 Berlin; Schwaighofer, Anton
2007-09-18
Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
Accelerating Drug Development: Antiviral Therapies for Emerging Viruses as a Model.
Everts, Maaike; Cihlar, Tomas; Bostwick, J Robert; Whitley, Richard J
2017-01-06
Drug discovery and development is a lengthy and expensive process. Although no one, simple, single solution can significantly accelerate this process, steps can be taken to avoid unnecessary delays. Using the development of antiviral therapies as a model, we describe options for acceleration that cover target selection, assay development and high-throughput screening, hit confirmation, lead identification and development, animal model evaluations, toxicity studies, regulatory issues, and the general drug discovery and development infrastructure. Together, these steps could result in accelerated timelines for bringing antiviral therapies to market so they can treat emerging infections and reduce human suffering.
Cell and small animal models for phenotypic drug discovery.
Szabo, Mihaly; Svensson Akusjärvi, Sara; Saxena, Ankur; Liu, Jianping; Chandrasekar, Gayathri; Kitambi, Satish S
2017-01-01
The phenotype-based drug discovery (PDD) approach is re-emerging as an alternative platform for drug discovery. This review provides an overview of the various model systems and technical advances in imaging and image analyses that strengthen the PDD platform. In PDD screens, compounds of therapeutic value are identified based on the phenotypic perturbations produced irrespective of target(s) or mechanism of action. In this article, examples of phenotypic changes that can be detected and quantified with relative ease in a cell-based setup are discussed. In addition, a higher order of PDD screening setup using small animal models is also explored. As PDD screens integrate physiology and multiple signaling mechanisms during the screening process, the identified hits have higher biomedical applicability. Taken together, this review highlights the advantages gained by adopting a PDD approach in drug discovery. Such a PDD platform can complement target-based systems that are currently in practice to accelerate drug discovery.
Discovery informatics in biological and biomedical sciences: research challenges and opportunities.
Honavar, Vasant
2015-01-01
New discoveries in biological, biomedical and health sciences are increasingly being driven by our ability to acquire, share, integrate and analyze, and construct and simulate predictive models of biological systems. While much attention has focused on automating routine aspects of management and analysis of "big data", realizing the full potential of "big data" to accelerate discovery calls for automating many other aspects of the scientific process that have so far largely resisted automation: identifying gaps in the current state of knowledge; generating and prioritizing questions; designing studies; designing, prioritizing, planning, and executing experiments; interpreting results; forming hypotheses; drawing conclusions; replicating studies; validating claims; documenting studies; communicating results; reviewing results; and integrating results into the larger body of knowledge in a discipline. Against this background, the PSB workshop on Discovery Informatics in Biological and Biomedical Sciences explores the opportunities and challenges of automating discovery, or assisting humans in discovery, through advances in (i) understanding, formalizing, and developing information-processing accounts of the entire scientific process; (ii) design, development, and evaluation of the computational artifacts (representations, processes) that embody such understanding; and (iii) application of the resulting artifacts and systems to advance science (by augmenting individual or collective human efforts, or by fully automating science).
Ou-Yang, Si-sheng; Lu, Jun-yan; Kong, Xiang-qian; Liang, Zhong-jie; Luo, Cheng; Jiang, Hualiang
2012-01-01
Computational drug discovery is an effective strategy for accelerating and economizing drug discovery and development process. Because of the dramatic increase in the availability of biological macromolecule and small molecule information, the applicability of computational drug discovery has been extended and broadly applied to nearly every stage in the drug discovery and development workflow, including target identification and validation, lead discovery and optimization and preclinical tests. Over the past decades, computational drug discovery methods such as molecular docking, pharmacophore modeling and mapping, de novo design, molecular similarity calculation and sequence-based virtual screening have been greatly improved. In this review, we present an overview of these important computational methods, platforms and successful applications in this field. PMID:22922346
Computational modeling in melanoma for novel drug discovery.
Pennisi, Marzio; Russo, Giulia; Di Salvatore, Valentina; Candido, Saverio; Libra, Massimo; Pappalardo, Francesco
2016-06-01
There is a growing body of evidence highlighting the applications of computational modeling in the field of biomedicine. It has recently been applied to the in silico analysis of cancer dynamics. In the era of precision medicine, this analysis may allow the discovery of new molecular targets useful for the design of novel therapies and for overcoming resistance to anticancer drugs. According to its molecular behavior, melanoma represents an interesting tumor model in which computational modeling can be applied. Melanoma is an aggressive tumor of the skin with a poor prognosis for patients with advanced disease as it is resistant to current therapeutic approaches. This review discusses the basics of computational modeling in melanoma drug discovery and development. Discussion includes the in silico discovery of novel molecular drug targets, the optimization of immunotherapies and personalized medicine trials. Mathematical and computational models are gradually being used to help understand biomedical data produced by high-throughput analysis. The use of advanced computer models allowing the simulation of complex biological processes provides hypotheses and supports experimental design. The research in fighting aggressive cancers, such as melanoma, is making great strides. Computational models represent the key component to complement these efforts. Due to the combinatorial complexity of new drug discovery, a systematic approach based only on experimentation is not possible. Computational and mathematical models are necessary for bringing cancer drug discovery into the era of omics, big data and personalized medicine.
Reflective practice and guided discovery: clinical supervision.
Todd, G; Freshwater, D
This article explores the parallels between reflective practice as a model for clinical supervision, and guided discovery as a skill in cognitive psychotherapy. A description outlining the historical development of clinical supervision in relationship to positional papers and policies is followed by an exposé of the difficulties in developing a clear, consistent model of clinical supervision with a coherent focus; reflective practice is proposed as a model of choice for clinical supervision in nursing. The article examines the parallels and processes of a model of reflection in an individual clinical supervision session, and the use of guided discovery through Socratic dialogue with a depressed patient in cognitive psychotherapy. Extracts from both sessions are used to illuminate the subsequent discussion.
ERIC Educational Resources Information Center
Liu, Ran; Koedinger, Kenneth R.
2017-01-01
As the use of educational technology becomes more ubiquitous, an enormous amount of learning process data is being produced. Educational data mining seeks to analyze and model these data, with the ultimate goal of improving learning outcomes. The most firmly grounded and rigorous evaluation of an educational data mining discovery is whether it…
Patient-derived tumour xenografts for breast cancer drug discovery.
Cassidy, John W; Batra, Ankita S; Greenwood, Wendy; Bruna, Alejandra
2016-12-01
Despite remarkable advances in our understanding of the drivers of human malignancies, new targeted therapies often fail to show sufficient efficacy in clinical trials. Indeed, the cost of bringing a new agent to market has risen substantially in the last several decades, in part fuelled by extensive reliance on preclinical models that fail to accurately reflect tumour heterogeneity. To halt unsustainable rates of attrition in the drug discovery process, we must develop a new generation of preclinical models capable of reflecting the heterogeneity of varying degrees of complexity found in human cancers. Patient-derived tumour xenograft (PDTX) models prevail as arguably the most powerful in this regard because they capture cancer's heterogeneous nature. Herein, we review current breast cancer models and their use in the drug discovery process, before discussing best practices for developing a highly annotated cohort of PDTX models. We describe the importance of extensive multidimensional molecular and functional characterisation of models and combination drug-drug screens to identify complex biomarkers of drug resistance and response. We reflect on our own experiences and propose the use of a cost-effective intermediate pharmacogenomic platform (the PDTX-PDTC platform) for breast cancer drug and biomarker discovery. We discuss the limitations and unanswered questions of PDTX models; yet, still strongly envision that their use in basic and translational research will dramatically change our understanding of breast cancer biology and how to more effectively treat it. © 2016 The authors.
Ghaemi, Reza; Selvaganapathy, Ponnambalam R
Drug discovery is a long and expensive process, which usually takes 12-15 years and could cost up to ~$1 billion. The conventional drug discovery process starts with high-throughput screening and selection of drug candidates that bind to a specific target associated with a disease condition. However, this process does not consider whether the chosen candidate is optimal not only for binding but also for ease of administration, distribution in the body, effect of metabolism and associated toxicity, if any. A holistic approach, using model organisms early in the drug discovery process to select drug candidates that are optimal not only in binding but also suitable for administration and distribution and are not toxic, is now considered a viable way of lowering the cost and time associated with the drug discovery process. However, conventional drug discovery assays using Drosophila are manual and require skilled operators, which makes them expensive and unsuitable for high-throughput screening. Recently, microfluidics has been used to automate many of the operations (e.g. sorting, positioning, drug delivery) associated with Drosophila drug discovery assays and thereby increase their throughput. This review highlights recent microfluidic devices that have been developed for Drosophila assays with primary application towards drug discovery for human diseases. The microfluidic devices that have been reviewed in this paper are categorized based on the stage of the Drosophila that has been used. In each category, the microfluidic technologies behind each device are described and their potential biological applications are discussed.
Pregnancy discovery and acceptance among low-income primiparous women: a multicultural exploration.
Peacock, N R; Kelley, M A; Carpenter, C; Davis, M; Burnett, G; Chavez, N; Aranda, V
2001-06-01
As part of a larger study exploring psychosocial factors that influence self-care and use of health care services during pregnancy, we investigated the process of pregnancy discovery and acceptance among a culturally diverse group of women who had given birth to their first child in the year preceding data collection. Eighty-seven low-income women from four cultural groups (African American, Mexican, Puerto Rican, and white) participated in eight focus groups held in their communities. The focus groups were ethnically homogenous and stratified by early and late entry into prenatal care. A social influence model guided the development of focus group questions, and the study followed a participatory action research model, with community members involved in all phases of the research. Issues that emerged from the focus groups as possible influences on timing of pregnancy recognition include the role of pregnancy signs and symptoms and pregnancy risk perception in the discovery process, the role of social network members in labeling and affirming the pregnancy, concerns about disclosure, "planning" status of the pregnancy, and perceived availability of choices for resolving an unintended pregnancy. The pregnancy discovery process is complex, and when protracted, can potentially result in delayed initiation of both prenatal care and healthful pregnancy behaviors. Enhancing our understanding of pregnancy discovery and acceptance has clear implications for primary and secondary prevention. Future research is needed to further explain the trajectory of pregnancy discovery and acceptance and its influence on health behaviors and pregnancy outcome.
Outsourcing drug discovery to India and China: from surviving to thriving.
Subramaniam, Swaminathan; Dugar, Sundeep
2012-10-01
Global pharmaceutical companies face an increasingly harsh environment for their primary business of selling medicines. They have to contend with a spiraling decline in the productivity of their R&D programs that is guaranteed to severely diminish their growth prospects. Outsourcing of drug discovery activities to low-cost locations is a growing response to this crisis. However, the upsides to outsourcing are capped by the failure of global pharmaceutical companies to take advantage of the full range of possibilities that this model provides. Companies that radically rethink and transform the way they conduct R&D, such as seeking the benefits of low-cost locations in India and China will be the ones that thrive in this environment. In this article we present our views on how the outsourcing model in drug discovery should go beyond increasing the efficiency of existing drug discovery processes to a fundamental rethink and re-engineering of these processes. Copyright © 2012. Published by Elsevier Ltd.
Discovery of novel drugs for promising targets.
Martell, Robert E; Brooks, David G; Wang, Yan; Wilcoxen, Keith
2013-09-01
Once a promising drug target is identified, the steps to actually discover and optimize a drug are diverse and challenging. The goal of this study was to provide a road map to navigate drug discovery. We review the general steps of drug discovery and provide illustrative references. A number of approaches are available to enhance and accelerate target identification and validation. Consideration of a variety of potential mechanisms of action of potential drugs can guide discovery efforts. The hit-to-lead stage may involve techniques such as high-throughput screening, fragment-based screening, and structure-based design, with informatics playing an ever-increasing role. Biologically relevant screening models are discussed, including cell lines, 3-dimensional culture, and in vivo screening. The process of enabling human studies for an investigational drug is also discussed. Drug discovery is a complex process that has significantly evolved in recent years. © 2013 Elsevier HS Journals, Inc. All rights reserved.
Drug Discovery in Academia - the third way?
Frearson, Julie; Wyatt, Paul
2010-01-01
As the pharmaceutical industry continues to re-strategise and focus on low-risk, relatively short term gains for the sake of survival, we need to re-invigorate the early stages of drug discovery and rebalance efforts towards novel modes of action therapeutics and neglected genetic and tropical diseases. Academic drug discovery is one model which offers the promise of new approaches and an alternative organisational culture for drug discovery as it attempts to apply academic innovation and thought processes to the challenge of discovering drugs to address real unmet need. PMID:20922062
Purposive discovery of operations
NASA Technical Reports Server (NTRS)
Sims, Michael H.; Bresina, John L.
1992-01-01
The Generate, Prune & Prove (GPP) methodology for discovering definitions of mathematical operators is introduced. GPP is a task within the IL exploration discovery system. We developed GPP for use in the discovery of mathematical operators with a wider class of representations than was possible with the previous methods by Lenat and by Shen. GPP utilizes the purpose for which an operator is created to prune the possible definitions. The relevant search spaces are immense and there exists insufficient information for a complete evaluation of the purpose constraint, so it is necessary to perform a partial evaluation of the purpose (i.e., pruning) constraint. The constraint is first transformed so that it is operational with respect to the partial information, and then it is applied to examples in order to test the generated candidates for an operator's definition. In the GPP process, once a candidate definition survives this empirical prune, it is passed on to a theorem prover for formal verification. We describe the application of this methodology to the (re)discovery of the definition of multiplication for Conway numbers, a discovery which is difficult for human mathematicians. We successfully model this discovery process utilizing information which was reasonably available at the time of Conway's original discovery. As part of this discovery process, we reduce the size of the search space from a computationally intractable size to 3468 elements.
Computational methods in drug discovery
Leelananda, Sumudu P
2016-01-01
The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341
Causal discovery in the geosciences-Using synthetic data to learn how to interpret results
NASA Astrophysics Data System (ADS)
Ebert-Uphoff, Imme; Deng, Yi
2017-02-01
Causal discovery algorithms based on probabilistic graphical models have recently emerged in geoscience applications for the identification and visualization of dynamical processes. The key idea is to learn the structure of a graphical model from observed spatio-temporal data, thus finding pathways of interactions in the observed physical system. Studying those pathways allows geoscientists to learn subtle details about the underlying dynamical mechanisms governing our planet. Initial studies using this approach on real-world atmospheric data have shown great potential for scientific discovery. However, in these initial studies no ground truth was available, so that the resulting graphs have been evaluated only by whether a domain expert thinks they seemed physically plausible. The lack of ground truth is a typical problem when using causal discovery in the geosciences. Furthermore, while most of the connections found by this method match domain knowledge, we encountered one type of connection for which no explanation was found. To address both of these issues we developed a simulation framework that generates synthetic data of typical atmospheric processes (advection and diffusion). Applying the causal discovery algorithm to the synthetic data allowed us (1) to develop a better understanding of how these physical processes appear in the resulting connectivity graphs, and thus how to better interpret such connectivity graphs when obtained from real-world data; (2) to solve the mystery of the previously unexplained connections.
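The core idea, inferring directed links from lagged dependence in time-series data, can be sketched with a toy example. The data-generating process, coefficients, and lagged-correlation test below are illustrative assumptions only, not the algorithm or atmospheric data used in the study:

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(0)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
# Y is driven by X with a one-step lag (a crude stand-in for advection).
y = [0.0] + [0.9 * x[t - 1] + 0.3 * random.gauss(0, 1) for t in range(1, n)]

# A true edge X -> Y shows strong correlation between X(t-1) and Y(t),
# while the reverse lag is near zero, suggesting the arrow's direction.
x_to_y = pearson(x[:-1], y[1:])
y_to_x = pearson(y[:-1], x[1:])
print(f"corr(X(t-1), Y(t)) = {x_to_y:.2f}")  # strong: X drives Y
print(f"corr(Y(t-1), X(t)) = {y_to_x:.2f}")  # near zero: no reverse edge
```

Real causal discovery algorithms additionally condition on other variables to rule out indirect pathways, which is what makes the resulting graphs non-trivial to interpret without ground truth.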
Perspectives on bioanalytical mass spectrometry and automation in drug discovery.
Janiszewski, John S; Liston, Theodore E; Cole, Mark J
2008-11-01
The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.
Applying Knowledge Discovery in Databases in Public Health Data Set: Challenges and Concerns
Volrathongchia, Kanittha
2003-01-01
In attempting to apply Knowledge Discovery in Databases (KDD) to generate a predictive model from a health care dataset that is currently available to the public, the first step is to pre-process the data to overcome the challenges of missing data, redundant observations, and records containing inaccurate data. This study will demonstrate how to use simple pre-processing methods to improve the quality of input data. PMID:14728545
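A minimal sketch of such pre-processing, using invented toy records rather than the actual public-health dataset (field names, value ranges, and the mean-imputation choice are assumptions for illustration):

```python
# Hypothetical records: (patient_id, age, systolic_bp); None marks missing data.
records = [
    ("p1", 34, 118),
    ("p1", 34, 118),   # redundant duplicate observation
    ("p2", None, 142), # missing age
    ("p3", 51, 999),   # physiologically implausible reading
    ("p4", 67, 131),
]

# Step 1: drop exact duplicates while preserving record order.
seen, deduped = set(), []
for rec in records:
    if rec not in seen:
        seen.add(rec)
        deduped.append(rec)

# Step 2: impute missing ages with the mean of the observed ages.
ages = [a for _, a, _ in deduped if a is not None]
mean_age = sum(ages) / len(ages)
imputed = [(pid, a if a is not None else mean_age, bp) for pid, a, bp in deduped]

# Step 3: drop records whose values fall outside a simple validity range.
clean = [(pid, a, bp) for pid, a, bp in imputed if 60 <= bp <= 250]

print(len(records), "->", len(clean), "records")  # 5 -> 3 records
```

Even these three simple steps (deduplication, imputation, range checks) noticeably improve the quality of the input data before any model is fit.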
Raffa, Robert B.; Raffa, Kenneth F.
2011-01-01
Introduction There is a pervasive and growing concern about the small number of new pharmaceutical agents. There are many proposed explanations for this trend that do not involve the drug-discovery process per se, but the discovery process itself has also come under scrutiny. If the current paradigms are indeed not working, where are novel ideas to come from? Perhaps it is time to look to novel sources. Areas covered The receptor-signaling and 2nd-messenger transduction processes present in insects are quite similar to those in mammals (involving G proteins, ion channels, etc.). However, a review of these systems reveals an unprecedented degree of high potency and receptor selectivity to an extent greater than that modeled in most current drug-discovery approaches. Expert opinion A better understanding of insect receptor pharmacology could stimulate novel theoretical and practical ideas in mammalian pharmacology (drug discovery) and, conversely, the application of pharmacology and medicinal chemistry principles could stimulate novel advances in entomology (safer and more targeted control of pest species). PMID:21984882
Flexible End2End Workflow Automation of Hit-Discovery Research.
Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin
2014-08-01
The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. The combination of all relevant subprocesses, whether automated or performed manually, and regardless of the organizational unit in which they run, results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance, as well as preparatory activities and analyses of experiments. Furthermore, connecting the control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with modern methods of business process management (BPM). The approach is based on a new standardization of the process-modeling notation, Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines act together with manifold modern methods, technologies, and a wide range of automated instruments for the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of a high-throughput screening of previously synthesized compound libraries. © 2014 Society for Laboratory Automation and Screening.
Daily life activity routine discovery in hemiparetic rehabilitation patients using topic models.
Seiter, J; Derungs, A; Schuster-Amft, C; Amft, O; Tröster, G
2015-01-01
Monitoring natural behavior and activity routines of hemiparetic rehabilitation patients across the day can provide valuable progress information for therapists and patients and contribute to an optimized rehabilitation process. In particular, continuous patient monitoring could add type, frequency and duration of daily life activity routines and hence complement standard clinical scores that are assessed for particular tasks only. Machine learning methods have been applied to infer activity routines from sensor data. However, supervised methods require activity annotations to build recognition models and thus require extensive patient supervision. Discovery methods, including topic models, could provide patient routine information and deal with variability in activity and movement performance across patients. Topic models have been used to discover characteristic activity routine patterns of healthy individuals using activity primitives recognized from supervised sensor data. Yet, the applicability of topic models to hemiparetic rehabilitation patients, and techniques to derive activity primitives without supervision, need to be addressed. We investigate (1) whether a topic model-based activity routine discovery framework can infer activity routines of rehabilitation patients from wearable motion sensor data, and (2) how our topic model-based activity routine discovery performs with a rule-based versus a clustering-based activity vocabulary. We analyze the activity routine discovery in a dataset recorded with 11 hemiparetic rehabilitation patients during up to ten full recording days per individual in an ambulatory daycare rehabilitation center, using wearable motion sensors attached to both wrists and the non-affected thigh. We introduce and compare rule-based and clustering-based activity vocabularies that map statistical and frequency-domain acceleration features to activity words.
Activity words were used for activity routine pattern discovery using topic models based on Latent Dirichlet Allocation. Discovered activity routine patterns were then mapped to six categorized activity routines. Using the rule-based approach, activity routines could be discovered with an average accuracy of 76% across all patients. The rule-based approach outperformed clustering by 10% and showed less confusions for predicted activity routines. Topic models are suitable to discover daily life activity routines in hemiparetic rehabilitation patients without trained classifiers and activity annotations. Activity routines show characteristic patterns regarding activity primitives including body and extremity postures and movement. A patient-independent rule set can be derived. Including expert knowledge supports successful activity routine discovery over completely data-driven clustering.
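A rule-based activity vocabulary of the kind described can be sketched as threshold rules over per-window acceleration features. The features, thresholds, and word names below are hypothetical, not those derived in the study:

```python
# Map per-window features (mean and standard deviation of acceleration
# magnitude, in g) to discrete activity words. Thresholds are illustrative.
def activity_word(mean_acc, std_acc):
    if std_acc < 0.05:
        return "rest"            # barely any movement in the window
    if std_acc < 0.3:
        return "light_motion"    # e.g. gestures, slow arm use
    return "vigorous_motion"     # e.g. walking, exercising

# Hypothetical feature windows from one sensor.
windows = [(1.0, 0.02), (1.1, 0.15), (1.3, 0.6), (1.0, 0.03)]
words = [activity_word(m, s) for m, s in windows]
print(words)  # ['rest', 'light_motion', 'vigorous_motion', 'rest']
```

Sequences of such words over a day form the "documents" that a topic model such as Latent Dirichlet Allocation can then decompose into recurring activity routine patterns.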
Modelling and enhanced molecular dynamics to steer structure-based drug discovery.
Kalyaanamoorthy, Subha; Chen, Yi-Ping Phoebe
2014-05-01
The ever-increasing gap between the availabilities of the genome sequences and the crystal structures of proteins remains one of the significant challenges to the modern drug discovery efforts. The knowledge of structure-dynamics-functionalities of proteins is important in order to understand several key aspects of structure-based drug discovery, such as drug-protein interactions, drug binding and unbinding mechanisms and protein-protein interactions. This review presents a brief overview on the different state of the art computational approaches that are applied for protein structure modelling and molecular dynamics simulations of biological systems. We give an essence of how different enhanced sampling molecular dynamics approaches, together with regular molecular dynamics methods, assist in steering the structure based drug discovery processes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Development of Scientific Approach Based on Discovery Learning Module
NASA Astrophysics Data System (ADS)
Ellizar, E.; Hardeli, H.; Beltris, S.; Suharni, R.
2018-04-01
Scientific Approach is a learning process designed to make students actively construct their own knowledge through the stages of the scientific method. The scientific approach in the learning process can be implemented through learning modules. One such learning model is discovery-based learning. Discovery learning is a learning model for finding valuable things in learning through various activities, such as observation, experience, and reasoning. In fact, students' activity in constructing their own knowledge was not optimal, because the available learning modules were not in line with the scientific approach. The purpose of this study was to develop a scientific-approach, discovery-based learning module on acids and bases and on electrolyte and non-electrolyte solutions. The development of these chemistry modules used the Plomp model with three main stages: preliminary research, the prototyping stage, and the assessment stage. The subjects of this research were 10th and 11th grade senior high school students (SMAN 2 Padang). Validity was assessed by expert Chemistry lecturers and teachers. Practicality of the modules was tested through a questionnaire. Effectiveness was tested through an experimental procedure comparing student achievement between experimental and control groups. Based on the findings, it can be concluded that the developed scientific-approach, discovery-based learning module significantly improves students' learning on acid-base and electrolyte topics. The data analysis indicated that the chemistry module was valid in content, construct, and presentation. The module also has a good practicality level and accords with the available time. It was also effective, because it helped students understand the learning material, as shown by the students' learning results.
Based on these results, it can be concluded that the chemistry modules based on discovery learning and the scientific approach, covering electrolyte and non-electrolyte solutions as well as acids and bases, were valid, practical, and effective for 10th and 11th grade senior high school students.
De Benedetti, Pier G; Fanelli, Francesca
2018-03-21
Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
Computational biology for cardiovascular biomarker discovery.
Azuaje, Francisco; Devaux, Yvan; Wagner, Daniel
2009-07-01
Computational biology is essential in the process of translating biological knowledge into clinical practice, as well as in the understanding of biological phenomena based on the resources and technologies originating from the clinical environment. One such key contribution of computational biology is the discovery of biomarkers for predicting clinical outcomes using 'omic' information. This process involves the predictive modelling and integration of different types of data and knowledge for screening, diagnostic or prognostic purposes. Moreover, this requires the design and combination of different methodologies based on statistical analysis and machine learning. This article introduces key computational approaches and applications to biomarker discovery based on different types of 'omic' data. Although we emphasize applications in cardiovascular research, the computational requirements and advances discussed here are also relevant to other domains. We will start by introducing some of the contributions of computational biology to translational research, followed by an overview of methods and technologies used for the identification of biomarkers with predictive or classification value. The main types of 'omic' approaches to biomarker discovery will be presented with specific examples from cardiovascular research. This will include a review of computational methodologies for single-source and integrative data applications. Major computational methods for model evaluation will be described together with recommendations for reporting models and results. We will present recent advances in cardiovascular biomarker discovery based on the combination of gene expression and functional network analyses. The review will conclude with a discussion of key challenges for computational biology, including perspectives from the biosciences and clinical areas.
Discovery and Change: How Children Redraft Their Narrative Writing.
ERIC Educational Resources Information Center
Booley, Heather A.
1984-01-01
Fourteen year olds were introduced to a process model for writing and revising fictional narratives. Drafts were analyzed for evidence that they actively worked on cognitive, stylistic, and affective aspects of their narratives. Eighteen of 32 pupils made extensive or significant changes influenced by the process model. (SK)
Vandervert, Larry
2015-01-01
Following in the vein of studies that concluded that music training resulted in plastic changes in Einstein's cerebral cortex, controlled research has shown that music training (1) enhances central executive attentional processes in working memory and (2) is of significant therapeutic value in neurological disorders. Within this framework of music training-induced enhancement of central executive attentional processes, the purpose of this article is to argue that: (1) The foundational basis of the central executive begins in infancy as attentional control during the establishment of working memory, (2) In accordance with Akshoomoff, Courchesne and Townsend's and Leggio and Molinari's cerebellar sequence detection and prediction models, the rigorous volitional-control demands of music training can enhance voluntary manipulation of information in thought and movement, (3) The music training-enhanced blending of cerebellar internal models in working memory can be experienced as intuition in scientific discovery (as Einstein often indicated) or, equally, as moments of therapeutic advancement toward goals in the development of voluntary control in neurological disorders, and (4) The blending of internal models as in (3) thus provides a mechanism by which music training enhances central executive processes in working memory that can lead to scientific discovery and improved therapeutic outcomes in neurological disorders. Within the framework of Leggio and Molinari's cerebellar sequence detection model, it is determined that intuitive steps forward that occur in both scientific discovery and during therapy in those with neurological disorders operate according to the same mechanism of adaptive error-driven blending of cerebellar internal models.
It is concluded that the entire framework of the central executive structure of working memory is a product of the cerebrocerebellar system which can, through the learning of internal models, incorporate the multi-dimensional rigor and volitional-control demands of music training and, thereby, enhance voluntary control. It is further concluded that this cerebrocerebellar view of the music training-induced enhancement of central executive control in working memory provides a needed mechanism to explain both the highest level of scientific discovery and the efficacy of music training in the remediation of neurological impairments.
A new approach to the rationale discovery of polymeric biomaterials
Kohn, Joachim; Welsh, William J.; Knight, Doyle
2007-01-01
This paper attempts to illustrate both the need for new approaches to biomaterials discovery as well as the significant promise inherent in the use of combinatorial and computational design strategies. The key observation of this Leading Opinion Paper is that the biomaterials community has been slow to embrace advanced biomaterials discovery tools such as combinatorial methods, high throughput experimentation, and computational modeling in spite of the significant promise shown by these discovery tools in materials science, medicinal chemistry and the pharmaceutical industry. It seems that the complexity of living cells and their interactions with biomaterials has been a conceptual as well as a practical barrier to the use of advanced discovery tools in biomaterials science. However, with the continued increase in computer power, the goal of predicting the biological response of cells in contact with biomaterials surfaces is within reach. Once combinatorial synthesis, high throughput experimentation, and computational modeling are integrated into the biomaterials discovery process, a significant acceleration is possible in the pace of development of improved medical implants, tissue regeneration scaffolds, and gene/drug delivery systems. PMID:17644176
NASA Astrophysics Data System (ADS)
Agrawal, Ankit; Choudhary, Alok
2016-05-01
Our ability to collect "big data" has greatly surpassed our capability to analyze it, underscoring the emergence of the fourth paradigm of science, which is data-driven discovery. The need for data informatics is also emphasized by the Materials Genome Initiative (MGI), further boosting the emerging field of materials informatics. In this article, we look at how data-driven techniques are playing a big role in deciphering processing-structure-property-performance relationships in materials, with illustrative examples of both forward models (property prediction) and inverse models (materials discovery). Such analytics can significantly reduce time-to-insight and accelerate cost-effective materials discovery, which is the goal of MGI.
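The forward/inverse pairing can be illustrated with a deliberately simple least-squares model. The processing variable, property values, and linear relationship below are invented for illustration; real materials informatics uses far richer features and models:

```python
# Hypothetical processing-property data: annealing temperature vs. hardness.
xs = [100, 150, 200, 250, 300]
ys = [210, 235, 260, 285, 310]

# Ordinary least-squares fit of a linear forward model y = intercept + slope*x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Forward model: predict the property for a new processing condition.
def predict(x):
    return intercept + slope * x

print(predict(220))  # -> 270.0

# Inverse model: which processing condition yields a target property?
target = 295.0
print((target - intercept) / slope)  # -> 270.0
```

In practice the inverse problem is rarely solvable in closed form; it is typically posed as an optimization or screening task over candidate materials, which is where the "discovery" in data-driven materials discovery lies.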
Medicinal chemistry inspired fragment-based drug discovery.
Lanter, James; Zhang, Xuqing; Sui, Zhihua
2011-01-01
Lead generation can be a very challenging phase of the drug discovery process. The two principal methods for this stage of research are blind screening and rational design. Among the rational or semirational design approaches, fragment-based drug discovery (FBDD) has emerged as a useful tool for the generation of lead structures. It is particularly powerful as a complement to high-throughput screening approaches when the latter failed to yield viable hits for further development. Engagement of medicinal chemists early in the process can accelerate the progression of FBDD efforts by incorporating drug-friendly properties in the earliest stages of the design process. Medium-chain acyl-CoA synthetase 2b and ketohexokinase are chosen as examples to illustrate the importance of close collaboration of medicinal chemists, crystallography, and modeling. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Nerita, S.; Maizeli, A.; Afza, A.
2017-09-01
The Process Evaluation and Learning Outcomes of Biology course discusses the evaluation process in learning and the application of designed and processed learning outcomes. Problems found during this course were that students had difficulty understanding the material and that no learning resources were available to guide them and foster independent study. It is therefore necessary to develop a learning resource that makes students think actively and make decisions under the lecturer's guidance. The purpose of this study was to produce a handout based on the guided discovery method that matches students' needs. The research used the 4-D model and was limited to the define phase, namely the analysis of student requirements. Data were obtained from a questionnaire and analyzed descriptively. The results showed that the average student requirement was 91.43%. It can be concluded that students need a handout based on the guided discovery method in the learning process.
Rho Chi lecture. Pharmaceutical sciences in the next millennium.
Triggle, D J
1999-02-01
Even a cursory survey of this article suggests that the pharmaceutical sciences are being rapidly transformed under the influence of both the new technologies and sciences and the economic imperatives. Of particular importance are scientific and technological advances that may greatly accelerate the critical process of discovery. The possibility of a drug discovery process built around the principles of directed diversity, self-reproduction, evolution, and self-targeting suggests a new paradigm of lead discovery, one based quite directly on the paradigms of molecular biology. Coupled with the principles of nanotechnology, we may contemplate miniature molecular machines containing directed drug factories, circulating the body and capable of self-targeting against defective cells and pathways -- the ultimate "drug delivery machine." However, science and technology are not the only factors that will transform the pharmaceutical sciences in the next century. The reductions in the costs of drug discovery made necessary by the rapidly increasing costs of the current drug discovery paradigms mean that efforts to decrease the discovery phase and to make drug development part of drug discovery will become increasingly important. This is likely to involve increasing numbers of "alliances," as well as the creation of pharmaceutical research cells -- highly mobile and entrepreneurial groups within or outside of a pharmaceutical company that are formed to carry out specific discovery processes. Some of these will be in the biotechnology industry, but an increasing number will be in universities. The linear process from basic science to applied technology that has been the Western model since Vannevar Bush's Science: The Endless Frontier has probably never been particularly linear and, in any event, is likely to be rapidly supplanted by models where science, scientific development, and technology are more intimately linked.
The pharmaceutical sciences have always been an example of use-directed basic research, but the relationships between the pharmaceutical industry, small and large, and the universities seems likely to become increasingly developed in the next century. This may serve as a significant catalyst for the continued transformation of universities into the "knowledge factories" of the 21st century. Regardless, we may expect to see major changes in the research organizational structure in the pharmaceutical sciences even as pharmaceutical companies enjoy record prosperity. And this is in anticipation of tough times to come.
ERIC Educational Resources Information Center
Yang, Xi; Chen, Jin
2017-01-01
Botanical gardens (BGs) are important agencies that enhance human knowledge and attitude towards flora conservation. By following free-choice learning model, we developed a "Discovery map" and distributed the map to visitors at the Xishuangbanna Tropical Botanical Garden in Yunnan, China. Visitors, who did and did not receive discovery…
NASA Astrophysics Data System (ADS)
Riandari, F.; Susanti, R.; Suratmi
2018-05-01
This study aimed to determine the influence of applying the discovery learning model on the higher order thinking skills of tenth grade students of Srijaya Negara senior high school Palembang on the animal kingdom subject matter. The research method was pre-experimental with a one-group pretest-posttest design, conducted at Srijaya Negara senior high school Palembang in the 2016/2017 academic year. The sample was the tenth grade natural science 2 class, selected by purposive sampling. Data were collected by (1) a written test, consisting of a pretest to determine initial ability and a posttest to determine students' higher order thinking skills after learning with the discovery learning model, and (2) a questionnaire sheet, aimed at investigating students' responses during the learning process with the discovery learning model. The t-test result indicated a significant increase in students' higher order thinking skills. Thus, it can be concluded that applying the discovery learning model had a significant positive effect on the higher order thinking skills of students of Srijaya Negara senior high school Palembang on the animal kingdom subject matter.
Analysis student self efficacy in terms of using Discovery Learning model with SAVI approach
NASA Astrophysics Data System (ADS)
Sahara, Rifki; Mardiyana, S., Dewi Retno Sari
2017-12-01
Students are often unable to demonstrate academic achievement that matches their abilities. One reason is that they often feel unsure that they are capable of completing the tasks assigned to them. For students, such belief is necessary; this belief is termed self efficacy. Self efficacy is not innate or a permanent quality of an individual, but the result of cognitive processes, meaning that one's self efficacy can be stimulated through learning activities. Self efficacy can be developed and enhanced by a learning model that stimulates students to build confidence in their capabilities. One such model is the Discovery Learning model with the SAVI approach, a learning model that involves the active participation of students in exploring and discovering their own knowledge and using it in problem solving by engaging all the sensory modalities they have. This naturalistic qualitative research aims to analyze student self efficacy in terms of the use of the Discovery Learning model with the SAVI approach. The subjects of this study are 30 students, with a focus on eight students having high, medium, and low self efficacy, obtained through purposive sampling. The data analysis used three stages: data reduction, data display, and conclusion drawing. Based on the results, it was concluded that the dimension of self efficacy appearing most dominantly in learning with the Discovery Learning model and SAVI approach is the magnitude dimension.
Advanced Cell Culture Techniques for Cancer Drug Discovery
Lovitt, Carrie J.; Shelper, Todd B.; Avery, Vicky M.
2014-01-01
Human cancer cell lines are an integral part of drug discovery practices. However, modeling the complexity of cancer utilizing these cell lines on standard plastic substrata does not accurately represent the tumor microenvironment. Research into developing advanced tumor cell culture models in a three-dimensional (3D) architecture that more precisely characterizes the disease state has been undertaken by a number of laboratories around the world. These 3D cell culture models are particularly beneficial for investigating mechanistic processes and drug resistance in tumor cells. In addition, a range of molecular mechanisms deconstructed by studying cancer cells in 3D models suggests that tumor cells cultured in two-dimensional monolayer conditions do not respond to cancer therapeutics/compounds in a similar manner. Recent studies have demonstrated the potential of utilizing 3D cell culture models in drug discovery programs; however, it is evident that further research is required for the development of more complex models that incorporate the majority of the cellular and physical properties of a tumor. PMID:24887773
Modeling congenital disease and inborn errors of development in Drosophila melanogaster
Moulton, Matthew J.; Letsou, Anthea
2016-01-01
ABSTRACT Fly models that faithfully recapitulate various aspects of human disease and human health-related biology are being used for research into disease diagnosis and prevention. Established and new genetic strategies in Drosophila have yielded numerous substantial successes in modeling congenital disorders or inborn errors of human development, as well as neurodegenerative disease and cancer. Moreover, although our ability to generate sequence datasets continues to outpace our ability to analyze these datasets, the development of high-throughput analysis platforms in Drosophila has provided access through the bottleneck in the identification of disease gene candidates. In this Review, we describe both the traditional and newer methods that are facilitating the incorporation of Drosophila into the human disease discovery process, with a focus on the models that have enhanced our understanding of human developmental disorders and congenital disease. Enviable features of the Drosophila experimental system, which make it particularly useful in facilitating the much anticipated move from genotype to phenotype (understanding and predicting phenotypes directly from the primary DNA sequence), include its genetic tractability, the low cost for high-throughput discovery, and a genome and underlying biology that are highly evolutionarily conserved. In embracing the fly in the human disease-gene discovery process, we can expect to speed up and reduce the cost of this process, allowing experimental scales that are not feasible and/or would be too costly in higher eukaryotes. PMID:26935104
Zebrafish xenograft models of cancer and metastasis for drug discovery.
Brown, Hannah K; Schiavone, Kristina; Tazzyman, Simon; Heymann, Dominique; Chico, Timothy Ja
2017-04-01
Patients with metastatic cancer suffer the highest rate of cancer-related death, but existing animal models of metastasis have disadvantages that limit our ability to understand this process. The zebrafish is increasingly used for cancer modelling, particularly xenografting of human cancer cell lines, and drug discovery, and may provide novel scientific and therapeutic insights. However, this model system remains underexploited. Areas covered: The authors discuss the advantages and disadvantages of the zebrafish xenograft model for the study of cancer, metastasis and drug discovery. They summarise previous work investigating the metastatic cascade, such as tumour-induced angiogenesis, intravasation, extravasation, dissemination and homing, invasion at secondary sites, assessing metastatic potential and evaluation of cancer stem cells in zebrafish. Expert opinion: The practical advantages of zebrafish for basic biological study and drug discovery are indisputable. However, their ability to sufficiently reproduce and predict the behaviour of human cancer and metastasis remains unproven. For this to be resolved, novel mechanisms must be discovered in zebrafish and subsequently validated in humans, and therapeutic interventions that modulate cancer favourably in zebrafish must translate successfully to human clinical studies. In the meantime, more work is required to establish the most informative methods in zebrafish.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmes, Steve; Alber, Russ; Asner, David
2013-06-23
Particle physics has made enormous progress in understanding the nature of matter and forces at a fundamental level and has unlocked many mysteries of our world. The development of the Standard Model of particle physics has been a magnificent achievement of the field. Many deep and important questions have been answered and yet many mysteries remain. The discovery of neutrino oscillations, discrepancies in some precision measurements of Standard-Model processes, observation of matter-antimatter asymmetry, the evidence for the existence of dark matter and dark energy, all point to new physics beyond the Standard Model. The pivotal developments of our field, including the latest discovery of the Higgs Boson, have progressed within three interlocking frontiers of research – the Energy, Intensity and Cosmic frontiers – where discoveries and insights in one frontier powerfully advance the other frontiers as well.
Kell, Douglas B
2012-01-01
A considerable number of areas of bioscience, including gene and drug discovery, metabolic engineering for the biotechnological improvement of organisms, and the processes of natural and directed evolution, are best viewed in terms of a ‘landscape’ representing a large search space of possible solutions or experiments populated by a considerably smaller number of actual solutions that then emerge. This is what makes these problems ‘hard’, but as such these are to be seen as combinatorial optimisation problems that are best attacked by heuristic methods known from that field. Such landscapes, which may also represent or include multiple objectives, are effectively modelled in silico, with modern active learning algorithms such as those based on Darwinian evolution providing guidance, using existing knowledge, as to what is the ‘best’ experiment to do next. An awareness, and the application, of these methods can thereby enhance the scientific discovery process considerably. This analysis fits comfortably with an emerging epistemology that sees scientific reasoning, the search for solutions, and scientific discovery as Bayesian processes. PMID:22252984
Four disruptive strategies for removing drug discovery bottlenecks.
Ekins, Sean; Waller, Chris L; Bradley, Mary P; Clark, Alex M; Williams, Antony J
2013-03-01
Drug discovery is shifting focus from industry to outside partners and, in the process, creating new bottlenecks. Technologies like high throughput screening (HTS) have moved to a larger number of academic and institutional laboratories in the USA, with little coordination or consideration of the outputs and creating a translational gap. Although there have been collaborative public-private partnerships in Europe to share pharmaceutical data, the USA has seemingly lagged behind and this may hold it back. Sharing precompetitive data and models may accelerate discovery across the board, while finding the best collaborators, mining social media and mobile approaches to open drug discovery should be evaluated in our efforts to remove drug discovery bottlenecks. We describe four strategies to rectify the current unsustainable situation. Copyright © 2012 Elsevier Ltd. All rights reserved.
UNDERSTANDING X-RAY STARS:. The Discovery of Binary X-ray Sources
NASA Astrophysics Data System (ADS)
Schreier, E. J.; Tananbaum, H.
2000-09-01
The discovery of binary X-ray sources with UHURU introduced many new concepts to astronomy. It provided the canonical model which explained X-ray emission from a large class of galactic X-ray sources: it confirmed the existence of collapsed objects as the source of intense X-ray emission; showed that such collapsed objects existed in binary systems, with mass accretion as the energy source for the X-ray emission; and provided compelling evidence for the existence of black holes. This model also provided the basis for explaining the power source of AGNs and QSOs. The process of discovery and interpretation also established X-ray astronomy as an essential sub-discipline of astronomy, beginning its incorporation into the mainstream of astronomy.
An algorithm of discovering signatures from DNA databases on a computer cluster.
Lee, Hsiao Ping; Sheu, Tzu-Fang
2014-10-05
Signatures are short sequences that are unique and not similar to any other sequence in a database, and they can be used as the basis for identifying different species. Although several signature discovery algorithms have been proposed in the past, these algorithms require the entire database to be loaded into memory, restricting the amount of data they can process and making them unable to handle databases with large amounts of data. Those algorithms also use sequential models and have slower discovery speeds, leaving room for efficiency improvements. In this research, we introduce a divide-and-conquer strategy to signature discovery and propose a parallel signature discovery algorithm for a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the existing algorithms' inability to process large databases, and it uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, that the existing algorithms could not. The proposed algorithm is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large-scale database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
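The divide-and-conquer idea can be illustrated with a toy sketch: split the database into chunks that each fit in memory, count k-mer occurrences per chunk, and merge the partial counts to find k-mers occurring in exactly one sequence. This is an illustrative simplification only; the paper's algorithm also enforces a similarity tolerance and runs the chunks in parallel on a cluster:

```python
from collections import Counter

def kmers(seq, k):
    """All k-length substrings of a sequence."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def chunk_counts(seqs, k):
    """Count, within one database chunk, how many sequences contain each k-mer."""
    c = Counter()
    for seq in seqs:
        c.update(set(kmers(seq, k)))  # count each k-mer once per sequence
    return c

def discover_signatures(database, k, n_chunks=4):
    """Divide the database into chunks, count per chunk, then merge.

    A k-mer is a signature candidate here if it occurs in exactly one
    sequence overall. Each chunk can be processed independently (and in
    parallel); merging the partial Counters is cheap."""
    chunks = [database[i::n_chunks] for i in range(n_chunks)]
    total = Counter()
    for chunk in chunks:
        total += chunk_counts(chunk, k)
    return {kmer for kmer, cnt in total.items() if cnt == 1}
```

For example, `discover_signatures(["AAAA", "AAAC"], 3)` keeps only `"AAC"`, because `"AAA"` appears in both sequences.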
The development of high-content screening (HCS) technology and its importance to drug discovery.
Fraietta, Ivan; Gasparri, Fabio
2016-01-01
High-content screening (HCS) was introduced about twenty years ago as a promising analytical approach to facilitate some critical aspects of drug discovery. Its application has spread progressively within the pharmaceutical industry and academia to the point that it today represents a fundamental tool in supporting drug discovery and development. Here, the authors review some of the significant progress in the HCS field in terms of biological models and assay readouts. They highlight the importance of high-content screening in drug discovery, as testified by its numerous applications in a variety of therapeutic areas: oncology, infective diseases, cardiovascular and neurodegenerative diseases. They also dissect the role of HCS technology in different phases of the drug discovery pipeline: target identification, primary compound screening, secondary assays, mechanism of action studies and in vitro toxicology. Recent advances in cellular assay technologies, such as the introduction of three-dimensional (3D) cultures, induced pluripotent stem cells (iPSCs) and genome editing technologies (e.g., CRISPR/Cas9), have tremendously expanded the potential of high-content assays to contribute to the drug discovery process. Increasingly predictive cellular models and readouts, together with the development of more sophisticated and affordable HCS readers, will further consolidate the role of HCS technology in drug discovery.
Mining manufacturing data for discovery of high productivity process characteristics.
Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou
2010-06-01
Modern manufacturing facilities for bioproducts are highly automated with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource to comprehend the complex characteristics of bioprocesses and enhance production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate a real-time decision making process and thereby improve the robustness of large scale bioprocesses. 2010 Elsevier B.V. All rights reserved.
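The kernel-based regression approach can be sketched in miniature. The sketch below substitutes RBF kernel ridge regression for the paper's support vector regression (the two share the kernel machinery; only the loss differs) and uses made-up numbers rather than bioreactor data:

```python
from math import exp

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_kernel_model(X, y, lam=1e-3):
    """Kernel ridge: solve (K + lam*I) alpha = y, predict f(x) = sum_i alpha_i k(x_i, x)."""
    K = [[rbf(xi, xj) + (lam if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    alpha = solve(K, y)
    return lambda x: sum(a * rbf(xi, x) for a, xi in zip(alpha, X))

# Toy 1-D "process parameter" -> "outcome" data.
X = [[0.0], [1.0], [2.0]]
y = [0.0, 1.0, 4.0]
predict = fit_kernel_model(X, y)
```

With a small ridge term the model nearly interpolates the training points; in practice each `x` would be a long vector of temporal process parameters.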
ERIC Educational Resources Information Center
Lin, Chi-Shiou; Eschenfelder, Kristin R.
2010-01-01
This paper reports on a study of librarian initiated publications discovery (LIPD) in U.S. state digital depository programs using the OCLC Digital Archive to preserve web-based government publications for permanent public access. This paper describes a model of LIPD processes based on empirical investigations of four OCLC DA-based digital…
2001-12-01
…Group 1999, Davenport and Prusak 1998). Although differences do exist, the four models are similar. In the amalgamated model, the phases of the KMLC… Phase 1, create, is the discovery and development of new knowledge (Despres and Chavel 1999, Gartner Group 1999). Phase 2, organize, involves… This generally entails modeling and analysis that results in one or more (re)designs for the process in question. The process, along with…
ADMET in silico modelling: towards prediction paradise?
van de Waterbeemd, Han; Gifford, Eric
2003-03-01
Following studies in the late 1990s that indicated that poor pharmacokinetics and toxicity were important causes of costly late-stage failures in drug development, it has become widely appreciated that these areas should be considered as early as possible in the drug discovery process. However, in recent years, combinatorial chemistry and high-throughput screening have significantly increased the number of compounds for which early data on absorption, distribution, metabolism, excretion (ADME) and toxicity (T) are needed, which has in turn driven the development of a variety of medium and high-throughput in vitro ADMET screens. Here, we describe how in silico approaches will further increase our ability to predict and model the most relevant pharmacokinetic, metabolic and toxicity endpoints, thereby accelerating the drug discovery process.
Drusano, G L
2016-12-15
Because of our current crisis of resistance, particularly in nosocomial pathogens, the discovery and development of new antimicrobial agents has become a societal imperative. Changes in regulatory pathways by the Food and Drug Administration and the European Medicines Agency place great emphasis on the use of preclinical models coupled with pharmacokinetic/pharmacodynamic analysis to rapidly and safely move new molecular entities with activity against multi-resistant pathogens through the approval process and into the treatment of patients. In this manuscript, the use of the murine pneumonia system and the Hollow Fiber Infection Model is described, and the way in which the mathematical analysis of the data arising from these models contributes to the robust choice of dose and schedule for Phase 3 clinical trials is shown. These data and their proper analysis act to de-risk the conduct of Phase 3 trials for anti-infective agents. These trials are the most expensive part of drug development. Further, given the seriousness of the infections treated, they represent the riskiest element for patients. Consequently, these preclinical model systems and their proper analysis have become a central part of accelerated anti-infective development. A final contention of this manuscript is that it is possible to embed these models, and in particular the Hollow Fiber Infection Model, earlier in the drug discovery/development process. Examples of 'dynamic driver switching' and the impact of this phenomenon on clinical trial outcome are provided. Identifying dynamic drivers early in drug discovery may lead to improved decision making in the lead optimization process, resulting in the best molecules transitioning to clinical development. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ganzert, Steven; Guttmann, Josef; Steinmann, Daniel; Kramer, Stefan
Lung protective ventilation strategies reduce the risk of ventilator associated lung injury. To develop such strategies, knowledge about the mechanical properties of the mechanically ventilated human lung is essential. This study was designed to develop an equation discovery system to identify mathematical models of the respiratory system in time-series data obtained from mechanically ventilated patients. Two techniques were combined: (i) the use of declarative bias to reduce search space complexity while inherently supporting the processing of background knowledge, and (ii) a newly developed heuristic for traversing the hypothesis space with a greedy, randomized strategy analogous to the GSAT algorithm. In 96.8% of all runs the equation discovery system was capable of detecting the well-established equation of motion model of the respiratory system in the provided data. We see the potential of this semi-automatic approach to detect more complex mathematical descriptions of the respiratory system from respiratory data.
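The equation of motion referred to here is commonly written as Paw(t) = R·V̇(t) + E·V(t) + P0, relating airway pressure to flow and volume via resistance R, elastance E, and an offset P0. As a minimal sketch (not the paper's system, which searches a space of candidate equations), the parameters can be recovered from synthetic flow/volume/pressure samples by ordinary least squares:

```python
def solve3(A, b):
    """Solve a 3x3 linear system via Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for i in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = b[r]
        xs.append(det(M) / d)
    return xs

def fit_equation_of_motion(flow, volume, paw):
    """Least-squares fit of Paw = R*flow + E*volume + P0 via normal equations."""
    rows = [[f, v, 1.0] for f, v in zip(flow, volume)]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * p for r, p in zip(rows, paw)) for i in range(3)]
    return solve3(AtA, Atb)  # [R, E, P0]

# Synthetic samples generated with R=10, E=25, P0=5 (hypothetical units).
flow = [0.1, 0.2, 0.5, 0.3, 0.0]
volume = [0.05, 0.1, 0.2, 0.4, 0.5]
paw = [10.0 * f + 25.0 * v + 5.0 for f, v in zip(flow, volume)]
R, E, P0 = fit_equation_of_motion(flow, volume, paw)
```

An equation discovery system goes further than this fit: it must also select the functional form itself from candidate models, which is where the declarative bias and search heuristic come in.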
Providing data science support for systems pharmacology and its implications to drug discovery.
Hart, Thomas; Xie, Lei
2016-01-01
The conventional one-drug-one-target-one-disease drug discovery process has been less successful in tracking multi-genic, multi-faceted complex diseases. Systems pharmacology has emerged as a new discipline to tackle the current challenges in drug discovery. The goal of systems pharmacology is to transform huge, heterogeneous, and dynamic biological and clinical data into interpretable and actionable mechanistic models for decision making in drug discovery and patient treatment. Thus, big data technology and data science will play an essential role in systems pharmacology. This paper critically reviews the impact of three fundamental concepts of data science on systems pharmacology: similarity inference, overfitting avoidance, and disentangling causality from correlation. The authors then discuss recent advances and future directions in applying the three concepts of data science to drug discovery, with a focus on proteome-wide context-specific quantitative drug target deconvolution and personalized adverse drug reaction prediction. Data science will facilitate reducing the complexity of systems pharmacology modeling, detecting hidden correlations between complex data sets, and distinguishing causation from correlation. The power of data science can only be fully realized when integrated with mechanism-based multi-scale modeling that explicitly takes into account the hierarchical organization of biological systems from nucleic acid to proteins, to molecular interaction networks, to cells, to tissues, to patients, and to populations.
Eyal-Altman, Noah; Last, Mark; Rubin, Eitan
2017-01-17
Numerous publications attempt to predict cancer survival outcome from gene expression data using machine-learning methods. A direct comparison of these works is challenging for the following reasons: (1) inconsistent measures used to evaluate the performance of different models, and (2) incomplete specification of critical stages in the process of knowledge discovery. There is a need for a platform that would allow researchers to replicate previous works and to test the impact of changes in the knowledge discovery process on the accuracy of the induced models. We developed the PCM-SABRE platform, which supports the entire knowledge discovery process for cancer outcome analysis. PCM-SABRE was developed using KNIME. By using PCM-SABRE to reproduce the results of previously published works on breast cancer survival, we define a baseline for evaluating future attempts to predict cancer outcome with machine learning. We used PCM-SABRE to replicate previous works describing predictive models of breast cancer recurrence, and tested the performance of all possible combinations of the feature selection methods and data mining algorithms that were used in either of the works. We reconstructed the work of Chou et al., observing similar trends: superior performance of the Probabilistic Neural Network (PNN) and logistic regression (LR) algorithms, and an inconclusive impact of feature pre-selection with the decision tree algorithm on subsequent analysis. PCM-SABRE is a software tool that provides an intuitive environment for rapid development of predictive models in cancer precision medicine.
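The combinatorial evaluation that such a platform automates, trying every feature selection method with every data mining algorithm, can be sketched in a few lines. The components below (a variance filter and a nearest-centroid classifier, scored on training data for brevity) are hypothetical stand-ins for the methods in the replicated works; only the grid structure is the point:

```python
from statistics import mean, pvariance

def top_variance(X, _y, k):
    """Filter-style selector: keep indices of the k highest-variance features."""
    nfeat = len(X[0])
    var = [pvariance([row[j] for row in X]) for j in range(nfeat)]
    return sorted(range(nfeat), key=lambda j: -var[j])[:k]

def centroid_classifier(X, y):
    """Train a nearest-centroid classifier; return a predict function."""
    labels = sorted(set(y))
    cent = {c: [mean(row[j] for row, lab in zip(X, y) if lab == c)
                for j in range(len(X[0]))] for c in labels}
    def predict(row):
        return min(labels,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(row, cent[c])))
    return predict

def evaluate_grid(X, y, selectors, learners, k=2):
    """Score every (selector, learner) combination on the data."""
    results = {}
    for sname, sel in selectors.items():
        feats = sel(X, y, k)
        Xs = [[row[j] for j in feats] for row in X]
        for lname, learn in learners.items():
            predict = learn(Xs, y)
            acc = mean(1.0 if predict(r) == lab else 0.0
                       for r, lab in zip(Xs, y))
            results[(sname, lname)] = acc
    return results

# Toy expression-like data: two informative features, one constant feature.
X = [[0, 0, 5], [0, 1, 5], [5, 5, 5], [5, 4, 5]]
y = [0, 0, 1, 1]
results = evaluate_grid(X, y,
                        {"variance": top_variance},
                        {"centroid": centroid_classifier})
```

A real pipeline would replace training accuracy with cross-validated performance, which is exactly the kind of evaluation choice whose inconsistency across publications motivates the platform.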
From Visual Exploration to Storytelling and Back Again.
Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M
2016-06-01
The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).
2012-01-01
Computational approaches to generate hypotheses from biomedical literature have been studied intensively in recent years. Nevertheless, it still remains a challenge to automatically discover novel, cross-silo biomedical hypotheses from large-scale literature repositories. In order to address this challenge, we first model a biomedical literature repository as a comprehensive network of biomedical concepts and formulate hypotheses generation as a process of link discovery on the concept network. We extract the relevant information from the biomedical literature corpus and generate a concept network and concept-author map on a cluster using the Map-Reduce framework. We extract a set of heterogeneous features such as random walk based features, neighborhood features and common author features. The potential number of links to consider for the possibility of link discovery is large in our concept network, and to address this scalability problem, the features from the concept network are extracted using a cluster with the Map-Reduce framework. We further model link discovery as a classification problem carried out on a training data set automatically extracted from two network snapshots taken in two consecutive time periods. A set of heterogeneous features, which cover both topological and semantic features derived from the concept network, have been studied with respect to their impacts on the accuracy of the proposed supervised link discovery process. A case study of hypotheses generation based on the proposed method has been presented in the paper. PMID:22759614
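The snapshot-based construction of training data for link discovery can be sketched as follows. The features here (common neighbors, Jaccard) are a small illustrative subset of the topological features described, and the toy concept network is hypothetical:

```python
from itertools import combinations

def neighbors(edges):
    """Adjacency sets for an undirected concept network."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def link_features(adj, u, v):
    """Simple topological features for a candidate concept pair."""
    cn = adj.get(u, set()) & adj.get(v, set())
    union = adj.get(u, set()) | adj.get(v, set())
    return {"common_neighbors": len(cn),
            "jaccard": len(cn) / len(union) if union else 0.0}

def training_set(edges_t1, edges_t2):
    """Label candidate pairs from snapshot t1 by whether a link exists by t2."""
    adj = neighbors(edges_t1)
    existing = {frozenset(e) for e in edges_t1}
    future = {frozenset(e) for e in edges_t2}
    rows = []
    for u, v in combinations(sorted(adj), 2):
        if frozenset((u, v)) in existing:
            continue  # only non-edges at t1 are prediction candidates
        rows.append(((u, v), link_features(adj, u, v),
                     frozenset((u, v)) in future))
    return rows

# Toy snapshots: the link A-C appears between t1 and t2 (positive example).
rows = training_set([("A", "B"), ("B", "C"), ("C", "D")],
                    [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")])
```

The labeled rows would then be fed to any supervised classifier; at literature scale, the feature extraction step is what the Map-Reduce cluster parallelizes.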
Drug discovery in prostate cancer mouse models.
Valkenburg, Kenneth C; Pienta, Kenneth J
2015-01-01
The mouse is an important, though imperfect, organism with which to model human disease and to discover and test novel drugs in a preclinical setting. Many experimental strategies have been used to discover new biological and molecular targets in the mouse, with the hopes of translating these discoveries into novel drugs to treat prostate cancer in humans. Modeling prostate cancer in the mouse, however, has been challenging, and often drugs that work in mice have failed in human trials. The authors discuss the similarities and differences between mice and men; the types of mouse models that exist to model prostate cancer; practical questions one must ask when using a mouse as a model; and potential reasons that drugs do not often translate to humans. They also discuss the current value in using mouse models for drug discovery to treat prostate cancer and what needs are still unmet in the field. With proper planning and by following practical guidelines, the mouse is a powerful experimental tool. The field lacks genetically engineered metastatic models, and xenograft models do not allow for the study of the immune system during the metastatic process. There remain several important limitations to discovering and testing novel drugs in mice for eventual human use, but these can often be overcome. Overall, mouse modeling is an essential part of prostate cancer research and drug discovery. Emerging technologies and better and ever-increasing forms of communication are moving the field in a hopeful direction.
What Does Galileo's Discovery of Jupiter's Moons Tell Us About the Process of Scientific Discovery?
NASA Astrophysics Data System (ADS)
Lawson, Anton E.
In 1610, Galileo Galilei discovered Jupiter's moons with the aid of a new, more powerful telescope of his invention. Analysis of his report reveals that his discovery involved the use of at least three cycles of hypothetico-deductive reasoning. Galileo first used hypothetico-deductive reasoning to generate and reject a fixed-star hypothesis. He then generated and rejected an ad hoc astronomers-made-a-mistake hypothesis. Finally, he generated, tested, and accepted a moon hypothesis. Galileo's reasoning is modeled in terms of Piaget's equilibration theory, Grossberg's theory of neurological activity, a neural network model proposed by Levine & Prueitt, and another proposed by Kosslyn & Koenig. Given that hypothetico-deductive reasoning has played a role in other important scientific discoveries, the question is asked whether it plays a role in all important scientific discoveries. In other words, is hypothetico-deductive reasoning the essence of the scientific method? Possible alternative scientific methods, such as Baconian induction and combinatorial analysis, are explored and rejected as viable alternatives. Educational implications of this hypothetico-deductive view of science are discussed.
Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio
2010-01-01
In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search of new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases in order to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we will review the parallel evolution and complementarities of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories by using FBDD.
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2008-03-01
Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.
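A minimal illustration of the 2D similarity measures whose inductive bias the authors analyze is the Tanimoto coefficient over substructure keys. The "fingerprints" below are hand-made, hypothetical key sets, not real ECFP or Daylight fingerprints.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) coefficient over substructure-key sets."""
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# toy substructure keys for three hypothetical ligands
drug_a = {"benzene", "amide", "carboxyl"}
drug_b = {"benzene", "amide", "hydroxyl"}
drug_c = {"pyridine", "sulfonamide"}

print(round(tanimoto(drug_a, drug_b), 2))  # close analog: shares 2 of 4 keys
print(round(tanimoto(drug_a, drug_c), 2))  # disjoint scaffold: no shared keys
```

The 2D bias the paper identifies follows directly from such measures: analogs made by small substituent changes score high, while 3D-similar but topologically different molecules score near zero.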
Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS
NASA Astrophysics Data System (ADS)
Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.
2016-12-01
We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).
Barutta, Joaquin; Guex, Raphael; Ibáñez, Agustín
2010-06-01
From everyday cognition to scientific discovery, analogical processes play an important role, bringing connection, integration, and interrelation of information. Recently, a PFC model of analogy has been proposed to explain many cognitive processes and to integrate general functional properties of the PFC. We argue here that analogical processes do not suffice to explain the cognitive processes and functions of the PFC. Moreover, the model does not satisfactorily integrate the specific explanatory mechanisms required for the different processes involved. Its relevance would be improved if fewer cognitive phenomena were considered and more specific predictions and explanations about those processes were stated.
AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Yingssu; Stanford University, 333 Campus Drive, Mudd Building, Stanford, CA 94305-5080; McPhillips, Scott E.
New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually; it cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.
Knowledge Discovery and Data Mining in Iran's Climatic Researches
NASA Astrophysics Data System (ADS)
Karimi, Mostafa
2013-04-01
Advances in measurement technology and data collection have made databases ever larger, and large databases require powerful tools for data analysis. The iterative process of acquiring knowledge from processed information is carried out, in various forms, in all scientific fields. When data volumes grow large, however, traditional methods can no longer cope with many of the problems. In recent years the use of databases has expanded in many scientific fields, and atmospheric databases in climatology in particular have grown. In addition, the increasing amount of data generated by climate models poses a challenge for analyses aimed at extracting hidden patterns and knowledge. The approach taken to this problem in recent years is the process of knowledge discovery in databases (KDD) and data mining techniques, which draw on concepts from machine learning, artificial intelligence, and expert systems. Data mining is an analytical process for mining massive volumes of data; its ultimate goal is access to information and, ultimately, knowledge. Climatology is a science that works with varied and voluminous data, and the goal of climate data mining is to derive information from heterogeneous and massive atmospheric and non-atmospheric data. Knowledge discovery performs these activities in a logical, predetermined, and almost automatic process. The goal of this research is to survey the use of knowledge discovery and data mining techniques in Iranian climate research. To achieve this goal, a content (descriptive) analysis was carried out, classified by method and by issue. The results show that in Iranian climate research clustering methods, chiefly k-means and Ward's method, are most often applied, and that precipitation and atmospheric circulation patterns are the issues most often addressed.
Although several studies of geographic and climatic issues have used statistical techniques such as clustering and pattern extraction, given the distinct natures of statistics and data mining one cannot yet say that Iranian climate studies employ data mining and knowledge discovery techniques as such. It is nevertheless necessary to apply the KDD approach and data mining techniques in climatic studies, in particular for interpreting the results of climate modeling.
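The clustering methods the survey found most common (k-means, Ward's method) can be illustrated with a bare-bones 1-D k-means on invented annual precipitation values; real climate applications cluster multivariate station or gridded data.

```python
def kmeans_1d(values, k, iters=20):
    """Plain 1-D k-means; assumes k >= 2 and len(values) >= k."""
    svals = sorted(values)
    # spread initial centers across the sorted range
    centers = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # recompute each center as its cluster mean (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# hypothetical annual precipitation totals (mm): arid / semi-humid / humid sites
precip = [120, 135, 150, 480, 510, 900, 950]
centers, clusters = kmeans_1d(precip, k=3)
print(sorted(round(c) for c in centers))  # [135, 495, 925]
```

The three recovered centers correspond to the three precipitation regimes built into the toy data, which is the kind of regionalization these climate studies aim for.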
From laptop to benchtop to bedside: Structure-based Drug Design on Protein Targets
Chen, Lu; Morrow, John K.; Tran, Hoang T.; Phatak, Sharangdhar S.; Du-Cuny, Lei; Zhang, Shuxing
2013-01-01
As an important aspect of computer-aided drug design, structure-based drug design brought a new horizon to pharmaceutical development. This in silico method permeates all aspects of drug discovery today, including lead identification, lead optimization, ADMET prediction and drug repurposing. Structure-based drug design has resulted in fruitful successes in drug discovery targeting protein-ligand and protein-protein interactions. Meanwhile, challenges, notably low accuracy and combinatoric issues, may also cause failures. In this review, state-of-the-art techniques for protein modeling (e.g. structure prediction, modeling protein flexibility, etc.), hit identification/optimization (e.g. molecular docking, focused library design, fragment-based design, molecular dynamics, etc.), and polypharmacology design will be discussed. We will explore how structure-based techniques can facilitate the drug discovery process and interplay with other experimental approaches. PMID:22316152
MOPED enables discoveries through consistently processed proteomics data
Higdon, Roger; Stewart, Elizabeth; Stanberry, Larissa; Haynes, Winston; Choiniere, John; Montague, Elizabeth; Anderson, Nathaniel; Yandl, Gregory; Janko, Imre; Broomall, William; Fishilevich, Simon; Lancet, Doron; Kolker, Natali; Kolker, Eugene
2014-01-01
The Model Organism Protein Expression Database (MOPED, http://moped.proteinspire.org) is an expanding proteomics resource to enable biological and biomedical discoveries. MOPED aggregates simple, standardized and consistently processed summaries of protein expression and metadata from proteomics (mass spectrometry) experiments from human and model organisms (mouse, worm and yeast). The latest version of MOPED adds new estimates of protein abundance and concentration, as well as relative (differential) expression data. MOPED provides a new updated query interface that allows users to explore information by organism, tissue, localization, condition, experiment, or keyword. MOPED supports the Human Proteome Project's efforts to generate chromosome- and disease-specific proteomes by providing links from proteins to chromosome and disease information, as well as many complementary resources. MOPED supports a new omics metadata checklist in order to harmonize data integration, analysis and use. MOPED's development is driven by the user community, which spans 90 countries and guides future development that will transform MOPED into a multi-omics resource. MOPED encourages users to submit data in a simple format; they can use the metadata checklist to generate a data publication for the submission. As a result, MOPED will provide even greater insights into complex biological processes and systems and enable deeper and more comprehensive biological and biomedical discoveries. PMID:24350770
Wolverton, Christopher; Hattrick-Simpers, Jason; Mehta, Apurva
2018-01-01
With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict. PMID:29662953
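The iterate-and-retrain loop described above can be sketched schematically: train a surrogate on known (composition, glass-forming) observations, let it nominate candidates, "run" experiments, and fold the results back into the training set. The data and the 1-nearest-neighbor surrogate below are toy stand-ins for the paper's ML model and high-throughput sputtering experiments.

```python
def predict(known, x):
    """1-nearest-neighbour surrogate over (Co, V, Zr) fractions."""
    nearest = min(known, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

# (composition, forms_glass) observations; all values hypothetical
known = [((0.6, 0.2, 0.2), True), ((0.2, 0.2, 0.6), False)]
pool = [(0.55, 0.25, 0.2), (0.25, 0.15, 0.6), (0.5, 0.3, 0.2)]

def run_experiment(x):
    # stand-in for a HiTp measurement: pretend Co-rich compositions vitrify
    return x[0] > 0.4

for _round in range(2):  # two discovery/retraining rounds
    candidates = [x for x in pool if predict(known, x)]
    for x in candidates:
        known.append((x, run_experiment(x)))  # retrain = grow the training set
        pool.remove(x)

print(len(known), len(pool))
```

After two rounds the surrogate has absorbed both nominated compositions and correctly declines to nominate the remaining Zr-rich point, mirroring (at miniature scale) how each experimental batch sharpens the model.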
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Fang; Ward, Logan; Williams, Travis
Ren, Fang; Ward, Logan; Williams, Travis; ...
2018-04-01
Simulating the drug discovery pipeline: a Monte Carlo approach
2012-01-01
Background The early drug discovery phase in pharmaceutical research and development marks the beginning of a long, complex and costly process of bringing a new molecular entity to market. As such, it plays a critical role in helping to maintain a robust downstream clinical development pipeline. Despite its importance, however, to our knowledge there are no published in silico models to simulate the progression of discrete virtual projects through a discovery milestone system. Results Multiple variables were tested and their impact on productivity metrics examined. Simulations predict that there is an optimum number of scientists for a given drug discovery portfolio, beyond which output in the form of preclinical candidates per year will remain flat. The model further predicts that the frequency of compounds to successfully pass the candidate selection milestone as a function of time will be irregular, with projects entering preclinical development in clusters marked by periods of low apparent productivity. Conclusions The model may be useful as a tool to facilitate analysis of historical growth and achievement over time, help gauge current working group progress against future performance expectations, and provide the basis for dialogue regarding working group best practices and resource deployment strategies. PMID:23186040
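A toy Monte Carlo of projects moving through discovery milestones, in the spirit of the simulation described above. The stage names, durations, and success probabilities below are invented for illustration, not taken from the paper.

```python
import random

STAGES = [  # (milestone, months to complete, probability of passing)
    ("hit identification", 12, 0.6),
    ("lead optimization", 18, 0.4),
    ("candidate selection", 6, 0.7),
]

def simulate_project(rng):
    """Return month of preclinical entry, or None if the project dies."""
    month = 0
    for _name, duration, p_success in STAGES:
        month += duration
        if rng.random() > p_success:
            return None
    return month

rng = random.Random(42)  # fixed seed so the run is reproducible
entries = [m for m in (simulate_project(rng) for _ in range(1000)) if m is not None]
rate = len(entries) / 1000
print(round(rate, 3))  # long-run pass rate is 0.6 * 0.4 * 0.7 = 0.168
```

Extending this with a finite pool of scientists who must staff each active project reproduces the paper's qualitative finding: output saturates once staffing, not project supply, becomes the bottleneck.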
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric is designed from role cohesion and coupling and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process than related approaches.
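A hedged sketch of a role-complexity fitness term built from cohesion and coupling, in the spirit (not the letter) of the metric proposed above. Roles, activities, and handovers are all hypothetical; each role is a set of activities and handovers between activities are directed edges.

```python
def cohesion(role, transfers):
    """Fraction of a role's activity pairs linked by direct handovers."""
    acts = sorted(role)
    pairs = [(a, b) for i, a in enumerate(acts) for b in acts[i + 1:]]
    if not pairs:
        return 1.0  # a single-activity role is trivially cohesive
    linked = sum((a, b) in transfers or (b, a) in transfers for a, b in pairs)
    return linked / len(pairs)

def coupling(role_a, role_b, transfers):
    """Fraction of cross-role activity pairs with handovers between them."""
    cross = [(a, b) for a in role_a for b in role_b]
    hits = sum((a, b) in transfers or (b, a) in transfers for a, b in cross)
    return hits / len(cross)

# toy invoice process: handovers between activities
transfers = {("register", "check"), ("check", "approve"), ("approve", "pay")}
clerk = {"register", "check"}
manager = {"approve", "pay"}

# higher fitness = cohesive roles with few cross-role handovers
fitness = ((cohesion(clerk, transfers) + cohesion(manager, transfers)) / 2
           - coupling(clerk, manager, transfers))
print(round(fitness, 2))  # 0.75
```

A genetic miner would evaluate candidate role assignments with a function of this shape, preferring partitions that keep tightly connected activities inside one role.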
Air Pollution Data for Model Evaluation and Application
One objective of designing an air pollution monitoring network is to obtain data for evaluating air quality models that are used in the air quality management process and scientific discovery. A common use is to relate emissions to air quality, including assessing ...
Keenan, Martine; Alexander, Paul W; Chaplin, Jason H; Abbott, Michael J; Diao, Hugo; Wang, Zhisen; Best, Wayne M; Perez, Catherine J; Cornwall, Scott M J; Keatley, Sarah K; Thompson, R C Andrew; Charman, Susan A; White, Karen L; Ryan, Eileen; Chen, Gong; Ioset, Jean-Robert; von Geldern, Thomas W; Chatelain, Eric
2013-10-01
Inhibitors of Trypanosoma cruzi with novel mechanisms of action are urgently required to diversify the current clinical and preclinical pipelines. Increasing the number and diversity of hits available for assessment at the beginning of the discovery process will help to achieve this aim. We report the evaluation of multiple hits generated from a high-throughput screen to identify inhibitors of T. cruzi and from these studies the discovery of two novel series currently in lead optimization. Lead compounds from these series potently and selectively inhibit growth of T. cruzi in vitro and the most advanced compound is orally active in a subchronic mouse model of T. cruzi infection. High-throughput screening of novel compound collections has an important role to play in diversifying the trypanosomatid drug discovery portfolio. A new T. cruzi inhibitor series with good drug-like properties and promising in vivo efficacy has been identified through this process.
CREB and the discovery of cognitive enhancers.
Scott, Roderick; Bourtchuladze, Rusiko; Gossweiler, Scott; Dubnau, Josh; Tully, Tim
2002-01-01
In the past few years, a series of molecular-genetic, biochemical, cellular and behavioral studies in fruit flies, sea slugs and mice have confirmed a long-standing notion that long-term memory formation depends on the synthesis of new proteins. Experiments focused on the cAMP-responsive transcription factor, CREB, have established that neural activity-induced regulation of gene transcription promotes a synaptic growth process that strengthens the connections among active neurons. This process constitutes a physical basis for the engram, and CREB is a "molecular switch" to produce the engram. Helicon Therapeutics has been formed to identify drug compounds that enhance memory formation via augmentation of CREB biochemistry. Candidate compounds have been identified from a high-throughput cell-based screen and are being evaluated in animal models of memory formation. A gene discovery program also seeks to identify new genes, which function downstream of CREB during memory formation, as a source for new drug discoveries in the future. Together, these drug and gene discovery efforts promise a new class of pharmaceutical therapies for the treatment of various forms of cognitive dysfunction.
Biomarker discovery and development in pediatric critical care medicine
Kaplan, Jennifer M.; Wong, Hector R.
2010-01-01
Objective To frame the general process of biomarker discovery and development, and to describe a proposal for the development of a multi-biomarker based risk model for pediatric septic shock. Data Source Narrative literature review and author generated data. Main Results Biomarkers can be grouped into four broad classes, based on the intended function: diagnostic, monitoring, surrogate, and stratification. Biomarker discovery and development requires a rigorous process, which is frequently not well followed in the critical care medicine literature. Very few biomarkers have successfully transitioned from the candidate stage to the true biomarker stage. There is great interest in developing diagnostic and stratification biomarkers for sepsis. Procalcitonin is currently the most promising diagnostic biomarker for sepsis. Recent evidence suggests that interleukin-8 can be used to stratify children with septic shock having a high likelihood of survival with standard care. Currently, there is a multi-institutional effort to develop a multi-biomarker based sepsis risk model intended to predict outcome and illness severity for individual children with septic shock. Conclusions Biomarker discovery and development is an important portion of the pediatric critical care medicine translational research agenda. This effort will require collaboration across multiple institutions and investigators. Rigorous conduct of biomarker-focused research holds the promise of transforming our ability to care for individual patients and our ability to conduct clinical trials in a more effective manner. PMID:20473243
Pharmacokinetic properties and in silico ADME modeling in drug discovery.
Honório, Kathia M; Moda, Tiago L; Andricopulo, Adriano D
2013-03-01
The discovery and development of a new drug are time-consuming, difficult and expensive. This complex process has evolved from classical methods into an integration of modern technologies and innovative strategies addressed to the design of new chemical entities to treat a variety of diseases. The development of new drug candidates is often limited by initial compounds lacking reasonable chemical and biological properties for further lead optimization. Huge libraries of compounds are frequently selected for biological screening using a variety of techniques and standard models to assess potency, affinity and selectivity. In this context, it is very important to study the pharmacokinetic profile of the compounds under investigation. Recent advances have been made in the collection of data and the development of models to assess and predict pharmacokinetic properties (ADME--absorption, distribution, metabolism and excretion) of bioactive compounds in the early stages of drug discovery projects. This paper provides a brief perspective on the evolution of in silico ADME tools, addressing challenges, limitations, and opportunities in medicinal chemistry.
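One of the simplest in silico ADME-style tools mentioned in this literature is a property filter in the spirit of Lipinski's rule of five. The compound records and their property values below are invented for illustration; in practice such properties are computed from structures by cheminformatics software.

```python
RULES = {  # property -> maximum allowed under the rule of five
    "mol_weight": 500.0,   # molecular weight, Daltons
    "logp": 5.0,           # octanol-water partition coefficient
    "h_donors": 5,         # hydrogen-bond donors
    "h_acceptors": 10,     # hydrogen-bond acceptors
}

def violations(compound):
    """Return the names of the rule-of-five properties a compound exceeds."""
    return [k for k, limit in RULES.items() if compound[k] > limit]

library = [  # hypothetical compound records
    {"name": "cmpd-1", "mol_weight": 320.4, "logp": 2.1, "h_donors": 2, "h_acceptors": 4},
    {"name": "cmpd-2", "mol_weight": 610.7, "logp": 5.8, "h_donors": 3, "h_acceptors": 9},
]

passing = [c["name"] for c in library if not violations(c)]
print(passing)  # cmpd-2 is filtered out on weight and logP
```

Filters like this are applied early, exactly as the abstract notes, so that lead optimization starts from compounds with reasonable drug-like properties rather than discovering ADME liabilities late.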
Early Probe and Drug Discovery in Academia: A Minireview.
Roy, Anuradha
2018-02-09
Drug discovery encompasses processes ranging from target selection and validation to the selection of a development candidate. While comprehensive drug discovery work flows are implemented predominantly in the big pharma domain, early discovery in academia focuses on identifying probe molecules that can serve as tools to study targets or pathways. Despite differences in the ultimate goals of the private and academic sectors, the same basic principles define the best practices in early discovery research. A successful early discovery program is built on strong target definition and validation using a diverse set of biochemical and cell-based assays with functional relevance to the biological system being studied. The chemicals identified as hits undergo extensive scaffold optimization and are characterized for their target specificity and off-target effects in vitro and in animal models. While the active compounds from screening campaigns pass through highly stringent chemical and Absorption, Distribution, Metabolism, and Excretion (ADME) filters for lead identification, probe discovery involves limited medicinal chemistry optimization. The goal of probe discovery is identification of a compound with sub-µM activity and reasonable selectivity in the context of the target being studied. The compounds identified from probe discovery can also serve as starting scaffolds for lead optimization studies.
Information analytics for healthcare service discovery.
Sun, Lily; Yamin, Mohammad; Mushi, Cleopa; Liu, Kecheng; Alsaigh, Mohammed; Chen, Fabian
2014-01-01
The concept of being 'patient-centric' is a challenge to many existing healthcare service provision practices. This paper focuses on the issue of referrals, where multiple stakeholders, such as General Practitioners (GPs) and patients, are encouraged to make a consensual decision based on patients' needs. In this paper, we present an ontology-enabled healthcare service provision, which facilitates both patients and GPs in jointly deciding upon the referral decision. In the healthcare service provision model, we define three types of profiles which represent different stakeholders' requirements. This model also comprises a set of healthcare service discovery processes: articulating a service need, matching the need with the healthcare service offerings, and deciding on a best-fit service for acceptance. As a result, the healthcare service provision can carry out coherent analysis using personalised information and iterative processes that deal with requirements which change over time.
Two stochastic models useful in petroleum exploration
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1972-01-01
A model of the petroleum exploration process that empirically tests the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described; it is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of the areal extent, the geographic location, and the shape of oil deposits.
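The "sampling without replacement with probabilistic proportionality" discovery model can be sketched directly: fields are drawn one at a time with probability proportional to a power of their size, so larger fields tend to be found earlier. The field sizes and the exponent below are invented for illustration.

```python
import random

def discovery_sequence(sizes, beta, rng):
    """Draw fields without replacement, weighting each by size**beta."""
    remaining = list(sizes)
    order = []
    while remaining:
        weights = [s ** beta for s in remaining]
        pick = rng.choices(range(len(remaining)), weights=weights)[0]
        order.append(remaining.pop(pick))
    return order

rng = random.Random(7)
sizes = [1000, 500, 100, 50, 10]  # hypothetical field sizes (MMbbl)
runs = [discovery_sequence(sizes, beta=1.0, rng=rng) for _ in range(2000)]
first_is_largest = sum(r[0] == 1000 for r in runs) / len(runs)
print(round(first_is_largest, 2))  # expected near 1000/1660, about 0.60
```

With beta = 0 the model reduces to uniform sampling without replacement; raising beta strengthens the size bias, which is the kind of parameter the likelihood machinery in the paper is built to estimate.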
A Mars Exploration Discovery Program
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Paige, D. A.
2000-07-01
The Mars Exploration Program should consider following the Discovery Program model. In the Discovery Program a team of scientists led by a PI develop the science goals of their mission, decide what payload achieves the necessary measurements most effectively, and then choose a spacecraft with the capabilities needed to carry the payload to the desired target body. The primary constraints associated with the Discovery missions are time and money. The proposer must convince reviewers that their mission has scientific merit and is feasible. Every Announcement of Opportunity has resulted in a collection of creative ideas that fit within advertised constraints. Following this model, a "Mars Discovery Program" would issue an Announcement of Opportunity for each launch opportunity with schedule constraints dictated by the launch window and fiscal constraints in accord with the program budget. All else would be left to the proposer to choose, based on the science the team wants to accomplish, consistent with the program theme of "Life, Climate and Resources". A proposer could propose a lander, an orbiter, a fleet of SCOUT vehicles or penetrators, an airplane, a balloon mission, a large rover, a small rover, etc. depending on what made the most sense for the science investigation and payload. As in the Discovery program, overall feasibility relative to cost, schedule and technology readiness would be evaluated and be part of the selection process.
Beyond Love and Battle: Practicing Feminist Pedagogy.
ERIC Educational Resources Information Center
Wallace, Miriam L.
1999-01-01
Examines the ways in which authority and power operate in the classroom. Uses two metaphors to describe the poles of classroom dynamics (the love-relationship and the battlefield model). Suggests that the processes of writing and reading as interpretation and discovery can act as a more suggestive instructional model. (CMK)
Three-Dimensional in Vitro Cell Culture Models in Drug Discovery and Drug Repositioning
Langhans, Sigrid A.
2018-01-01
Drug development is a lengthy and costly process that proceeds through several stages from target identification to lead discovery and optimization, preclinical validation and clinical trials culminating in approval for clinical use. An important step in this process is high-throughput screening (HTS) of small compound libraries for lead identification. Currently, the majority of cell-based HTS is being carried out on cultured cells propagated in two-dimensions (2D) on plastic surfaces optimized for tissue culture. At the same time, compelling evidence suggests that cells cultured in these non-physiological conditions are not representative of cells residing in the complex microenvironment of a tissue. This discrepancy is thought to be a significant contributor to the high failure rate in drug discovery, where only a low percentage of drugs investigated ever make it through the gamut of testing and approval to the market. Thus, three-dimensional (3D) cell culture technologies that more closely resemble in vivo cell environments are now being pursued with intensity as they are expected to accommodate better precision in drug discovery. Here we will review common approaches to 3D culture, discuss the significance of 3D cultures in drug resistance and drug repositioning and address some of the challenges of applying 3D cell cultures to high-throughput drug discovery. PMID:29410625
GeoGebra Assist Discovery Learning Model for Problem Solving Ability and Attitude toward Mathematics
NASA Astrophysics Data System (ADS)
Murni, V.; Sariyasa, S.; Ardana, I. M.
2017-09-01
This study aims to describe the effect of GeoGebra utilization in the discovery learning model on mathematical problem solving ability and students' attitude toward mathematics. This research was quasi-experimental, and a post-test-only control group design was used. The population in this study was 181 students. The sampling technique used was cluster random sampling, so the sample was 120 students divided into 4 classes: 2 experimental classes and 2 control classes. Data were analyzed by using one-way MANOVA. The results of the data analysis showed that the utilization of GeoGebra in discovery learning leads to better problem solving and better attitudes toward mathematics. This is because presenting problems with GeoGebra can assist students in identifying and solving problems and attracts students' interest, since GeoGebra gives students an immediate response. The utilization of GeoGebra in discovery learning can be applied in teaching and learning a wider range of subject matter beyond that of this study.
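For readers unfamiliar with the test statistic behind a one-way MANOVA, here is a minimal sketch of Wilks' lambda for two response variables (e.g. a problem-solving score and an attitude score). This is illustrative only, not the authors' analysis code:

```python
def wilks_lambda(groups):
    """Wilks' lambda for a one-way MANOVA with two response variables.
    `groups` is a list of groups, each a list of (y1, y2) observations.
    Lambda = det(W) / det(W + B), where W and B are the within- and
    between-group SSCP matrices; values near 0 indicate separation."""
    all_obs = [obs for g in groups for obs in g]
    n = len(all_obs)
    gm = [sum(o[j] for o in all_obs) / n for j in range(2)]  # grand mean
    W = [[0.0, 0.0], [0.0, 0.0]]
    B = [[0.0, 0.0], [0.0, 0.0]]
    for g in groups:
        m = [sum(o[j] for o in g) / len(g) for j in range(2)]  # group mean
        for o in g:
            d = [o[0] - m[0], o[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    W[i][j] += d[i] * d[j]
        dB = [m[0] - gm[0], m[1] - gm[1]]
        for i in range(2):
            for j in range(2):
                B[i][j] += len(g) * dB[i] * dB[j]
    det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
    T = [[W[i][j] + B[i][j] for j in range(2)] for i in range(2)]
    return det(W) / det(T)
```

Identical groups give lambda = 1 (no between-group variation); well-separated groups drive lambda toward 0.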
Leonardi, Giorgio; Striani, Manuel; Quaglini, Silvana; Cavallini, Anna; Montani, Stefania
2018-05-21
Many medical information systems record data about the executed process instances in the form of an event log. In this paper, we present a framework able to convert actions in the event log into higher-level concepts, at different levels of abstraction, on the basis of domain knowledge. Abstracted traces are then provided as input to trace comparison and semantic process discovery. Our abstraction mechanism is able to manage nontrivial situations, such as interleaved actions or delays between two actions that abstract to the same concept. Trace comparison resorts to a similarity metric able to take into account abstraction-phase penalties, and to deal with quantitative and qualitative temporal constraints in abstracted traces. As for process discovery, we rely on classical algorithms embedded in the framework ProM, made semantic by the capability of abstracting the actions on the basis of their conceptual meaning. The approach has been tested in stroke care, where we adopted abstraction and trace comparison to cluster event logs of different stroke units and to highlight (in)correct behavior, abstracting from details. We also provide process discovery results, showing how the abstraction mechanism yields stroke process models that are more easily interpretable by neurologists. Copyright © 2018. Published by Elsevier Inc.
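The abstraction step described here, mapping low-level log actions to domain concepts and merging adjacent events that abstract to the same concept, can be illustrated with a toy sketch. The action and concept names are invented, and real interleaving handling is far more involved:

```python
def abstract_trace(trace, ontology):
    """Map each low-level action in a trace to its higher-level concept
    (via `ontology`) and merge consecutive events that abstract to the
    same concept, even when a short delay separates them.
    `trace` is a list of (action, start, end) tuples sorted by start."""
    abstracted = []
    for action, start, end in trace:
        concept = ontology.get(action, action)
        if abstracted and abstracted[-1][0] == concept:
            prev_concept, prev_start, _ = abstracted[-1]
            abstracted[-1] = (concept, prev_start, end)  # extend interval
        else:
            abstracted.append((concept, start, end))
    return abstracted
```

Two imaging actions separated by a delay collapse into one "imaging" interval, which is the kind of higher-level trace fed to comparison and discovery.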
NASA Astrophysics Data System (ADS)
Storms, Edmund
2010-10-01
The phenomenon called cold fusion has been studied for the last 21 years since its discovery by Profs. Fleischmann and Pons in 1989. The discovery was met with considerable skepticism, but supporting evidence has accumulated, plausible theories have been suggested, and research is continuing in at least eight countries. This paper provides a brief overview of the major discoveries and some of the attempts at an explanation. The evidence supports the claim that a nuclear reaction between deuterons to produce helium can occur in special materials without application of high energy. This reaction is found to produce clean energy at potentially useful levels without the harmful byproducts normally associated with a nuclear process. Various requirements of a model are examined.
The discovery reach of CP violation in neutrino oscillation with non-standard interaction effects
NASA Astrophysics Data System (ADS)
Rahman, Zini; Dasgupta, Arnab; Adhikari, Rathin
2015-06-01
We have studied the CP violation discovery reach in a neutrino oscillation experiment with a superbeam, a neutrino factory, and a monoenergetic neutrino beam from the electron capture process. For NSI satisfying model-dependent bounds, for shorter baselines (like the CERN-Fréjus set-up) there is an insignificant effect of NSI on the discovery reach of CP violation due to δ. For the superbeam and neutrino factory we have also considered relatively longer baselines, for which there could be significant NSI effects on the CP violation discovery reach for higher allowed values of NSI. For the monoenergetic beam, only shorter baselines are considered to study CP violation with different nuclei as neutrino sources. Interestingly, for non-standard interactions ε_eμ and ε_eτ of neutrinos with matter during propagation over longer baselines in the superbeam, there is the possibility of a better discovery reach of CP violation than with only Standard Model interactions of neutrinos with matter. For complex NSI we have shown the CP violation discovery reach in the plane of the Dirac phase δ and the NSI phase φ_ij. The CP violation due to some values of δ remains unobservable with present and near-future experimental facilities in the superbeam and neutrino factory. However, in the presence of some ranges of off-diagonal NSI phase values there are some possibilities of discovering total CP violation for any δ_CP value, even at the 5σ confidence level for a neutrino factory. Our analysis indicates that for some values of NSI phases total CP violation may not be observable at all for any value of δ. A combination of shorter and longer baselines could indicate in some cases the presence of NSI. However, in general for NSIs ≲ 1 the CP violation discovery reach is better in neutrino factory set-ups.
Using a neutrino beam from the electron capture process for the nuclei ¹¹⁰Sn and ¹⁵²Yb, we have shown the discovery reach of CP violation in a neutrino oscillation experiment. In particular, for ¹¹⁰Sn nuclei CP violation could be found for about 51% of the possible δ values for a baseline of 130 km with boost factor γ = 500. Although the nucleus ¹⁵²Yb is technically more feasible for the production of a mono-energetic beam, it is found to be unsuitable for obtaining a good discovery reach of CP violation.
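As background for the oscillation experiments discussed above, the standard two-flavor vacuum appearance probability is easy to compute directly. This toy formula omits the matter and NSI terms (ε_eμ, ε_eτ) that the paper actually analyzes:

```python
import math

def p_appearance(theta, dm2_ev2, L_km, E_gev):
    """Two-flavor vacuum oscillation probability
    P = sin^2(2θ) · sin^2(1.27 Δm² L / E), with Δm² in eV²,
    L in km, and E in GeV. A baseline for the standard-interaction
    case only; matter and NSI effects are not included."""
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return math.sin(2 * theta) ** 2 * math.sin(phase) ** 2
```

At the first oscillation maximum (phase = π/2) the probability reaches sin²(2θ), which is why the choice of baseline and energy drives the discovery reach.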
Systematic identification of latent disease-gene associations from PubMed articles.
Zhang, Yuji; Shen, Feichen; Mojarad, Majid Rastegar; Li, Dingcheng; Liu, Sijia; Tao, Cui; Yu, Yue; Liu, Hongfang
2018-01-01
Recent scientific advances have accumulated a tremendous amount of biomedical knowledge providing novel insights into the relationship between molecular and cellular processes and diseases. Literature mining is one of the commonly used methods to retrieve and extract information from scientific publications for understanding these associations. However, due to the large data volume and complicated, noisy associations, the interpretability of such association data for semantic knowledge discovery is challenging. In this study, we describe an integrative computational framework aiming to expedite the discovery of latent disease mechanisms by dissecting 146,245 disease-gene associations from over 25 million PubMed-indexed articles. We take advantage of both Latent Dirichlet Allocation (LDA) modeling and network-based analysis for their respective capabilities of detecting latent associations and reducing noise in large-volume data. Our results demonstrate that (1) the LDA-based modeling is able to group similar diseases into disease topics; (2) the disease-specific association networks follow the scale-free network property; (3) certain subnetwork patterns were enriched in the disease-specific association networks; and (4) genes were enriched in topic-specific biological processes. Our approach offers promising opportunities for latent disease-gene knowledge discovery in biomedical research.
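A minimal sketch of the literature-mining side, counting disease-gene co-mentions across article records and deriving gene degrees in the resulting network, is shown below. It is a toy stand-in for the paper's framework, with invented record fields:

```python
from collections import Counter
from itertools import product

def build_association_network(articles):
    """Count disease-gene co-mentions across article records.
    Each article is a dict with 'diseases' and 'genes' lists (a crude
    proxy for entities extracted from PubMed abstracts); each
    co-mention adds one unit of edge weight."""
    edges = Counter()
    for art in articles:
        for d, g in product(set(art["diseases"]), set(art["genes"])):
            edges[(d, g)] += 1
    return edges

def gene_degrees(edges, min_weight=1):
    """Degree of each gene in the disease-gene network, keeping only
    edges at or above `min_weight` to reduce noise."""
    deg = Counter()
    for (d, g), w in edges.items():
        if w >= min_weight:
            deg[g] += 1
    return deg
```

Plotting the resulting degree distribution on log-log axes is the usual first check for the scale-free property the authors report.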
CMS Physics Technical Design Report, Volume II: Physics Performance
NASA Astrophysics Data System (ADS)
CMS Collaboration
2007-06-01
CMS is a general purpose experiment, designed to study the physics of pp collisions at 14 TeV at the Large Hadron Collider (LHC). It currently involves more than 2000 physicists from more than 150 institutes and 37 countries. The LHC will provide extraordinary opportunities for particle physics based on its unprecedented collision energy and luminosity when it begins operation in 2007. The principal aim of this report is to present the strategy of CMS to explore the rich physics programme offered by the LHC. This volume demonstrates the physics capability of the CMS experiment. The prime goals of CMS are to explore physics at the TeV scale and to study the mechanism of electroweak symmetry breaking, through the discovery of the Higgs particle or otherwise. To carry out this task, CMS must be prepared to search for new particles, such as the Higgs boson or supersymmetric partners of the Standard Model particles, from the start-up of the LHC, since new physics at the TeV scale may manifest itself with modest data samples of the order of a few fb⁻¹ or less. The analysis tools that have been developed are applied to specific benchmark processes, studied in great detail and with the full methodology of an analysis on CMS data, upon which to gauge the performance of CMS. These processes cover several Higgs boson decay channels, the production and decay of new particles such as Z' and supersymmetric particles, B_s production, and processes in heavy ion collisions. The simulation of these benchmark processes includes subtle effects such as possible detector miscalibration and misalignment. Besides these benchmark processes, the physics reach of CMS is studied for a large number of signatures arising in the Standard Model and also in theories beyond the Standard Model, for integrated luminosities ranging from 1 fb⁻¹ to 30 fb⁻¹.
The Standard Model processes include QCD, B-physics, diffraction, detailed studies of the top quark properties, and electroweak physics topics such as the W and Z⁰ boson properties. The production and decay of the Higgs particle is studied for many observable decays, and the precision with which the Higgs boson properties can be derived is determined. About ten different supersymmetry benchmark points are analysed using full simulation. The CMS discovery reach is evaluated in the SUSY parameter space covering a large variety of decay signatures. Furthermore, the discovery reach for a plethora of alternative models for new physics is explored, notably extra dimensions, new vector boson high mass states, little Higgs models, technicolour and others. Methods to discriminate between models have been investigated. This report is organized as follows. Chapter 1, the Introduction, describes the context of this document. Chapters 2–6 describe examples of full analyses, with photons, electrons, muons, jets, missing E_T, B-mesons and τ's, and for quarkonia in heavy ion collisions. Chapters 7–15 describe the physics reach for Standard Model processes, Higgs discovery and searches for new physics beyond the Standard Model.
Modern drug discovery technologies: opportunities and challenges in lead discovery.
Guido, Rafael V C; Oliva, Glaucius; Andricopulo, Adriano D
2011-12-01
The identification of promising hits and the generation of high quality leads are crucial steps in the early stages of drug discovery projects. The definition and assessment of both chemical and biological space have revitalized the screening process model and emphasized the importance of exploring the intrinsic complementary nature of classical and modern methods in drug research. In this context, the widespread use of combinatorial chemistry and sophisticated screening methods for the discovery of lead compounds has created a large demand for small organic molecules that act on specific drug targets. Modern drug discovery involves the employment of a wide variety of technologies and expertise in multidisciplinary research teams. The synergistic effects between experimental and computational approaches on the selection and optimization of bioactive compounds emphasize the importance of the integration of advanced technologies in drug discovery programs. These technologies (VS, HTS, SBDD, LBDD, QSAR, and so on) are complementary in the sense that they share mutual goals, so the combination of empirical and in silico efforts is feasible at many different levels of lead optimization and new chemical entity (NCE) discovery. This paper provides a brief perspective on the evolution and use of key drug design technologies, highlighting opportunities and challenges.
ISO 19115 Experiences in NASA's Earth Observing System (EOS) ClearingHOuse (ECHO)
NASA Astrophysics Data System (ADS)
Cechini, M. F.; Mitchell, A.
2011-12-01
Metadata is an important entity in the process of cataloging, discovering, and describing earth science data. As science research and the gathered data increase in complexity, so do the complexity and importance of descriptive metadata. To meet these growing needs, the metadata models required utilize richer and more mature metadata attributes. Categorizing, standardizing, and promulgating these metadata models to a politically, geographically, and scientifically diverse community is a difficult process. An integral component of metadata management within NASA's Earth Observing System Data and Information System (EOSDIS) is the Earth Observing System (EOS) ClearingHOuse (ECHO). ECHO is the core metadata repository for the EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. ECHO has undertaken an internal restructuring to meet the changing needs of scientists, the consistent advancement in technology, and the advent of new standards such as ISO 19115. These improvements were based on the following tenets for data discovery and retrieval: + There exists a set of 'core' metadata fields recommended for data discovery. + There exists a set of users who will require the entire metadata record for advanced analysis. + There exists a set of users who will require a 'core' set of metadata fields for discovery only. + There will never be a cessation of new formats or a total retirement of all old formats. + Users should be presented metadata in a consistent format of their choosing. In order to address the previously listed items, ECHO's new metadata processing paradigm utilizes the following approach: + Identify a cross-format set of 'core' metadata fields necessary for discovery. + Implement format-specific indexers to extract the 'core' metadata fields into an optimized query capability. + Archive the original metadata in its entirety for presentation to users requiring the full record.
+ Provide on-demand translation of 'core' metadata to any supported result format. Lessons learned by the ECHO team while implementing its new metadata approach to support usage of the ISO 19115 standard will be presented. These lessons learned highlight some discovered strengths and weaknesses in the ISO 19115 standard as it is introduced to an existing metadata processing system.
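The indexing paradigm listed above, format-specific indexers extracting a shared 'core' field set while the native record is archived whole, might be sketched as follows. The field names here are illustrative placeholders, not the actual ECHO or ISO 19115 element names:

```python
CORE_FIELDS = ("title", "start_time", "end_time")

# Format-specific indexers map native keys onto the shared core schema.
# The key strings below are invented for illustration.
INDEXERS = {
    "echo10": lambda r: {"title": r.get("GranuleUR"),
                         "start_time": r.get("BeginningDateTime"),
                         "end_time": r.get("EndingDateTime")},
    "iso19115": lambda r: {"title": r.get("citation.title"),
                           "start_time": r.get("temporalExtent.begin"),
                           "end_time": r.get("temporalExtent.end")},
}

def index_record(fmt, record):
    """Extract the cross-format core discovery fields for querying,
    while archiving the native record untouched for users who need
    the full metadata."""
    core = INDEXERS[fmt](record)
    missing = [f for f in CORE_FIELDS if core.get(f) is None]
    return {"core": core, "native": record, "missing": missing}
```

New formats then only require a new entry in `INDEXERS`, matching the tenet that formats never stop appearing.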
Hop, Cornelis E C A; Cole, Mark J; Davidson, Ralph E; Duignan, David B; Federico, James; Janiszewski, John S; Jenkins, Kelly; Krueger, Suzanne; Lebowitz, Rebecca; Liston, Theodore E; Mitchell, Walter; Snyder, Mark; Steyn, Stefan J; Soglia, John R; Taylor, Christine; Troutman, Matt D; Umland, John; West, Michael; Whalen, Kevin M; Zelesky, Veronica; Zhao, Sabrina X
2008-11-01
Evaluation and optimization of drug metabolism and pharmacokinetic data plays an important role in drug discovery and development, and several reliable in vitro ADME models are available. Recently, higher-throughput in vitro ADME screening facilities have been established in order to be able to evaluate an appreciable fraction of synthesized compounds. The ADME screening process can be dissected into five distinct steps: (1) plate management of compounds in need of in vitro ADME data, (2) optimization of the MS/MS method for the compounds, (3) in vitro ADME experiments and sample clean-up, (4) collection and reduction of the raw LC-MS/MS data and (5) archival of the processed ADME data. All steps will be described in detail and the value of the data on drug discovery projects will be discussed as well. Finally, in vitro ADME screening can generate large quantities of data obtained under identical conditions to allow building of reliable in silico models.
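The five-step screening flow can be mirrored in a toy pipeline; every stage below is a placeholder stand-in (invented names, trivial data reduction), not the authors' system:

```python
def run_adme_screen(compounds, assay):
    """Sketch of the five-step ADME screening flow from the abstract.
    `assay` is a caller-supplied function returning raw readouts for a
    compound; the MS/MS 'method' here is a hypothetical placeholder."""
    results = {}
    for cid in compounds:                          # (1) plate management
        method = {"compound": cid,
                  "transition": f"{cid}-mrm"}      # (2) MS/MS method setup
        raw = assay(cid)                           # (3) in vitro experiment
        reduced = sum(raw) / len(raw)              # (4) data reduction
        results[cid] = {"method": method,
                        "mean_response": reduced}  # (5) archival
    return results
```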
Mammalian cell models to advance our understanding of wound healing: a review.
Vidmar, Jerneja; Chingwaru, Constance; Chingwaru, Walter
2017-04-01
Rapid and efficient healing of damaged tissue is critical for the restoration of tissue function and avoidance of tissue defects. Many in vitro cell models have been described for wound healing studies; however, the mechanisms that underlie the process, especially in chronic or complicated wounds, are not fully understood. The identification of cell culture systems that closely simulate the physiology of damaged tissue in vivo is necessary. We describe the cell culture models that have enhanced our understanding, this far, of the wound healing process or have been used in drug discovery. Cell cultures derived from the epithelium, including corneal, renal, intestinal (IEC-8 cells and IEC-6), skin epithelial cells (keratinocytes, fibroblasts, and multipotent mesenchymal stem cells), and the endothelium (human umbilical vein endothelial cells, primary mouse endothelial cells, endodermal stem cells, human mesenchymal stem cells, and corneal endothelial cells) have played a pivotal role toward our understanding of the mechanisms of wound healing. More studies are necessary to develop co-culture cell models which closely simulate the environment of a wound in vivo. Cell culture models are invaluable tools to promote our understanding of the mechanisms that regulate the wound healing process and provide a platform for drug discovery. Copyright © 2016 Elsevier Inc. All rights reserved.
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-12-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
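One of the domain-of-applicability estimates compared here, the Mahalanobis distance of a query compound's descriptors to the training set, is easy to sketch for two descriptors. This is a pure-Python illustration, not the authors' implementation:

```python
def mahalanobis_doa(train, x):
    """Mahalanobis distance of a 2-D query descriptor vector `x` to a
    training set of 2-D descriptor vectors. Large distances flag
    queries outside the model's domain of applicability, where
    predictions deserve wide error bars."""
    n = len(train)
    mean = [sum(t[j] for t in train) / n for j in range(2)]
    # sample covariance matrix
    c = [[0.0, 0.0], [0.0, 0.0]]
    for t in train:
        d = [t[0] - mean[0], t[1] - mean[1]]
        for i in range(2):
            for j in range(2):
                c[i][j] += d[i] * d[j] / (n - 1)
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det, c[0][0] / det]]
    d = [x[0] - mean[0], x[1] - mean[1]]
    m2 = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    return m2 ** 0.5
```

A query at the training mean scores 0; the Bayesian (GP) and ensemble alternatives studied in the paper produce per-prediction error bars instead of a single distance.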
Robustness of disaggregate oil and gas discovery forecasting models
Attanasi, E.D.; Schuenemeyer, J.H.
1989-01-01
The trend in forecasting oil and gas discoveries has been to develop and use models that allow forecasts of the size distribution of future discoveries. From such forecasts, exploration and development costs can more readily be computed. Two classes of these forecasting models are the Arps-Roberts type models and the 'creaming method' models. This paper examines the robustness of the forecasts made by these models when the historical data on which the models are based have been subject to economic upheavals or when historical discovery data are aggregated from areas having widely differing economic structures. Model performance is examined in the context of forecasting discoveries for offshore Texas State and Federal areas. The analysis shows how the model forecasts are limited by information contained in the historical discovery data. Because the Arps-Roberts type models require more regularity in discovery sequence than the creaming models, prior information had to be introduced into the Arps-Roberts models to accommodate the influence of economic changes. The creaming methods captured the overall decline in discovery size but did not easily allow introduction of exogenous information to compensate for incomplete historical data. Moreover, the predictive log normal distribution associated with the creaming model methods appears to understate the importance of the potential contribution of small fields. © 1989.
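The 'creaming' signature, larger discoveries tending to come earlier in the drilling sequence, is commonly summarized by regressing log discovery size on discovery order. A minimal least-squares sketch (illustrative, not the paper's models) is:

```python
import math

def fit_creaming_trend(sizes):
    """Regress log(size) on discovery order (1, 2, ...) by ordinary
    least squares. A negative slope is the creaming signature: the
    expected size of new discoveries declines as a play matures.
    Returns (slope, intercept) on the log scale."""
    n = len(sizes)
    xs = list(range(1, n + 1))
    ys = [math.log(s) for s in sizes]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, ybar - slope * xbar
```

Extrapolating the fitted log-normal decline is what lets creaming methods forecast the sizes of future discoveries, at the cost of understating small fields, as the paper notes.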
Solution NMR Spectroscopy in Target-Based Drug Discovery.
Li, Yan; Kang, Congbao
2017-08-23
Solution NMR spectroscopy is a powerful tool to study protein structures and dynamics under physiological conditions. This technique is particularly useful in target-based drug discovery projects as it provides protein-ligand binding information in solution. Accumulated studies have shown that NMR will play more and more important roles in multiple steps of the drug discovery process. In a fragment-based drug discovery process, ligand-observed and protein-observed NMR spectroscopy can be applied to screen fragments with low binding affinities. The screened fragments can be further optimized into drug-like molecules. In combination with other biophysical techniques, NMR will guide structure-based drug discovery. In this review, we describe the possible roles of NMR spectroscopy in drug discovery. We also illustrate the challenges encountered in the drug discovery process. We include several examples demonstrating the roles of NMR in target-based drug discoveries, such as hit identification, ranking ligand binding affinities, and mapping the ligand binding site. We also speculate on the possible roles of NMR in target engagement based on recent progress in in-cell NMR spectroscopy.
Stau coannihilation, compressed spectrum, and SUSY discovery potential at the LHC
NASA Astrophysics Data System (ADS)
Aboubrahim, Amin; Nath, Pran; Spisak, Andrew B.
2017-06-01
The lack of observation of supersymmetry thus far implies that the weak supersymmetry scale is larger than what was thought before the LHC era. This observation is strengthened by the Higgs boson mass measurement at ˜125 GeV , which within supersymmetric models implies a large loop correction and a weak supersymmetry scale lying in the several TeV region. In addition if neutralino is the dark matter, its relic density puts further constraints on models often requiring coannihilation to reduce the neutralino relic density to be consistent with experimental observation. The coannihilation in turn implies that the mass gap between the lightest supersymmetric particle and the next to lightest supersymmetric particle will be small, leading to softer final states and making the observation of supersymmetry challenging. In this work we investigate stau coannihilation models within supergravity grand unified models and the potential of discovery of such models at the LHC in the post-Higgs boson discovery era. We utilize a variety of signal regions to optimize the discovery of supersymmetry in the stau coannihilation region. In the analysis presented we impose the relic density constraint as well as the constraint of the Higgs boson mass. The range of sparticle masses discoverable up to the optimal integrated luminosity of the HL-LHC is investigated. It is found that the mass difference between the stau and the neutralino does not exceed ˜20 GeV over the entire mass range of the models explored. Thus the discovery of a supersymmetric signal arising from the stau coannihilation region will also provide a measurement of the neutralino mass. The direct detection of neutralino dark matter is analyzed within the class of stau coannihilation models investigated. The analysis is extended to include multiparticle coannihilation where stau along with chargino and the second neutralino enter into the coannihilation process.
From QSAR to QSIIR: Searching for Enhanced Computational Toxicology Models
Zhu, Hao
2017-01-01
Quantitative Structure-Activity Relationship (QSAR) is the most frequently used modeling approach for exploring how the biological, toxicological, or other activities/properties of chemicals depend on their molecular features. In the past two decades, QSAR modeling has been used extensively in the drug discovery process. However, the predictive models resulting from QSAR studies have limited use for chemical risk assessment, especially for animal and human toxicity evaluations, due to low predictivity for new compounds. To develop enhanced toxicity models with independently validated external prediction power, computational toxicologists have pursued novel modeling protocols based on the rapidly increasing toxicity testing data of recent years. This chapter reviews the recent effort in our laboratory to incorporate biological testing results as descriptors in the toxicity modeling process. This effort extended the concept of QSAR to Quantitative Structure In vitro-In vivo Relationship (QSIIR). The QSIIR study examples provided in this chapter indicate that QSIIR models based on hybrid (biological and chemical) descriptors are indeed superior to conventional QSAR models based only on chemical descriptors for several animal toxicity endpoints. We believe that the applications introduced in this review will be of interest and value to researchers working in the field of computational drug discovery and environmental chemical risk assessment. PMID:23086837
NASA Astrophysics Data System (ADS)
Kurtz, N.; Marks, N.; Cooper, S. K.
2014-12-01
Scientific ocean drilling through the International Ocean Discovery Program (IODP) has contributed extensively to our knowledge of Earth systems science. However, many of its methods and discoveries can seem abstract and complicated to students. Collaborations between scientists and educators/artists to create accurate yet engaging demonstrations and activities have been crucial to increasing understanding and stimulating interest in fascinating geological topics. One such collaboration, which came out of Expedition 345 to the Hess Deep Rift, resulted in an interactive lab exploring how rocks are sampled from the usually inaccessible lower oceanic crust, offering insight into the geological processes that form the structure of the Earth's crust. This Hess Deep Interactive Lab aims to explain several significant discoveries made by ocean drilling, utilizing images of actual thin sections and core samples recovered from IODP expeditions. Participants can interact with a physical model to learn about the coring and drilling processes and gain an understanding of seafloor structures. The lab grew out of the need to explain fundamental notions of ocean crust formed at fast-spreading ridges. A complementary interactive online lab can be accessed at www.joidesresolution.org for students to engage further with these concepts. This project explores the relationship between physical and online models to further understanding, including what we can learn from the pros and cons of each.
Efficient discovery of responses of proteins to compounds using active learning
2014-01-01
Background: Drug discovery and development have been aided by high-throughput screening methods that detect compound effects on a single target. However, when using focused initial screening, undesirable secondary effects are often detected late in the development process, after significant investment has been made. An alternative approach would be to screen against undesired effects early in the process, but the number of possible secondary targets makes this prohibitively expensive. Results: This paper describes methods for making this global approach practical by constructing predictive models for many target responses to many compounds and using them to guide experimentation. We demonstrate for the first time that by jointly modeling targets and compounds using descriptive features and using active machine-learning methods, accurate models can be built by doing only a small fraction of possible experiments. The methods were evaluated by computational experiments using a dataset of 177 assays and 20,000 compounds constructed from the PubChem database. Conclusions: An average of nearly 60% of all hits in the dataset were found after exploring only 3% of the experimental space, which suggests that active learning can be used to enable more complete characterization of compound effects than otherwise affordable. The methods described are also likely to find widespread application outside drug discovery, such as for characterizing the effects of a large number of compounds or inhibitory RNAs on a large number of cell or tissue phenotypes. PMID:24884564
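The joint modeling plus active learning strategy can be illustrated in miniature. The sketch below is a toy stand-in (the hit structure, the row/column hit-rate surrogate, and all parameters are invented for illustration, not taken from the paper): hits depend jointly on hidden target and compound properties, and each round the surrogate picks the most promising untested (target, compound) pairs to run next.

```python
import random

def active_screen(n_targets=25, n_compounds=40, batch=50, rounds=6, seed=7):
    """Toy active-learning loop over a target x compound grid. Hits depend
    jointly on hidden target and compound properties; the surrogate 'model'
    scores untested pairs by the smoothed hit rates of their target row and
    compound column, and each round the top-scored batch is 'run'."""
    rng = random.Random(seed)
    t_prop = [rng.random() for _ in range(n_targets)]
    c_prop = [rng.random() for _ in range(n_compounds)]
    hit = {(t, c): t_prop[t] + c_prop[c] > 1.2
           for t in range(n_targets) for c in range(n_compounds)}
    all_pairs = sorted(hit)
    tested = set(rng.sample(all_pairs, batch))   # random initial experiments
    for _ in range(rounds):
        # tally observed hits per target row and per compound column
        t_rate, c_rate = {}, {}
        for agg, key in ((t_rate, 0), (c_rate, 1)):
            for p in tested:
                h, n = agg.get(p[key], (0, 0))
                agg[p[key]] = (h + hit[p], n + 1)
        def score(p):
            th, tn = t_rate.get(p[0], (0, 0))
            ch, cn = c_rate.get(p[1], (0, 0))
            # Laplace-smoothed row hit rate times column hit rate
            return (1 + th) / (2 + tn) * (1 + ch) / (2 + cn)
        untested = [p for p in all_pairs if p not in tested]
        tested.update(sorted(untested, key=score, reverse=True)[:batch])
    found = sum(hit[p] for p in tested)
    return found, sum(hit.values()), len(tested), len(all_pairs)
```

Even this crude surrogate concentrates experiments in hit-rich rows and columns, recovering a disproportionate share of all hits from a small tested fraction, which is the qualitative effect the paper quantifies on real assay data.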
Coming In: Queer Narratives of Sexual Self-Discovery.
Rosenberg, Shoshana
2017-10-09
Many models of queer sexuality continue to depict a linear narrative of sexual development, beginning in repression/concealment and eventuating in coming out. The present study sought to challenge this by engaging in a hermeneutically informed thematic analysis of interviews with eight queer people living in Western Australia. Four themes were identified: "searching for identity," "society, stigma, and self," "sexual self-discovery," and "coming in." Interviewees discussed internalized homophobia and its impact on their lives; experiences and implications of finding a community and achieving a sense of belonging; the concept of sexual self-discovery as a lifelong process; and sexuality as fluid, dynamic, and situational rather than static. The article concludes by suggesting that the idea of "coming in" (arriving at a place of acceptance of one's sexuality, regardless of its fluidity or how it is viewed by society) offers considerable analytic leverage for understanding the journeys of sexual self-discovery of queer-identified people.
Heifetz, Alexander; Barker, Oliver; Verquin, Geraldine; Wimmer, Norbert; Meutermans, Wim; Pal, Sandeep; Law, Richard J; Whittaker, Mark
2013-05-24
Obesity is an increasingly common disease. While antagonism of the melanin-concentrating hormone-1 receptor (MCH-1R) has been widely reported as a promising therapeutic avenue for obesity treatment, no MCH-1R antagonists have reached the market. Discovery and optimization of new chemical matter targeting MCH-1R is hindered by reduced HTS success rates and a lack of structural information about the MCH-1R binding site. X-ray crystallography and NMR, the major experimental sources of structural information, are very slow processes for membrane proteins and are not currently feasible for every GPCR or GPCR-ligand complex. This situation significantly limits the ability of these methods to impact the drug discovery process for GPCR targets in "real-time", and hence, there is an urgent need for other practical and cost-efficient alternatives. We present here a conceptually pioneering approach that integrates GPCR modeling with design, synthesis, and screening of a diverse library of sugar-based compounds from the VAST technology (versatile assembly on stable templates) to provide structural insights on the MCH-1R binding site. This approach creates a cost-efficient new avenue for structure-based drug discovery (SBDD) against GPCR targets. In our work, a primary VAST hit was used to construct a high-quality MCH-1R model. Following model validation, a structure-based virtual screen yielded a 14% hit rate and 10 novel chemotypes of potent MCH-1R antagonists, including EOAI3367472 (IC50 = 131 nM) and EOAI3367474 (IC50 = 213 nM).
Zheng, Chunli; Wang, Jinan; Liu, Jianling; Pei, Mengjie; Huang, Chao; Wang, Yonghua
2014-08-01
The term systems pharmacology describes a field of study that uses computational and experimental approaches to broaden the view of drug actions rooted in molecular interactions and advance the process of drug discovery. The aim of this work is to highlight the role that systems pharmacology plays across multi-target drug discovery from natural products for cardiovascular diseases (CVDs). Firstly, based on network pharmacology methods, we reconstructed the drug-target and target-target networks to determine the putative protein target set of multi-target drugs for CVDs treatment. Secondly, we integrated a compound dataset of natural products and then obtained a multi-target compound subset by a virtual-screening process. Thirdly, a drug-likeness evaluation was applied to find the ADME-favorable compounds in this subset. Finally, we conducted in vitro experiments to evaluate the reliability of the selected chemicals and targets. We found that four of the five randomly selected natural molecules can effectively act on the target set for CVDs, indicating the soundness of our systems-based method. This strategy may serve as a new model for multi-target drug discovery for complex diseases.
Sordaria, a model system to uncover links between meiotic pairing and recombination
Zickler, Denise; Espagne, Eric
2017-01-01
The mycelial fungus Sordaria macrospora was first used as an experimental system for meiotic recombination. This review shows that it also provides a powerful cytological system for dissecting chromosome dynamics in wild-type and mutant meioses. Fundamental cytogenetic findings include: (1) the identification of presynaptic alignment as a key step in pairing of homologous chromosomes; (2) the discovery that biochemical complexes that mediate recombination at the DNA level concomitantly mediate pairing of homologs; (3) the finding that this pairing process involves not only resolution but also avoidance of chromosomal entanglements, and that the resolution system includes dissolution of constraining DNA recombination interactions, achieved through a unique role of Mlh1; (4) the discovery that the central components of the synaptonemal complex directly mediate the re-localization of recombination proteins from on-axis to between-axis positions; (5) the identification of the putative STUbL protein Hei10 as a structure-based signal-transduction molecule that coordinates progression and differentiation of recombinational interactions at multiple stages; (6) the discovery that a single interference process mediates both nucleation of the SC and designation of crossover sites, thereby ensuring even spacing of both features; and (7) the discovery of local modulation of sister-chromatid cohesion at sites of crossover recombination. PMID:26877138
LULL(ed) into complacency: a perspective on licenses and stem cell translational science
2013-01-01
The US has had a very successful model for facilitating the translation of a basic discovery to a commercial application. The success of the model has hinged on providing clarity on ownership of a discovery, facilitating the licensing process, providing adequate incentive to the inventors, and developing a self-sustaining model for reinvestment. In recent years, technological, political, and regulatory changes have put strains on this model and in some cases have hindered progress rather than facilitated it. This is particularly true for the nascent field of regenerative medicine. To illustrate this, I will describe the contributing practices of several different entities, including universities, repositories, patent trolls, and service providers. It is my hope that the scientific community will be motivated to coordinate efforts against these obstacles to translation. PMID:23953837
Geeleher, Paul; Cox, Nancy J; Huang, R Stephanie
2016-09-21
We show that variability in general levels of drug sensitivity in pre-clinical cancer models confounds biomarker discovery. However, using a very large panel of cell lines, each treated with many drugs, we could estimate a general level of sensitivity to all drugs in each cell line. By conditioning on this variable, biomarkers were identified that were more likely to be effective in clinical trials than those identified using a conventional uncorrected approach. We find that differences in general levels of drug sensitivity are driven by biologically relevant processes. We developed a gene expression based method that can be used to correct for this confounder in future studies.
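The confounder-correction idea described above can be illustrated with a toy simulation. The code below is a schematic stand-in only (the data-generating model and all numbers are invented, and the paper's actual method is gene-expression based rather than this simple mean-response proxy): a biomarker that merely tracks general drug sensitivity looks predictive of an unrelated drug until the per-cell-line mean response is conditioned on.

```python
import random
import statistics

def apparent_effect(correct, n_lines=200, n_drugs=30, seed=3):
    """Toy model of the confounder: each cell line has a general drug
    sensitivity affecting all drugs; the biomarker tracks that general
    sensitivity but has no drug-specific effect. Naively it looks
    predictive of every drug; subtracting the per-line mean response
    (a proxy for general sensitivity) removes the spurious signal."""
    rng = random.Random(seed)
    general = [rng.gauss(0, 1) for _ in range(n_lines)]
    marker = [g + rng.gauss(0, 1) for g in general]   # correlated with general
    resp = [[general[i] + rng.gauss(0, 0.5) for _ in range(n_drugs)]
            for i in range(n_lines)]
    mean_resp = [statistics.fmean(row) for row in resp]
    drug = [row[0] for row in resp]                   # one drug of interest
    if correct:
        drug = [d - m for d, m in zip(drug, mean_resp)]
    # Pearson correlation between marker and (possibly corrected) response
    mx, my = statistics.fmean(marker), statistics.fmean(drug)
    cov = sum((a - mx) * (b - my) for a, b in zip(marker, drug))
    sx = sum((a - mx) ** 2 for a in marker) ** 0.5
    sy = sum((b - my) ** 2 for b in drug) ** 0.5
    return abs(cov / (sx * sy))
```

Running both variants shows a sizable naive correlation that collapses toward zero after conditioning, mirroring the false-positive biomarkers the paper reports for the uncorrected approach.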
[Caenorhabditis elegans: a powerful tool for drug discovery].
Jia, Xi-Hua; Cao, Cheng
2009-07-01
The simple model organism Caenorhabditis elegans has contributed substantially to fundamental research in biology. In the era of functional genomics, the nematode worm has been developed into a multi-purpose tool that can be exploited to identify disease-causing or disease-associated genes and validate potential drug targets. This, coupled with its genetic amenability, low-cost experimental manipulation, and compatibility with high-throughput screening in an intact physiological context, makes the model organism an effective toolbox for drug discovery. This review describes the unique features of C. elegans and how it can play a valuable role in our understanding of the molecular mechanisms of human diseases and in finding drug leads during the drug development process.
Clustering Words to Match Conditions: An Algorithm for Stimuli Selection in Factorial Designs
ERIC Educational Resources Information Center
Guasch, Marc; Haro, Juan; Boada, Roger
2017-01-01
With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words into dozens of other confounding variables…
Application of the Adoption Change Model in a Voluntary Non-Profit Arts Organization.
ERIC Educational Resources Information Center
Macduff, Nancy
The staff of a nonprofit music support organization plagued with low morale initiated a process of change that the executive director, with the help of a consultant/adult educator, agreed to continue. The change process included seven phases: discovery of need, the helping relationship defined, the change problem identified, goals established,…
Microscopic information processing and communication in crowd dynamics
NASA Astrophysics Data System (ADS)
Henein, Colin Marc; White, Tony
2010-11-01
Due, perhaps, to the historical division of crowd dynamics research into psychological and engineering approaches, microscopic crowd models have tended toward modelling simple interchangeable particles with an emphasis on the simulation of physical factors. Despite the fact that people have complex (non-panic) behaviours in crowd disasters, important human factors in crowd dynamics such as information discovery and processing, changing goals and communication have not yet been well integrated at the microscopic level. We use our Microscopic Human Factors methodology to fuse a microscopic simulation of these human factors with a popular microscopic crowd model. By tightly integrating human factors with the existing model we can study the effects on the physical domain (movement, force and crowd safety) when human behaviour (information processing and communication) is introduced. In a large-room egress scenario with ample exits, information discovery and processing yields a crowd of non-interchangeable individuals who, despite close proximity, have different goals due to their different beliefs. This crowd heterogeneity leads to complex inter-particle interactions such as jamming transitions in open space; at high crowd energies, we found a freezing by heating effect (reminiscent of the disaster at Central Lenin Stadium in 1982) in which a barrier formation of naïve individuals trying to reach blocked exits prevented knowledgeable ones from exiting. Communication, when introduced, reduced this barrier formation, increasing both exit rates and crowd safety.
Styczyńska-Soczka, Katarzyna; Zechini, Luigi; Zografos, Lysimachos
2017-04-01
Parkinson's disease is a growing threat to an ever-ageing population. Despite progress in our understanding of the molecular and cellular mechanisms underlying the disease, all therapeutics currently available only act to improve symptoms and do not stop the disease process. It is therefore imperative that more effective drug discovery methods and approaches are developed, validated, and used for the discovery of disease-modifying treatments for Parkinson's. Drug repurposing has been recognized as being equally as promising as de novo drug discovery in the field of neurodegeneration and Parkinson's disease specifically. In this work, we utilize a transgenic Drosophila model of Parkinson's disease, made by expressing human alpha-synuclein in the Drosophila brain, to validate two repurposed compounds: astemizole and ketoconazole. Both have been computationally predicted to have an ameliorative effect on Parkinson's disease, but neither had been tested using an in vivo model of the disease. After treating the flies in parallel, results showed that both drugs rescue the motor phenotype that is developed by the Drosophila model with age, but only ketoconazole treatment reversed the increased dopaminergic neuron death also observed in these models, which is a hallmark of Parkinson's disease. In addition to validating the predicted improvement in Parkinson's disease symptoms for both drugs and revealing the potential neuroprotective activity of ketoconazole, these results highlight the value of Drosophila models of Parkinson's disease as key tools in the context of in vivo drug discovery, drug repurposing, and prioritization of hits, especially when coupled with computational predictions.
ADDME – Avoiding Drug Development Mistakes Early: central nervous system drug discovery perspective
Tsaioun, Katya; Bottlaender, Michel; Mabondzo, Aloise
2009-01-01
The advent of early absorption, distribution, metabolism, excretion, and toxicity (ADMET) screening has increased the attrition rate of weak drug candidates early in the drug-discovery process, and decreased the proportion of compounds failing in clinical trials for ADMET reasons. This paper reviews the history of ADMET screening and its place in pharmaceutical development, and central nervous system drug discovery in particular. Assays that have been developed in response to specific needs and improvements in technology that result in higher throughput and greater accuracy of prediction of human mechanisms of absorption and toxicity are discussed. The paper concludes with the authors' forecast of new models that will better predict human efficacy and toxicity. PMID:19534730
Closed-Loop Multitarget Optimization for Discovery of New Emulsion Polymerization Recipes
2015-01-01
Self-optimization of chemical reactions enables faster optimization of reaction conditions or discovery of molecules with required target properties. The technology of self-optimization has been expanded to the discovery of new process recipes for the manufacture of complex functional products. A new machine-learning algorithm, specifically designed for multiobjective target optimization with an explicit aim to minimize the number of “expensive” experiments, guides the discovery process. This “black-box” approach assumes no a priori knowledge of the chemical system and hence is particularly suited to rapid development of processes to manufacture specialist low-volume, high-value products. The approach was demonstrated in the discovery of process recipes for a semibatch emulsion copolymerization, targeting a specific particle size and full conversion. PMID:26435638
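The closed-loop idea can be sketched generically. The code below is a deliberately simplified stand-in (random local search with a scalarized objective, not the machine-learning algorithm of the paper; the surrogate process model, targets, and bounds are all invented for illustration): each iteration proposes a recipe, "runs" one expensive experiment, and keeps the candidate only if it reduces the worst relative deviation from the targets.

```python
import math
import random

def optimize_recipe(process, targets, bounds, budget=200, tol=0.02, seed=5):
    """Closed-loop black-box recipe search: scalarize the multi-objective
    error as the maximum relative deviation from the targets, and sample
    new candidate recipes around the current best with a slowly shrinking
    step, counting every 'expensive' experiment."""
    rng = random.Random(seed)
    def error(outputs):
        return max(abs(o - t) / abs(t) for o, t in zip(outputs, targets))
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_err = error(process(best))
    n_exp, step = 1, 0.5
    while n_exp < budget and best_err > tol:
        cand = [min(hi, max(lo, x + step * (hi - lo) * rng.uniform(-1, 1)))
                for x, (lo, hi) in zip(best, bounds)]
        err = error(process(cand))
        n_exp += 1
        if err < best_err:
            best, best_err = cand, err
        else:
            step = max(0.05, step * 0.97)   # tighten the search on failure
    return best, best_err, n_exp

# Hypothetical emulsion-polymerization surrogate: particle size falls with
# surfactant level, conversion rises with feed time (both relations invented).
def toy_process(recipe):
    surfactant, feed = recipe
    size = 200.0 / (1.0 + 4.0 * surfactant)       # nm
    conversion = 1.0 - math.exp(-4.0 * feed)
    return size, conversion

best, err, n_exp = optimize_recipe(toy_process, targets=(80.0, 0.98),
                                   bounds=[(0.0, 1.0), (0.0, 2.0)])
```

The experiment counter makes the paper's central trade-off explicit: the algorithm's job is to reach the target properties while spending as few runs of `process` as possible.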
Risbrough, Victoria B; Glenn, Daniel E; Baker, Dewleen G
The use of quantitative, laboratory-based measures of threat in humans for proof-of-concept studies and target development for novel drug discovery has grown tremendously in the last 2 decades. In particular, in the field of posttraumatic stress disorder (PTSD), human models of fear conditioning have been critical in shaping our theoretical understanding of fear processes and importantly, validating findings from animal models of the neural substrates and signaling pathways required for these complex processes. Here, we will review the use of laboratory-based measures of fear processes in humans including cued and contextual conditioning, generalization, extinction, reconsolidation, and reinstatement to develop novel drug treatments for PTSD. We will primarily focus on recent advances in using behavioral and physiological measures of fear, discussing their sensitivity as biobehavioral markers of PTSD symptoms, their response to known and novel PTSD treatments, and in the case of d-cycloserine, how well these findings have translated to outcomes in clinical trials. We will highlight some gaps in the literature and needs for future research, discuss benefits and limitations of these outcome measures in designing proof-of-concept trials, and offer practical guidelines on design and interpretation when using these fear models for drug discovery.
Imbalanced target prediction with pattern discovery on clinical data repositories.
Chan, Tak-Ming; Li, Yuxi; Chiau, Choo-Chiap; Zhu, Jane; Jiang, Jie; Huo, Yong
2017-04-20
Clinical data repositories (CDR) have great potential to improve outcome prediction and risk modeling. However, most clinical studies require careful study design, dedicated data collection efforts, and sophisticated modeling techniques before a hypothesis can be tested. We aim to bridge this gap, so that clinical domain users can perform first-hand prediction on existing repository data without complicated handling, and obtain insightful patterns of imbalanced targets for a formal study before it is conducted. We specifically target interpretability for domain users, so that the model can be conveniently explained and applied in clinical practice. We propose an interpretable pattern model that is tolerant of noise (missing values) in practice data. To address the challenge of imbalanced targets of interest in clinical research, e.g., death rates of less than a few percent, the geometric mean of sensitivity and specificity (G-mean) is employed as the optimization criterion, for which a simple but effective heuristic algorithm is developed. We compared pattern discovery to clinically interpretable methods on two retrospective clinical datasets, which contain 14.9% deaths within 1 year in the thoracic dataset and 9.1% deaths in the cardiac dataset, respectively. In spite of the imbalance challenge evident for other methods, pattern discovery consistently shows competitive cross-validated prediction performance. Compared to logistic regression, Naïve Bayes, and decision trees, pattern discovery achieves statistically significantly (p-values < 0.01, Wilcoxon signed rank test) better averaged testing G-means and F1-scores (harmonic mean of precision and sensitivity). Without requiring sophisticated technical processing of data or tweaking, the prediction performance of pattern discovery is consistently comparable to the best achievable performance. Pattern discovery has been demonstrated to be robust and valuable for target prediction on existing clinical data repositories with imbalance and noise. The prediction results and interpretable patterns can provide insights in an agile and inexpensive way for potential formal studies.
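The G-mean criterion used here is easy to state concretely. The sketch below (generic code, not the paper's heuristic algorithm) computes sensitivity, specificity, G-mean, and F1 from binary predictions, and shows why G-mean suits imbalanced targets: an all-negative classifier can be highly accurate yet scores a G-mean of zero.

```python
import math

def confusion_scores(y_true, y_pred):
    """Compute sensitivity, specificity, G-mean and F1 from binary labels.
    G-mean = sqrt(sensitivity * specificity) rewards balanced performance
    on both classes, unlike raw accuracy on imbalanced data."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    gmean = math.sqrt(sens * spec)
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    return sens, spec, gmean, f1
```

For a dataset with 30% positives, predicting all negatives gives 70% accuracy but sensitivity 0 and hence G-mean 0, which is exactly the degenerate behavior the criterion penalizes.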
Translational research: understanding the continuum from bench to bedside.
Drolet, Brian C; Lorenzi, Nancy M
2011-01-01
The process of translating basic scientific discoveries to clinical applications, and ultimately to public health improvements, has emerged as an important but difficult objective in biomedical research. The process is best described as a "translation continuum" because various resources and actions are involved in this progression of knowledge, which advances discoveries from the bench to the bedside. The current model of this continuum focuses primarily on translational research, which is merely one component of the overall translation process. This approach is ineffective. A revised model addressing the entire continuum would provide a methodology to identify and describe all translational activities (e.g., implementation, adoption, translational research, etc.) as well as their place within the continuum. This manuscript reviews and synthesizes the literature to provide an overview of the current terminology and model for translation. A modification of the existing model is proposed to create a framework called the Biomedical Research Translation Continuum, which defines the translation process and describes the progression of knowledge from laboratory to health gains. This framework clarifies translation for readers who have not followed the evolving and complicated models currently described. Authors and researchers may use the continuum to better understand and describe their research as well as the translational activities within a conceptual framework. Additionally, the framework may increase the advancement of knowledge by refining discussions of translation and allowing more precise identification of barriers to progress. Copyright © 2011 Mosby, Inc. All rights reserved.
Detangling Spaghetti: Tracking Deep Ocean Currents in the Gulf of Mexico
ERIC Educational Resources Information Center
Curran, Mary Carla; Bower, Amy S.; Furey, Heather H.
2017-01-01
Creation of physical models can help students learn science by enabling them to be more involved in the scientific process of discovery and to use multiple senses during investigations. This activity achieves these goals by having students model ocean currents in the Gulf of Mexico. In general, oceans play a key role in influencing weather…
Opening the Mind's Eye to Science.
ERIC Educational Resources Information Center
Hassard, Jack
1982-01-01
Emphasizes the importance of imagination in scientific discovery and science education and identifies three processes which increase the richness of the visualization experience: relaxing, concentrating, and seeing. Suggests topics for guided experiences and example models for earth/space, life, and physical sciences. (DC)
Aptamers as tools for target prioritization and lead identification.
Burgstaller, Petra; Girod, Anne; Blind, Michael
2002-12-15
The increasing number of potential drug target candidates has driven the development of novel technologies designed to identify functionally important targets and enhance the subsequent lead discovery process. Highly specific synthetic nucleic acid ligands, also known as aptamers, offer an exciting new route in the drug discovery process by linking target validation directly with HTS. Recently, aptamers have proven to be valuable tools for modulating the function of endogenous cellular proteins in their natural environment. A set of technologies has been developed to use these sophisticated ligands for the validation of potential drug targets in disease models. Moreover, aptamers that are specific antagonists of protein function can act as substitute interaction partners in HTS assays to facilitate the identification of small-molecule lead compounds.
NASA Astrophysics Data System (ADS)
Ganguly, A. R.; Steinbach, M.; Kumar, V.
2009-12-01
The IPCC AR4 not only provided conclusive evidence about anticipated global warming at century scales, but also indicated with a high level of certainty that the warming is caused by anthropogenic emissions. However, an outstanding knowledge gap is the development of credible projections of climate extremes and their impacts. Climate extremes are defined in this context as extreme weather and hydrological events, as well as changes in regional hydro-meteorological patterns, especially at decadal scales. While projections of temperature extremes from climate models show relatively better skill, those of hydrological variables and their extremes have significant shortcomings. Credible projections of tropical storms, sea level rise, coastal storm surge, land glacier melt, and landslides remain elusive. The next generation of climate models is expected to have higher precision; however, their ability to provide more accurate projections of climate extremes remains to be tested. Projections of observed trends into the future may not be reliable in non-stationary environments like climate change, even though functional relationships derived from physics may hold. On the other hand, assessments of climate change impacts that are useful for stakeholders and policy makers depend critically on regional- and decadal-scale projections of climate extremes. Thus, climate impacts scientists often need to develop qualitative inferences about the not-so-well-predicted climate extremes based on insights from observations (e.g., increased hurricane intensity) or conceptual understanding (e.g., the relation of wildfires to regional warming or drying, and of hurricanes to SST). However, neither conceptual understanding nor observed trends may be reliable when extrapolating in a non-stationary environment. These urgent societal priorities offer fertile ground for nonlinear modeling and knowledge discovery approaches.
Thus, qualitative inferences on climate extremes and impacts may be transformed into quantitative predictive insights based on a combination of hypothesis-guided data analysis and relatively hypothesis-free but data-guided discovery processes. The analysis and discovery approaches need to be cognizant of climate data characteristics like nonlinear processes, low-frequency variability, long-range spatial dependence and long-memory temporal processes; the value of physically-motivated conceptual understanding and functional associations; as well as possible thresholds and tipping points in the impacted natural, engineered or human systems. Case studies focusing on new methodologies as well as novel climate insights are discussed with a focus on stakeholder requirements.
Wiley, Emily A.; Stover, Nicholas A.
2014-01-01
Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately “publish” their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students’ efforts to engage the scientific process and pursue additional research opportunities beyond the course. PMID:24591511
Have artificial neural networks met expectations in drug discovery as implemented in QSAR framework?
Dobchev, Dimitar; Karelson, Mati
2016-07-01
Artificial neural networks (ANNs) are highly adaptive nonlinear optimization algorithms that have been applied in many diverse scientific endeavors, ranging from economics, engineering, physics, and chemistry to medical science. Notably, in the past two decades, ANNs have been used widely in the process of drug discovery. In this review, the authors discuss advantages and disadvantages of ANNs in drug discovery as incorporated into the quantitative structure-activity relationships (QSAR) framework. Furthermore, the authors examine the recent studies, which span over a broad area with various diseases in drug discovery. In addition, the authors attempt to answer the question about the expectations of the ANNs in drug discovery and discuss the trends in this field. The old pitfalls of overtraining and interpretability are still present with ANNs. However, despite these pitfalls, the authors believe that ANNs have likely met many of the expectations of researchers and are still considered as excellent tools for nonlinear data modeling in QSAR. It is likely that ANNs will continue to be used in drug development in the future.
16 CFR 3.31A - Expert discovery.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Expert discovery. 3.31A Section 3.31A... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Discovery; Compulsory Process § 3.31A Expert discovery. (a) The... later than 1 day after the close of fact discovery, meaning the close of discovery except for...
16 CFR 3.31A - Expert discovery.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Expert discovery. 3.31A Section 3.31A... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Discovery; Compulsory Process § 3.31A Expert discovery. (a) The... later than 1 day after the close of fact discovery, meaning the close of discovery except for...
16 CFR 3.31A - Expert discovery.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Expert discovery. 3.31A Section 3.31A... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Discovery; Compulsory Process § 3.31A Expert discovery. (a) The... later than 1 day after the close of fact discovery, meaning the close of discovery except for...
2016-01-01
Background: As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective: To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods: A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. Results: The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions: A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
NASA Astrophysics Data System (ADS)
Al-Bustany, Fatin Khalil Ismail
1989-09-01
My aim in this dissertation is to develop an evolutionary conception of science based on recent studies in evolution theory, the thermodynamics of non-equilibrium and information theory, as exemplified in the works of Prigogine, Jantsch, Wicken and Gatlin. The nature of scientific change is of interest to philosophers and historians of science. Some construe it after a revolutionary model (e.g. Kuhn), others adopt an evolutionary view (e.g. Toulmin). It appears to me that it is possible to construct an evolutionary model encompassing the revolutionary mode as well. The following strategies are employed: (1) A distinction is made between two types of growth: one represents gradual change, the other designates radical transformations, and two principles underlying the process of change, one of conservation, the other of innovation. (2) Science in general, and scientific theories in particular, are looked upon as dissipative structures. These are characterised by openness, irreversibility and self-organisation. In terms of these, one may identify a state of "normal" growth and another of violent fluctuations leading to a new order (revolutionary phase). These fluctuations are generated by the flow of information coming from the observable world. The chief merits of this evolutionary model of the development of science lie in the emphasis it puts on the relation of science to its environment, in the description of scientific change as a process of interaction between internal and external elements (structural, conceptual, and cultural), in the enhancement of our understanding of progress and rationality in science, and in the post-Neo-Darwinian conception of evolution, stressing self-organisation, the innovativeness of the evolutionary process and the trend toward complexification. These features are also manifested in the process of discovery, which is a fundamental part of the scientific enterprise.
In addition, a distinction is made between two types of discovery which serves as a criterion for delineating various episodes in the development of science. The evolutionary model further displays a complementarity mode of description on several levels: between science and its milieu, stability and instability, discovery and confirmation.
Cazzanelli, Giulia; Francisco, Rita; Azevedo, Luísa; Carvalho, Patrícia Dias; Almeida, Ana; Côrte-Real, Manuela; Oliveira, Maria José; Lucas, Cândida; Sousa, Maria João
2018-01-01
The exploitation of the yeast Saccharomyces cerevisiae as a biological model for the investigation of complex molecular processes conserved in multicellular organisms, such as humans, has allowed fundamental biological discoveries. When comparing yeast and human proteins, it is clear that both amino acid sequences and protein functions are often very well conserved. One example of the high degree of conservation between human and yeast proteins is highlighted by the members of the RAS family. Indeed, the study of the signaling pathways regulated by RAS in yeast cells led to the discovery of properties that were often found interchangeable with RAS proto-oncogenes in human pathways, and vice versa. In this work, we performed an updated critical literature review on human and yeast RAS pathways, specifically highlighting the similarities and differences between them. Moreover, we emphasized the contribution of studying yeast RAS pathways for the understanding of human RAS and how this model organism can contribute to unveil the roles of RAS oncoproteins in the regulation of mechanisms important in the tumorigenic process, like autophagy. PMID:29463063
19 CFR 210.61 - Discovery and compulsory process.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Discovery and compulsory process. 210.61 Section 210.61 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Temporary Relief § 210.61 Discovery and compulsory...
Ormes, James D; Zhang, Dan; Chen, Alex M; Hou, Shirley; Krueger, Davida; Nelson, Todd; Templeton, Allen
2013-02-01
There has been a growing interest in amorphous solid dispersions for bioavailability enhancement in drug discovery. Spray drying, as shown in this study, is well suited to produce prototype amorphous dispersions in the Candidate Selection stage where drug supply is limited. This investigation mapped the processing window of a micro-spray dryer to achieve desired particle characteristics and optimize throughput/yield. Effects of processing variables on the properties of hypromellose acetate succinate were evaluated by a fractional factorial design of experiments. Parameters studied include solid loading, atomization, nozzle size, and spray rate. Response variables include particle size, morphology and yield. Unlike most other commercial small-scale spray dryers, the ProCepT was capable of producing particles with a relatively wide mean particle size, ca. 2-35 µm, allowing material properties to be tailored to support various applications. In addition, an optimized throughput of 35 g/hour with a yield of 75-95% was achieved, which suffices to support studies from Lead-identification/Lead-optimization to early safety studies. A regression model was constructed to quantify the relationship between processing parameters and the response variables. The response surface curves provide a useful tool to design processing conditions, leading to a reduction in development time and drug usage to support drug discovery.
The Quirky Collider Signals of Folded Supersymmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdman, Gustavo; Chacko, Z.; Goh, Hock-Seng
2008-08-01
We investigate the collider signals associated with scalar quirks ('squirks') in folded supersymmetric models. As opposed to regular superpartners in supersymmetric models these particles are uncolored, but are instead charged under a new confining group, leading to radically different collider signals. Due to the new strong dynamics, squirks that are pair produced do not hadronize separately, but rather form a highly excited bound state. The excited 'squirkonium' loses energy to radiation before annihilating back into Standard Model particles. We calculate the branching fractions into various channels for this process, which is prompt on collider time-scales. The most promising annihilation channel for discovery is W+photon which dominates for squirkonium near its ground state. We demonstrate the feasibility of the LHC search, showing that the mass peak is visible above the SM continuum background and estimate the discovery reach.
Generating Focused Molecule Libraries for Drug Discovery with Recurrent Neural Networks
2017-01-01
In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery. PMID:29392184
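The generative idea in this abstract (learn a next-character distribution over SMILES strings, then sample new strings from it) can be illustrated with a much simpler stand-in for an RNN. The sketch below is a toy: a character-bigram Markov model trained on a handful of hypothetical SMILES strings, standing in for the learned RNN distribution only conceptually.

```python
import random
from collections import defaultdict

# Toy SMILES corpus standing in for a real training set (hypothetical molecules).
corpus = ["CCO", "CCN", "CCCO", "CCCN", "CC(=O)O", "CC(=O)N"]

START, END = "^", "$"

def train_bigram(smiles_list):
    """Count character-bigram transitions, the simplest stand-in for an
    RNN's learned next-character distribution."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in smiles_list:
        chars = [START] + list(s) + [END]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, rng, max_len=20):
    """Sample one string character by character from the transition counts."""
    out, cur = [], START
    for _ in range(max_len):
        nxt_chars = list(counts[cur])
        weights = [counts[cur][c] for c in nxt_chars]
        cur = rng.choices(nxt_chars, weights=weights)[0]
        if cur == END:
            break
        out.append(cur)
    return "".join(out)

model = train_bigram(corpus)
rng = random.Random(0)
generated = [sample(model, rng) for _ in range(5)]
print(generated)  # strings built only from transitions seen in the corpus
```

Fine-tuning in the paper's sense would correspond to re-estimating this distribution on a small set of active molecules; an RNN simply replaces the bigram table with a learned, longer-range model.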
Discovery Reconceived: Product before Process
ERIC Educational Resources Information Center
Abrahamson, Dor
2012-01-01
Motivated by the question, "What exactly about a mathematical concept should students discover, when they study it via discovery learning?", I present and demonstrate an interpretation of discovery pedagogy that attempts to address its criticism. My approach hinges on decoupling the solution process from its resultant product. Whereas theories of…
Sordaria, a model system to uncover links between meiotic pairing and recombination.
Zickler, Denise; Espagne, Eric
2016-06-01
The mycelial fungus Sordaria macrospora was first used as experimental system for meiotic recombination. This review shows that it provides also a powerful cytological system for dissecting chromosome dynamics in wild-type and mutant meioses. Fundamental cytogenetic findings include: (1) the identification of presynaptic alignment as a key step in pairing of homologous chromosomes. (2) The discovery that biochemical complexes that mediate recombination at the DNA level concomitantly mediate pairing of homologs. (3) This pairing process involves not only resolution but also avoidance of chromosomal entanglements and the resolution system includes dissolution of constraining DNA recombination interactions, achieved by a unique role of Mlh1. (4) Discovery that the central components of the synaptonemal complex directly mediate the re-localization of the recombination proteins from on-axis to in-between homologue axis positions. (5) Identification of putative STUbL protein Hei10 as a structure-based signal transduction molecule that coordinates progression and differentiation of recombinational interactions at multiple stages. (6) Discovery that a single interference process mediates both nucleation of the SC and designation of crossover sites, thereby ensuring even spacing of both features. (7) Discovery of local modulation of sister-chromatid cohesion at sites of crossover recombination. Copyright © 2016 Elsevier Ltd. All rights reserved.
General view of the Orbiter Discovery on runway 33 at ...
General view of the Orbiter Discovery on runway 33 at Kennedy Space Center shortly after landing. The orbiter is processed and prepared for being towed to the Orbiter Processing Facility for continued post flight processing and pre flight preparations for its next mission. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Genomic Indicators in the blood predict drug-induced liver injury
Hepatotoxicity and other forms of liver injury stemming from exposure to toxicants and idiosyncratic drug reactions are major concerns during the drug discovery process. Animal model systems have been utilized in an attempt to extrapolate the risk of harmful agents to humans and...
Remarks to Eighth Annual State of Modeling and Simulation
1999-06-04
organization, training as well as materiel. Discovery vice Verification; Tolerance for Surprise; Free play; Red Team; Iterative Process; Push to failure... Account for responsive & innovative future adversaries - free play, adaptive strategies and tactics by professional red teams. Address C2 issues & human
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loehle, C.
1994-05-01
The three great myths, which form a sort of triumvirate of misunderstanding, are the Eureka! myth, the hypothesis myth, and the measurement myth. These myths are prevalent among scientists as well as among observers of science. The Eureka! myth asserts that discovery occurs as a flash of insight, and as such is not subject to investigation. This leads to the perception that discovery or deriving a hypothesis is a moment or event rather than a process. Events are singular and not subject to description. The hypothesis myth asserts that proper science is motivated by testing hypotheses, and that if something is not experimentally testable then it is not scientific. This myth leads to absurd posturing by some workers conducting empirical descriptive studies, who dress up their study with a "hypothesis" to obtain funding or get it published. Methods papers are often rejected because they do not address a specific scientific problem. The fact is that many of the great breakthroughs in science involve methods and not hypotheses, or arise from largely descriptive studies. Those captured by this myth also try to block funding for those developing methods. The third myth is the measurement myth, which holds that determining what to measure is straightforward, so one doesn't need a lot of introspection to do science. As one ecologist put it to me, "Don't give me any of that philosophy junk, just let me out in the field. I know what to measure." These myths lead to difficulties for scientists who must face peer review to obtain funding and to get published. These myths also inhibit the study of science as a process. Finally, these myths inhibit creativity and suppress innovation. In this paper I first explore these myths in more detail and then propose a new model of discovery that opens the supposedly miraculous process of discovery to closer scrutiny.
Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.
Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib
2017-03-01
A video is understood by users in terms of entities present in it. Entity Discovery is the task of building appearance model for each entity (e.g., a person), and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames, and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at tracklet-level. We extend Chinese Restaurant Process (CRP) to TC-CRP, and further to Temporally Coherent Chinese Restaurant Franchise (TC-CRF) to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos unlike existing approaches, and can automatically reject false tracklets. Finally we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities, to create a semantically meaningful summary.
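The Chinese Restaurant Process that TC-CRP extends can be simulated in a few lines. The sketch below is the generic CRP, not the authors' temporally coherent variant: each item (a tracklet) joins an existing cluster (an entity) with probability proportional to the cluster's size, or opens a new cluster with mass given by the concentration parameter alpha.

```python
import random

def crp_assignments(n_items, alpha, rng):
    """Sequentially assign items to clusters under a Chinese Restaurant Process:
    item i joins existing cluster k with prob. size_k / (i + alpha),
    or starts a new cluster with prob. alpha / (i + alpha)."""
    sizes = []      # sizes[k] = number of items already in cluster k
    labels = []
    for _ in range(n_items):
        weights = sizes + [alpha]   # existing clusters + new-cluster mass
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(1)         # open a new cluster ("table")
        else:
            sizes[k] += 1
        labels.append(k)
    return labels

rng = random.Random(42)
labels = crp_assignments(200, alpha=2.0, rng=rng)
n_clusters = len(set(labels))
print(n_clusters)  # far fewer clusters than items: the rich-get-richer effect
```

The nonparametric appeal is visible here: the number of entities is not fixed in advance but grows slowly (roughly logarithmically) with the number of tracklets.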
2011-04-01
CAPE CANAVERAL, Fla. - Main engine No. 1, which was removed from space shuttle Discovery, is transported from Orbiter Processing Facility-2 to the Space Shuttle Main Engine Processing Facility at NASA's Kennedy Space Center in Florida. The removal was part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. Photo credit: NASA/Jack Pfaller
Machine learning models for lipophilicity and their domain of applicability.
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-01-01
Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity usually have been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm--a Gaussian process model--this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the last months (including compounds from new projects) 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model based error bars, ensemble based, and distance based approaches), and investigate how well they quantify the domain of applicability of each model.
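The model-based error bars mentioned in this abstract fall out of Gaussian process regression directly: the posterior variance grows as a query point moves away from the training data. A minimal numpy sketch on toy 1-D data (not the Bayer Schering data, and with hand-picked kernel hyperparameters) illustrates this behaviour:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between 1-D input arrays a, b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP posterior mean and standard deviation (the 'error bar') at x_test."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_ss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v * v, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy 1-D "property" curve standing in for real lipophilicity measurements.
x_train = np.linspace(-3, 3, 20)
y_train = np.sin(x_train)
x_test = np.array([0.0, 10.0])   # one point near the data, one far away
mean, std = gp_predict(x_train, y_train, x_test)
print(std)  # the far-away point gets a much wider error bar
```

This is exactly the "distance from the training data" intuition behind the domain-of-applicability measures the study compares: far from the data the posterior reverts to the prior and the error bar widens accordingly.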
NASA Astrophysics Data System (ADS)
Ng, John N.; de la Puente, Alejandro; Pan, Bob Wei-Ping
2015-12-01
In this study we explore the LHC's Run II potential to the discovery of heavy Majorana neutrinos, with luminosities between 30 and 3000 fb⁻¹ in the ℓ±ℓ±jj final state. Given that there exist many models for neutrino mass generation, even within the Type I seesaw framework, we use a simplified model approach and study two simple extensions to the Standard Model, one with a single heavy Majorana neutrino, singlet under the Standard Model gauge group, and a limiting case of the left-right symmetric model. We then extend the analysis to a future hadron collider running at 100 TeV center of mass energies. This extrapolation in energy allows us to study the relative importance of the resonant production versus gauge boson fusion processes in the study of Majorana neutrinos at hadron colliders. We analyze and propose different search strategies designed to maximize the discovery potential in either the resonant production or the gauge boson fusion modes.
Accessing external innovation in drug discovery and development.
Tufféry, Pierre
2015-06-01
A decline in the productivity of the pharmaceutical industry research and development (R&D) pipeline has highlighted the need to reconsider the classical strategies of drug discovery and development, which are based on internal resources, and to identify new means to improve the drug discovery process. Accepting that the combination of internal and external ideas can improve innovation, ways to access external innovation, that is, opening projects to external contributions, have recently been sought. In this review, the authors look at a number of external innovation opportunities. These include increased interactions with academia via academic centers of excellence/innovation centers, better communication on projects using crowdsourcing or social media and new models centered on external providers such as built-to-buy startups or virtual pharmaceutical companies. The buzz for accessing external innovation relies on the pharmaceutical industry's major challenge to improve R&D productivity, a conjuncture favorable to increase interactions with academia and new business models supporting access to external innovation. So far, access to external innovation has mostly been considered during early stages of drug development, and there is room for enhancement. First outcomes suggest that external innovation should become part of drug development in the long term. However, the balance between internal and external developments in drug discovery can vary largely depending on the company strategies.
NASA Astrophysics Data System (ADS)
Nagendra, K. N.; Bagnulo, Stefano; Centeno, Rebecca; Jesús Martínez González, María.
2015-08-01
Preface; 1. Solar and stellar surface magnetic fields; 2. Future directions in astrophysical polarimetry; 3. Physical processes; 4. Instrumentation for astronomical polarimetry; 5. Data analysis techniques for polarization observations; 6. Polarization diagnostics of atmospheres and circumstellar environments; 7. Polarimetry as a tool for discovery science; 8. Numerical modeling of polarized emission; Author index.
Quantifying the Ease of Scientific Discovery
Arbesman, Samuel
2012-01-01
It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines – mammalian species, chemical elements, and minor planets – I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science. PMID:22328796
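The exponential-decay claim is easy to probe on synthetic data: exponential decay is linear in log space, so an ordinary least-squares fit to the log of the series recovers the decay rate. The sketch below uses fabricated data with a known rate, not the paper's species/element/planet series.

```python
import numpy as np

# Synthetic "ease of discovery" series: true exponential decay plus mild noise.
rng = np.random.default_rng(0)
t = np.arange(50, dtype=float)        # e.g. years since a field began
true_rate = 0.08
ease = np.exp(-true_rate * t) * np.exp(rng.normal(0, 0.05, t.size))

# Exponential decay is linear in log space: log(ease) = log(A) - rate * t,
# so ordinary least squares on (t, log ease) recovers the decay rate.
slope, intercept = np.polyfit(t, np.log(ease), 1)
fitted_rate = -slope
print(round(fitted_rate, 3))  # close to the true rate of 0.08
```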
Lost but making progress—Where will new analgesic drugs come from?
Borsook, David; Hargreaves, Richard; Bountra, Chas; Porreca, Frank
2015-01-01
There is a critical need for effective new pharmacotherapies for pain. The paucity of new drugs successfully reaching the clinic calls for a reassessment of current analgesic drug discovery approaches. Many points early in the discovery process present significant hurdles, making it critical to exploit advances in pain neurobiology to increase the probability of success. In this review, we highlight approaches that are being pursued vigorously by the pain community for drug discovery, including innovative preclinical pain models, insights from genetics, mechanistic phenotyping of pain patients, development of biomarkers, and emerging insights into chronic pain as a disorder of both the periphery and the brain. Collaborative efforts between pharmaceutical, academic, and public entities to advance research in these areas promise to de-risk potential targets, stimulate investment, and speed evaluation and development of better pain therapies. PMID:25122640
Li, Wenxin; Li, Xiao; De Clercq, Erik; Zhan, Peng; Liu, Xinyong
2015-09-18
The poor pharmacokinetics, side effects and particularly the rapid emergence of drug resistance compromise the efficiency of the clinically used anti-HIV drugs. Therefore, the discovery of novel and effective NNRTIs remains a pressing priority. The arylthioacetanilide family is a class of highly active HIV-1 NNRTIs against wild-type (WT) HIV-1 and a wide range of drug-resistant mutant strains. Especially, VRX-480773 and RDEA806 have been chosen as candidates for further clinical studies. In this article, we review the discovery and development of the arylthioacetanilides, and pay particular attention to the structural modifications, SAR conclusions and molecular modeling. Moreover, several medicinal chemistry strategies to overcome drug resistance involved in the optimization process of arylthioacetanilides are highlighted, providing valuable clues for further investigations. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Rodenhizer, Darren; Dean, Teresa; D'Arcangelo, Elisa; McGuigan, Alison P
2018-04-01
Cancer prognosis remains a lottery dependent on cancer type, disease stage at diagnosis, and personal genetics. While investment in research is at an all-time high, new drugs are more likely to fail in clinical trials today than in the 1970s. In this review, a summary of current survival statistics in North America is provided, followed by an overview of the modern drug discovery process, classes of models used throughout different stages, and challenges associated with drug development efficiency are highlighted. Then, an overview of the cancer hallmarks that drive clinical progression is provided, and the range of available clinical therapies within the context of these hallmarks is categorized. Specifically, it is found that historically, the development of therapies is limited to a subset of possible targets. This provides evidence for the opportunities offered by novel disease-relevant in vitro models that enable identification of novel targets that facilitate interactions between the tumor cells and their surrounding microenvironment. Next, an overview of the models currently reported in literature is provided, and the cancer biology they have been used to explore is highlighted. Finally, four priority areas are suggested for the field to accelerate adoption of in vitro tumour models for cancer drug discovery. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Overview of artificial neural networks.
Zou, Jinming; Han, Yi; So, Sung-Sau
2008-01-01
The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANN. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter.
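The chapter's central point, that an ANN can model nonlinear relationships a regression line cannot, is conventionally shown with the XOR problem. The sketch below is a generic two-layer network trained by backpropagation with numpy; the architecture and hyperparameters are illustrative choices, not taken from the chapter.

```python
import numpy as np

# XOR: the textbook task a linear model cannot fit but a small ANN can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1.0, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1.0, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):             # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)       # hidden layer, nonlinear activation
    out = sigmoid(h @ W2 + b2)     # output layer
    # Backpropagation of mean-squared-error gradients.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.sum(0) / len(X)
    W1 -= lr * X.T @ d_h / len(X);  b1 -= lr * d_h.sum(0) / len(X)

# Final forward pass with the trained weights.
h = np.tanh(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
mse = float(np.mean((out - y) ** 2))
print(mse)
```

The nonlinear hidden layer is what makes this work: drop the tanh and the network collapses to a linear map, which cannot separate XOR.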
Ioset, Jean-Robert; Chang, Shing
2011-09-01
The Drugs for Neglected Diseases initiative (DNDi) is a patients' needs-driven organization committed to the development of new treatments for neglected diseases. Created in 2003, DNDi has delivered four improved treatments for malaria, sleeping sickness and visceral leishmaniasis. A main DNDi challenge is to build a solid R&D portfolio for neglected diseases and to deliver preclinical candidates in a timely manner using an original model based on partnership. To address this challenge DNDi has remodeled its discovery activities from a project-based academic-bound network to a fully integrated process-oriented platform in close collaboration with pharmaceutical companies. This discovery platform relies on dedicated screening capacity and lead-optimization consortia supported by a pragmatic, structured and pharmaceutical-focused compound sourcing strategy.
Concept Formation in Scientific Knowledge Discovery from a Constructivist View
NASA Astrophysics Data System (ADS)
Peng, Wei; Gero, John S.
The central goal of scientific knowledge discovery is to learn cause-effect relationships among natural phenomena, presented as variables, and the consequences of their interactions. Scientific knowledge is normally expressed as scientific taxonomies and qualitative and quantitative laws [1]. This type of knowledge represents intrinsic regularities of the observed phenomena that can be used to explain and predict the phenomena's behaviors. It is a generalization that is abstracted and externalized from a set of contexts and applicable to a broader scope. Scientific knowledge is a type of third-person knowledge, i.e., knowledge that is independent of a specific enquirer. Artificial intelligence approaches, particularly data mining algorithms that are used to identify meaningful patterns from large data sets, aim to facilitate the knowledge discovery process [2]. A broad spectrum of algorithms has been developed to address classification, associative learning, and clustering problems. However, their linkages to the people who use them have not been adequately explored. Issues relating to supporting the interpretation of patterns, applying prior knowledge to the data mining process, and addressing user interactions remain challenges for building knowledge discovery tools [3]. As a consequence, scientists rely on their experience to formulate problems, evaluate hypotheses, reason about untraceable factors and derive new problems. This type of knowledge, which they have developed during their careers, is called “first-person” knowledge. The formation of scientific knowledge (third-person knowledge) is highly influenced by the enquirer’s first-person knowledge construct, which is a result of his or her interactions with the environment. There have been attempts to craft automatic knowledge discovery tools, but these systems are limited in their capabilities to handle the dynamics of personal experience.
There are now trends in developing approaches to assist scientists in applying their expertise to model formation, simulation, and prediction in various domains [4], [5]. On the other hand, first-person knowledge becomes third-person theory only if it is shown to be general by evidence and is acknowledged by a scientific community. Researchers have started to focus on building interactive cooperation platforms [1] to accommodate different views in the knowledge discovery process. There are some fundamental questions in relation to scientific knowledge development. What are the major components of knowledge construction, and how do people construct their knowledge? How is this personal construct assimilated and accommodated into a scientific paradigm? How can one design a computational system to facilitate these processes? This chapter does not attempt to answer all these questions but serves as a basis to foster thinking along these lines. A brief literature review of how people develop their knowledge is carried out through a constructivist view. A hydrological modeling scenario is presented to elucidate the approach.
In search of elementary spin 0 particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasny, Mieczyslaw Witold, E-mail: krasny@lpnhep.in2p3.fr; Płaczek, Wiesław
2015-01-15
The Standard Model of strong and electroweak interactions uses point-like spin 1/2 particles as the building bricks of matter and point-like spin 1 particles as the force carriers. One of the most important questions to be answered by present and future particle physics experiments is whether elementary spin 0 particles exist and, if they do, what their interactions with the spin 1/2 and spin 1 particles are. Spin 0 particles have been searched for extensively over the last decades. Several initial claims of their discovery were ultimately disproved in the final experimental scrutiny process. The recent observation of the excess of events at the LHC in final states involving a pair of vector bosons, or photons, is commonly interpreted as the discovery of the first elementary scalar particle, the Higgs boson. In this paper we recall examples of claims and subsequent disillusions in previous searches for spin 0 particles. We address the question of whether the LHC Higgs discovery can already be taken for granted or whether, as has proved important in the past, it requires further experimental scrutiny before the existence of the first ever found elementary scalar particle is proven beyond any doubt. An example of the Double Drell–Yan process for which such scrutiny is indispensable is discussed in some detail. - Highlights: • We present a short history of searches for spin 0 particles. • We construct a model of the Double Drell–Yan Process (DDYP) at the LHC. • We investigate the contribution of the DDYP to the Higgs searches background.
Exploring the Role of Receptor Flexibility in Structure-Based Drug Discovery
Feixas, Ferran; Lindert, Steffen; Sinko, William; McCammon, J. Andrew
2015-01-01
The proper understanding of biomolecular recognition mechanisms that take place in a drug target is of paramount importance for improving the efficiency of drug discovery and development. The intrinsic dynamic character of proteins has a strong influence on biomolecular recognition mechanisms, and models such as conformational selection have been widely used to account for this dynamic association process. However, the conformational changes occurring in the receptor prior to and upon association with other molecules are diverse and not obvious to predict when only a few structures of the receptor are available. In view of the prominent role of protein flexibility in ligand binding and its implications for drug discovery, it is of great interest to identify receptor conformations that play a major role in biomolecular recognition before starting rational drug design efforts. In this review, we discuss a number of recent advances in computer-aided drug discovery techniques that have been proposed to incorporate receptor flexibility into structure-based drug design. The allowance for receptor flexibility provided by computational techniques such as molecular dynamics simulations or enhanced sampling techniques helps to improve the accuracy of methods used to estimate binding affinities and, thus, such methods can contribute to the discovery of novel drug leads. PMID:24332165
2003-12-09
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, KSC employee Gene Peavler works in the wheel area on the orbiter Discovery. The vehicle has undergone Orbiter Major Modifications in the past year. Discovery is scheduled to fly on mission STS-121 to the International Space Station.
Firming-Up Core: A Collaborative Approach.
ERIC Educational Resources Information Center
McInnis, Bernadette
The Collaborative Probing Model (CPM) is a heuristic approach to writing across the disciplines that stresses discovery, process, and assessment. Faculty input will help the English department design an oral and written communication block that will be unified by a series of interdisciplinary videotaped presentations. CPM also uses flow charting…
Inferring Action Structure and Causal Relationships in Continuous Sequences of Human Action
2014-01-01
language processing literature (e.g., Brent, 1999; Venkataraman, 2001), and which were also used by Goldwater et al. (2009). Precision (P) is the...trees in oriented linear graphs. Simon Stevin: Wis-en Natuurkundig Tijdschrift, 28, 203. Venkataraman, A. (2001). A statistical model for word discovery
The next generation of training for Arabidopsis researchers: bioinformatics and quantitative biology
USDA-ARS?s Scientific Manuscript database
It has been more than 50 years since Arabidopsis (Arabidopsis thaliana) was first introduced as a model organism to understand basic processes in plant biology. A well-organized scientific community has used this small reference plant species to make numerous fundamental plant biology discoveries (P...
Genome editing: progress and challenges for medical applications.
Carroll, Dana
2016-11-15
The development of the CRISPR-Cas platform for genome editing has greatly simplified the process of making targeted genetic modifications. Applications of genome editing are expected to have a substantial impact on human therapies through the development of better animal models, new target discovery, and direct therapeutic intervention.
NASA Astrophysics Data System (ADS)
Yerizon, Y.; Putra, A. A.; Subhan, M.
2018-04-01
Students have low mathematical ability because they are accustomed to learning by listening to the teacher's explanation. To address this, students are given activities to sharpen their abilities in mathematics. One way to do this is to create discovery-learning-based worksheets. The development of these worksheets took into account specific student learning styles, including in schools that have classified students based on multiple intelligences. The dominant learning styles in the classroom were intrapersonal and interpersonal. The purpose of this study was to discover students’ responses to junior high school mathematics worksheets with a discovery learning approach suitable for students with intrapersonal and interpersonal intelligence. These worksheets were developed using a development model adapted from the Plomp model. The development process consists of 3 phases: front-end analysis/preliminary research, development/prototype phase, and assessment phase. The results show that students responded well to the resulting worksheets. The worksheets were understood well by students and helped them understand the concepts learned.
2011-03-31
CAPE CANAVERAL, Fla. - A panoramic photo shows space shuttle Discovery during the main engine removal phase in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. NASA/Frankie Martin
Semantic Entity Pairing for Improved Data Validation and Discovery
NASA Astrophysics Data System (ADS)
Shepherd, Adam; Chandler, Cyndy; Arko, Robert; Chen, Yanning; Krisnadhi, Adila; Hitzler, Pascal; Narock, Tom; Groman, Robert; Rauch, Shannon
2014-05-01
One of the central incentives for linked data implementations is the opportunity to leverage the rich logic inherent in structured data. The logic embedded in semantic models can strengthen capabilities for data discovery and data validation when pairing entities from distinct, contextually-related datasets. The creation of links between the two datasets broadens data discovery by using the semantic logic to help machines compare similar entities and properties that exist on different levels of granularity. This semantic capability enables appropriate entity pairing without making inaccurate assertions as to the nature of the relationship. Entity pairing also provides a context in which to accurately validate the correctness of an entity's property values - an exercise highly valued by data management practitioners who seek to ensure the quality and correctness of their data. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) semantically models metadata surrounding oceanographic research cruises, but other sources outside of BCO-DMO exist that also model metadata about these same cruises. For BCO-DMO, the process of successfully pairing its entities to these sources begins by selecting sources that are decidedly trustworthy and authoritative for the modeled concepts. In this case, the Rolling Deck to Repository (R2R) program has a well-respected reputation among the oceanographic research community, presents a data context that is uniquely different and valuable, and semantically models its cruise metadata. Where BCO-DMO exposes the processed, analyzed data products generated by researchers, R2R exposes the raw shipboard data that was collected on the same research cruises. Interlinking these cruise entities not only expands data discovery capabilities but also allows for validating the contextual correctness of both BCO-DMO's and R2R's cruise metadata.
Assessing the potential for a link between two datasets for a similar entity consists of aligning like properties and deciding on the appropriate semantic markup to describe the link. This highlights the desire of research organizations like BCO-DMO and R2R to ensure the complete accuracy of their exposed metadata, as it directly reflects on their reputations as successful and trustworthy sources of research data. Therefore, data validation reaches beyond the simple syntax of property values into contextual correctness. As a human process, this is a time-intensive task that does not scale well for finite human and funding resources. Therefore, to assess contextual correctness across datasets at different levels of granularity, BCO-DMO is developing a system that employs semantic technologies to aid the human process by organizing potential links and calculating a confidence coefficient as to the correctness of a potential pairing, based on the distance between certain entity property values. The system allows humans to quickly scan potential links and their confidence coefficients to assert persistence and to correct and investigate misaligned entity property values.
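The confidence-coefficient idea can be sketched as follows. The record fields, the choice of per-property scores, and the equal-weight average are hypothetical illustrations, not BCO-DMO's or R2R's actual schema or algorithm:

```python
import math
from datetime import date

# Hedged sketch: score a candidate link between two cruise records from
# different repositories. Each property comparison maps to [0, 1]; the
# confidence coefficient is their (equally weighted) mean. All field
# names and weights are invented for illustration.
def confidence(rec_a, rec_b):
    # Exact match on the cruise identifier is the strongest signal.
    id_score = 1.0 if rec_a["cruise_id"] == rec_b["cruise_id"] else 0.0
    # Start dates close together score near 1, decaying with the gap in days.
    gap_days = abs((rec_a["start_date"] - rec_b["start_date"]).days)
    date_score = math.exp(-gap_days / 30.0)
    # Ship name: crude case-insensitive comparison.
    ship_score = 1.0 if rec_a["ship"].lower() == rec_b["ship"].lower() else 0.0
    return (id_score + date_score + ship_score) / 3.0

# Two records describing the same (hypothetical) cruise, with a one-day
# date discrepancy and a capitalization difference in the ship name.
a = {"cruise_id": "AT26-13", "start_date": date(2014, 3, 9), "ship": "Atlantis"}
b = {"cruise_id": "AT26-13", "start_date": date(2014, 3, 10), "ship": "atlantis"}
print(round(confidence(a, b), 3))
```

A curator scanning candidate pairs would then review high-coefficient links first and inspect low-coefficient ones for misaligned property values, which is the human-in-the-loop workflow the record describes.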
Target assessment for antiparasitic drug discovery
Frearson, Julie A.; Wyatt, Paul G.; Gilbert, Ian H.; Fairlamb, Alan H.
2010-01-01
Drug discovery is a high-risk, expensive and lengthy process taking at least 12 years and costing upwards of US$500 million per drug to reach the clinic. For neglected diseases, the drug discovery process is driven by medical need and guided by pre-defined target product profiles. Assessment and prioritisation of the most promising targets for entry into screening programmes is crucial for maximising chances of success. Here we describe criteria used in our drug discovery unit for target assessment and introduce the ‘traffic light’ system as a prioritisation and management tool. We hope this brief review will stimulate basic scientists to acquire additional information necessary for drug discovery. PMID:17962072
Connecting mirror neurons and forward models.
Miall, R C
2003-12-02
Two recent developments in motor neuroscience promise to extend theoretical concepts from motor control towards cognitive processes, including human social interactions and understanding the intentions of others. The first of these is the discovery of what are now called mirror neurons, which code for both observed and executed actions. The second is the concept of internal models, and in particular recent proposals that forward and inverse models operate in paired modules. These two ideas will be briefly introduced, and a recent suggestion linking the two processes of mirroring and modelling will be described, which may underlie our abilities for imitating actions, for cooperation between two actors, and possibly for communication via gesture and language.
NASA Astrophysics Data System (ADS)
Oses, Corey; Isayev, Olexandr; Toher, Cormac; Curtarolo, Stefano; Tropsha, Alexander
Historically, materials discovery has been driven by a laborious trial-and-error process. The growth of materials databases and emerging informatics approaches finally offer the opportunity to transform this practice into data- and knowledge-driven rational design, accelerating the discovery of novel materials exhibiting desired properties. Using data from the AFLOW repository for high-throughput ab-initio calculations, we have generated Quantitative Materials Structure-Property Relationship (QMSPR) models to predict critical materials properties, including the metal/insulator classification, band gap energy, and bulk modulus. The prediction accuracy obtained with these QMSPR models approaches that of the training data for virtually any stoichiometric inorganic crystalline material. We attribute the success and universality of these models to the construction of new materials descriptors, referred to as universal Property-Labeled Material Fragments (PLMF). This representation affords straightforward model interpretation in terms of simple heuristic design rules that could guide rational materials design. This proof-of-concept study demonstrates the power of materials informatics to dramatically accelerate the search for new materials.
Koon, Alex C.; Chan, Ho Yin Edwin
2017-01-01
For nearly a century, the fruit fly, Drosophila melanogaster, has proven to be a valuable tool in our understanding of fundamental biological processes, and has empowered our discoveries, particularly in the field of neuroscience. In recent years, Drosophila has emerged as a model organism for human neurodegenerative and neuromuscular disorders. In this review, we highlight a number of recent studies that utilized the Drosophila model to study repeat-expansion associated diseases (READs), such as polyglutamine diseases, fragile X-associated tremor/ataxia syndrome (FXTAS), myotonic dystrophy type 1 (DM1) and type 2 (DM2), and C9ORF72-associated amyotrophic lateral sclerosis/frontotemporal dementia (C9-ALS/FTD). Discoveries regarding the possible mechanisms of RNA toxicity will be focused here. These studies demonstrate Drosophila as an excellent in vivo model system that can reveal novel mechanistic insights into human disorders, providing the foundation for translational research and therapeutic development. PMID:28377694
Mathematical modeling for novel cancer drug discovery and development.
Zhang, Ping; Brusic, Vladimir
2014-10-01
Mathematical modeling enables the in silico classification of cancers, the prediction of disease outcomes, the optimization of therapy, the identification of promising drug targets, and the prediction of resistance to anticancer drugs. In silico pre-screened drug targets can be validated by a small number of carefully selected experiments. This review discusses the basics of mathematical modeling in cancer drug discovery and development. The topics include the in silico discovery of novel molecular drug targets, optimization of immunotherapies, personalized medicine, and guiding preclinical and clinical trials. Breast cancer is used to demonstrate the applications of mathematical modeling in cancer diagnostics, the identification of high-risk populations, cancer screening strategies, prediction of tumor growth, and guiding cancer treatment. Mathematical models are key components of the toolkit used in the fight against cancer. The combinatorial complexity of new drug discovery is enormous, making systematic drug discovery by experimentation alone difficult, if not impossible. The biggest challenges include the seamless integration of growing data, information and knowledge, and making them available for a multiplicity of analyses. Mathematical models are essential for bringing cancer drug discovery into the era of Omics, Big Data and personalized medicine.
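As a generic, hedged illustration of the kind of model this class of work builds on (a textbook logistic growth law, not any specific model from the review), tumor volume can be predicted by numerically integrating dV/dt = rV(1 - V/K):

```python
# Hedged sketch: logistic tumor growth, the simplest standard growth law.
# The rate r and carrying capacity K below are invented for illustration.
def logistic_growth(v0, r, K, days, dt=0.1):
    """Integrate dV/dt = r * V * (1 - V/K) with the forward Euler method."""
    v = v0
    for _ in range(int(days / dt)):
        v += dt * r * v * (1 - v / K)
    return v

# A small tumor (1 mm^3) with growth rate 0.2/day and capacity 1000 mm^3:
# growth is near-exponential at first, then saturates at K.
v30 = logistic_growth(1.0, 0.2, 1000.0, 30)
v120 = logistic_growth(1.0, 0.2, 1000.0, 120)
print(round(v30, 1), round(v120, 1))
```

Fitting such a model's parameters to early measurements and extrapolating is one simple form of the tumor-growth prediction the abstract mentions; the in silico models actually used in the field are considerably richer.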
Knowledge Discovery from Biomedical Ontologies in Cross Domains.
Shen, Feichen; Lee, Yugyung
2016-01-01
In recent years, there has been an increasing demand for the sharing and integration of medical data in biomedical research. In order to improve a health care system, it is necessary to support the integration of data by facilitating semantic interoperability systems and practices. Semantic interoperability is difficult to achieve in these systems because the conceptual models underlying the datasets are not fully exploited. In this paper, we propose a semantic framework, called Medical Knowledge Discovery and Data Mining (MedKDD), that aims to build a topic hierarchy and serve the semantic interoperability between different ontologies. For this purpose, we focus fully on the discovery of semantic patterns about the association of relations in the heterogeneous information network representing different types of objects and relationships in multiple biological ontologies, and on the creation of a topic hierarchy through the analysis of the discovered patterns. These patterns are used to cluster heterogeneous information networks into a set of smaller topic graphs in a hierarchical manner and then to conduct cross domain knowledge discovery from the multiple biological ontologies. The patterns thus make a greater contribution to knowledge discovery across multiple ontologies. We have demonstrated cross domain knowledge discovery in the MedKDD framework using a case study with 9 primary biological ontologies from Bio2RDF and compared it with a cross domain query processing approach, namely SLAP. We have confirmed the effectiveness of the MedKDD framework in knowledge discovery from multiple medical ontologies. PMID:27548262
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. Photo credit: NASA/Jack Pfaller
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. Photo credit: NASA/Kim Shiflett
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Jack Pfaller
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Kim Shiflett
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Jack Pfaller
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Jack Pfaller
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Jack Pfaller
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Kim Shiflett
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Kim Shiflett
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Kim Shiflett
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Kim Shiflett
2011-03-21
CAPE CANAVERAL, Fla. - Crews in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida remove space shuttle Discovery's right-hand inner heat shield from engine No. 1. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display.Photo credit: NASA/Jack Pfaller
2011-03-21
Causal discovery and inference: concepts and recent methodological advances.
Spirtes, Peter; Zhang, Kun
This paper aims to give broad coverage of the central concepts and principles involved in automated causal inference and of emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we discuss two problems that are traditionally difficult to solve, namely causal discovery from subsampled data and causal discovery in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference.
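The two-variable identifiability result described above (the error term is independent of the cause only in the true causal direction, under linear non-Gaussian assumptions) can be illustrated with a minimal sketch. This is not the authors' method; it is a toy LiNGAM-style direction test on simulated data, using a magnitude-correlation proxy for statistical dependence:

```python
import random

def corr(a, b):
    # Pearson correlation of two equal-length lists
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def ols_residuals(x, y):
    # Residuals of y regressed on x by ordinary least squares
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return [(yi - my) - b * (xi - mx) for xi, yi in zip(x, y)]

def dependence(pred, resid):
    # OLS residuals are uncorrelated with the regressor by
    # construction, so correlate magnitudes instead: a crude
    # proxy for full statistical dependence.
    n = len(pred)
    mp, mr = sum(pred) / n, sum(resid) / n
    return abs(corr([abs(p - mp) for p in pred],
                    [abs(r - mr) for r in resid]))

# Simulate x -> y with a non-Gaussian (uniform) cause and noise
rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(20000)]
y = [xi + rng.uniform(-1, 1) for xi in x]

forward = dependence(x, ols_residuals(x, y))   # true direction
backward = dependence(y, ols_residuals(y, x))  # reversed direction
print(forward, backward)  # forward is near 0, backward is not
```

Only in the true direction is the residual independent of the regressor, so the direction with the smaller dependence score is inferred as causal; with Gaussian noise both scores would vanish and the direction would be unidentifiable.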
A biological compression model and its applications.
Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd
2011-01-01
A biological compression model, the expert model, is presented that is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment, and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.
Comparison of Caenorhabditis elegans NLP peptides with arthropod neuropeptides.
Husson, Steven J; Lindemans, Marleen; Janssen, Tom; Schoofs, Liliane
2009-04-01
Neuropeptides are small messenger molecules that can be found in all metazoans, where they govern a diverse array of physiological processes. Because neuropeptides seem to be conserved among pest species, selected peptides can be considered as attractive targets for drug discovery. Much can be learned from the model system Caenorhabditis elegans because of the availability of a sequenced genome and state-of-the-art postgenomic technologies that enable characterization of endogenous peptides derived from neuropeptide-like protein (NLP) precursors. Here, we provide an overview of the NLP peptide family in C. elegans and discuss their resemblance with arthropod neuropeptides and their relevance for anthelmintic discovery.
Organs-on-a-chip for drug discovery.
Selimović, Seila; Dokmeci, Mehmet R; Khademhosseini, Ali
2013-10-01
The current drug discovery process is arduous and costly, and a majority of the drug candidates entering clinical trials fail to make it to the marketplace. The standard static well culture approaches, although useful, do not fully capture the intricate in vivo environment. By merging the advances in microfluidics with microfabrication technologies, novel platforms are being introduced that lead to the creation of organ functions on a single chip. Within these platforms, microengineering enables precise control over the cellular microenvironment, whereas microfluidics provides an ability to perfuse the constructs on a chip and to connect individual sections with each other. This approach results in microsystems that may better represent the in vivo environment. These organ-on-a-chip platforms can be utilized for developing disease models as well as for conducting drug testing studies. In this article, we highlight several key developments in these microscale platforms for drug discovery applications. Copyright © 2013 Elsevier Ltd. All rights reserved.
Stem cells: a model for screening, discovery and development of drugs.
Kitambi, Satish Srinivas; Chandrasekar, Gayathri
2011-01-01
The identification of normal and cancerous stem cells and the recent advances made in isolation and culture of stem cells have rapidly gained attention in the field of drug discovery and regenerative medicine. The prospect of performing screens aimed at proliferation, directed differentiation, and toxicity and efficacy studies using stem cells offers a reliable platform for the drug discovery process. Advances made in the generation of induced pluripotent stem cells from normal or diseased tissue serves as a platform to perform drug screens aimed at developing cell-based therapies against conditions like Parkinson's disease and diabetes. This review discusses the application of stem cells and cancer stem cells in drug screening and their role in complementing, reducing, and replacing animal testing. In addition to this, target identification and major advances in the field of personalized medicine using induced pluripotent cells are also discussed.
The relation between prior knowledge and students' collaborative discovery learning processes
NASA Astrophysics Data System (ADS)
Gijlers, Hannie; de Jong, Ton
2005-03-01
In this study we investigate how prior knowledge influences knowledge development during collaborative discovery learning. Fifteen dyads of students (pre-university education, 15-16 years old) worked on a discovery learning task in the physics field of kinematics. The (face-to-face) communication between students was recorded and the interaction with the environment was logged. Based on students' individual judgments of the truth-value and testability of a series of domain-specific propositions, a detailed description of the knowledge configuration for each dyad was created before they entered the learning environment. Qualitative analyses of two dialogues illustrated that prior knowledge influences the discovery learning processes and knowledge development within a pair of students. Assessments of student and dyad definitional (domain-specific) knowledge, generic (mathematical and graph) knowledge, and generic (discovery) skills were related to the students' dialogue in different discovery learning processes. Results show that a high level of definitional prior knowledge is positively related to the proportion of communication regarding the interpretation of results. Heterogeneity with respect to generic prior knowledge was positively related to the number of utterances made in the discovery process categories of hypothesis generation and experimentation. Results of the qualitative analyses indicated that collaboration between extremely heterogeneous dyads is difficult when the high achiever is not willing to scaffold information and work in the low achiever's zone of proximal development.
Radiation Detection Material Discovery Initiative at PNNL
NASA Astrophysics Data System (ADS)
Milbrath, Brian
2006-05-01
Today's security threats are being met with 30-year-old radiation technology. Discovery of new radiation detection materials is currently a slow and Edisonian process. With heightened concerns over nuclear proliferation, terrorism, and unconventional warfare, an alternative strategy for identification and development of potential radiation detection materials must be adopted. Through the Radiation Detection Materials Discovery Initiative, PNNL focuses on the science-based discovery of next-generation materials for radiation detection by addressing three "grand challenges": fundamental understanding of radiation detection, identification of new materials, and accelerating the discovery process. The new initiative has eight projects addressing these challenges, which will be described, including early work, paths forward, and opportunities for collaboration.
From Galileo's telescope to the Galileo spacecraft: our changing views of the Jupiter system
NASA Astrophysics Data System (ADS)
Lopes, R. M.
2008-12-01
In four centuries, we have gone from the discovery of the four large moons of Jupiter - Io, Europa, Ganymede, and Callisto - to important discoveries about these four very different worlds. Galileo's telescopic discovery was a major turning point in the understanding of science. His observations of the moons' motion around Jupiter challenged the notion of an Earth-centric Universe. A few months later, Galileo discovered the phases of Venus, which had been predicted by the heliocentric model of the Solar System. Galileo also observed the rings of Saturn (which he mistook for planets) and sunspots, and was the first person to report mountains and craters on the Moon, whose existence he deduced from the patterns of light and shadow on the Moon's surface, concluding that the surface was topographically rough. Centuries later, the Galileo spacecraft's discoveries challenged our understanding of outer planet satellites. Results included the discovery of an icy ocean underneath Europa's surface, the possibility of life on Europa, the widespread volcanism on Io, and the detection of a magnetic field around Ganymede. All four of these satellites revealed how the major geologic processes - volcanism, tectonism, impact cratering and erosion - operate in these different bodies, from the total lack of impact craters on Io to the heavily cratered, ancient surface of Callisto. The Galileo spacecraft's journey also took it to Venus and the Moon, making important scientific observations about these bodies. The spacecraft discovered the first moon orbiting an asteroid which, had Galileo himself observed it, would have been another major blow to the geocentric model of our Solar System.
Open Access Could Transform Drug Discovery: A Case Study of JQ1.
Arshad, Zeeshaan; Smith, James; Roberts, Mackenna; Lee, Wen Hwa; Davies, Ben; Bure, Kim; Hollander, Georg A; Dopson, Sue; Bountra, Chas; Brindley, David
2016-01-01
The cost to develop a new drug from target discovery to market is a staggering $1.8 billion, largely due to the very high attrition rate of drug candidates and the lengthy transition times during development. Open access is an emerging model of open innovation that places no restriction on the use of information and has the potential to accelerate the development of new drugs. To date, no quantitative assessment has taken place to determine the effects and viability of open access on the process of drug translation. This need is addressed within this study. The literature and intellectual property landscapes of the drug candidate JQ1, which was made available on an open access basis when discovered, and of conventionally developed equivalents that were not, are compared using the Web of Science and Thomson Innovation software, respectively. Results demonstrate that openly sharing the JQ1 molecule led to greater uptake by a wider and more multi-disciplinary research community. A comparative analysis of the patent landscapes for each candidate also found that the broader scientific diaspora of the publicly released JQ1 data enhanced innovation, evidenced by a greater number of downstream patents filed in relation to JQ1. The authors' findings counter the notion that open access drug discovery would leak commercial intellectual property. On the contrary, JQ1 serves as a test case to evidence that open access drug discovery can be an economic model that potentially improves the efficiency and cost of drug discovery and its subsequent commercialization.
Anatomy of the Crowd4Discovery crowdfunding campaign.
Perlstein, Ethan O
2013-01-01
Crowdfunding allows the public to fund creative projects, including curiosity-driven scientific research. Last Fall, I was part of a team that raised $25,460 from an international coalition of "micropatrons" for an open, pharmacological research project called Crowd4Discovery. The goal of Crowd4Discovery is to determine the precise location of amphetamines inside mouse brain cells, and we are sharing the results of this project on the Internet as they trickle in. In this commentary, I will describe the genesis of Crowd4Discovery, our motivations for crowdfunding, an analysis of our fundraising data, and the nuts and bolts of running a crowdfunding campaign. Science crowdfunding is in its infancy but has already been successfully used by an array of scientists in academia and in the private sector as both a supplement and a substitute to grants. With traditional government sources of funding for basic scientific research contracting, an alternative model that couples fundraising and outreach - and in the process encourages more openness and accountability - may be increasingly attractive to researchers seeking to diversify their funding streams.
ERIC Educational Resources Information Center
Birnbaum, Mark J.; Picco, Jenna; Clements, Meghan; Witwicka, Hanna; Yang, Meiheng; Hoey, Margaret T.; Odgren, Paul R.
2010-01-01
A key goal of molecular/cell biology/biotechnology is to identify essential genes in virtually every physiological process to uncover basic mechanisms of cell function and to establish potential targets of drug therapy combating human disease. This article describes a semester-long, project-oriented molecular/cellular/biotechnology laboratory…
Effectiveness of discovery learning model on mathematical problem solving
NASA Astrophysics Data System (ADS)
Herdiana, Yunita; Wahyudin, Sispiyati, Ririn
2017-08-01
This research aims to describe the effectiveness of the discovery learning model for mathematical problem solving, investigating students' problem-solving competency before and after learning with the discovery learning model. The population was grade VII students at a junior high school in West Bandung Regency. From nine classes, class VII B was randomly selected as the experiment class and class VII C as the control class, each consisting of 35 students. The method was a quasi-experiment. The instruments were a pre-test, worksheets, and a post-test on mathematical problem solving. Based on the research, it can be concluded that the problem-solving competency of students taught with the discovery learning model reached the 80% level, falling in the medium category, which shows that the discovery learning model is effective in improving mathematical problem solving.
Lognormal field size distributions as a consequence of economic truncation
Attanasi, E.D.; Drew, L.J.
1985-01-01
The assumption of lognormal (parent) field size distributions has long been applied to resource appraisal and the evaluation of exploration strategy by the petroleum industry. However, the frequency distributions estimated with observed data and used to justify this hypothesis are conditional. Examination of various observed field size distributions across basins and over time shows that such distributions should be regarded as the end result of an economic filtering process. Commercial discoveries depend on oil and gas prices and field development costs. Some new fields are eliminated due to location, depth, or water depth. This filtering process is called economic truncation. Economic truncation may occur when predictions of a discovery process are passed through an economic appraisal model. We demonstrate that (1) economic resource appraisals, (2) forecasts of levels of petroleum industry activity, and (3) expected benefits of developing and implementing cost-reducing technology are sensitive to assumptions made about the nature of the portion of the (parent) field size distribution subject to economic truncation. © 1985 Plenum Publishing Corporation.
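The economic-truncation effect is easy to reproduce numerically. Below is a minimal sketch assuming a lognormal parent population and a single minimum-commercial-size threshold; real filters, as the abstract notes, also depend on prices, costs, depth, and location, and the parameter values here are purely illustrative:

```python
import random

def simulate_truncation(n_fields=20000, mu=2.0, sigma=1.5,
                        min_economic_size=5.0, seed=42):
    # Draw a lognormal "parent" population of field sizes, then
    # apply an economic filter: only fields above a minimum
    # commercial size are observed as discoveries.
    rng = random.Random(seed)
    parent = [rng.lognormvariate(mu, sigma) for _ in range(n_fields)]
    observed = [s for s in parent if s >= min_economic_size]
    return parent, observed

parent, observed = simulate_truncation()
mean_parent = sum(parent) / len(parent)
mean_observed = sum(observed) / len(observed)
print(mean_parent, mean_observed, len(observed) / len(parent))
```

The observed distribution has a higher mean and fewer fields than the parent, which is exactly why fitting a lognormal to the observed (conditional) data misstates the parent population.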
From bench to patient: model systems in drug discovery
Breyer, Matthew D.; Look, A. Thomas; Cifra, Alessandra
2015-01-01
Model systems, including laboratory animals, microorganisms, and cell- and tissue-based systems, are central to the discovery and development of new and better drugs for the treatment of human disease. In this issue, Disease Models & Mechanisms launches a Special Collection that illustrates the contribution of model systems to drug discovery and optimisation across multiple disease areas. This collection includes reviews, Editorials, interviews with leading scientists with a foot in both academia and industry, and original research articles reporting new and important insights into disease therapeutics. This Editorial provides a summary of the collection's current contents, highlighting the impact of multiple model systems in moving new discoveries from the laboratory bench to the patients' bedsides. PMID:26438689
Anomalous single production of the fourth generation quarks at the CERN LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciftci, R.
Possible anomalous single production of fourth standard-model-generation up- and down-type quarks at the CERN Large Hadron Collider is studied. Namely, pp → u4(d4)X with the subsequent u4 → bW+ process followed by the leptonic decay of the W boson, and the d4 → bγ decay channel (and its H.c.), are considered. Signatures of these processes and the corresponding standard model backgrounds are discussed in detail. Discovery limits for the quark mass and achievable values of the anomalous coupling strength are determined.
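In the crudest approximation, a discovery limit in such a counting analysis reduces to requiring an expected significance S/√B ≥ 5. The sketch below uses purely hypothetical cross sections; the paper's actual limits come from full signal-and-background simulation:

```python
import math

def significance(sigma_signal_fb, sigma_background_fb, luminosity_fb):
    # Simple counting-experiment estimate: significance = S / sqrt(B),
    # with S and B the expected signal and background event counts
    # at a given integrated luminosity (fb^-1).
    s = sigma_signal_fb * luminosity_fb
    b = sigma_background_fb * luminosity_fb
    return s / math.sqrt(b)

def luminosity_for_discovery(sigma_signal_fb, sigma_background_fb,
                             target=5.0):
    # Invert S / sqrt(B) = target for the integrated luminosity.
    return target ** 2 * sigma_background_fb / sigma_signal_fb ** 2

# Hypothetical cross sections in fb, for illustration only
lum = luminosity_for_discovery(10.0, 400.0)
sig = significance(10.0, 400.0, lum)
print(lum, sig)
```

More careful analyses replace S/√B with a profile-likelihood significance and fold in systematic uncertainties, but the scaling (required luminosity grows as B/S²) already follows from this toy formula.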
Gaussian processes: a method for automatic QSAR modeling of ADME properties.
Obrezanova, Olga; Csanyi, Gabor; Gola, Joelle M R; Segall, Matthew D
2007-01-01
In this article, we discuss the application of the Gaussian Process method for the prediction of absorption, distribution, metabolism, and excretion (ADME) properties. Based on a Bayesian probabilistic approach, the method is widely used in the field of machine learning but has rarely been applied in quantitative structure-activity relationship and ADME modeling. The method is suitable for modeling nonlinear relationships, does not require subjective determination of the model parameters, works for a large number of descriptors, and is inherently resistant to overtraining. The performance of Gaussian Processes compares well with, and often exceeds, that of artificial neural networks. Due to these features, the Gaussian Processes technique is eminently suitable for automatic model generation, one of the demands of modern drug discovery. Here, we describe the basic concept of the method in the context of regression problems and illustrate its application to the modeling of several ADME properties: blood-brain barrier, hERG inhibition, and aqueous solubility at pH 7.4. We also compare Gaussian Processes with other modeling techniques.
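The regression step the abstract relies on, the Gaussian Process posterior mean under a kernel, can be sketched in a few lines. This is a generic one-dimensional illustration, not the authors' ADME models; the RBF kernel, length scale, noise level, and data are all assumptions:

```python
import math

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) covariance between two scalar inputs
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting for A x = b
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6, length=1.0):
    # Posterior mean of a zero-mean GP with RBF kernel:
    #   m(x*) = k(x*, X) (K + noise*I)^-1 y
    K = [[rbf(xi, xj, length) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(x_star, xi, length) for a, xi in zip(alpha, xs))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.sin(x) for x in xs]
err = abs(gp_predict(xs, ys, 1.0) - math.sin(1.0))
print(gp_predict(xs, ys, 1.0), err)  # near-interpolation at a training point
```

In QSAR practice the inputs are high-dimensional descriptor vectors and the kernel hyperparameters are set by maximizing the marginal likelihood, which is what makes the approach suitable for automatic model generation.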
ERIC Educational Resources Information Center
Tompo, Basman; Ahmad, Arifin; Muris, Muris
2016-01-01
The main objective of this research was to develop a discovery inquiry (DI) learning model that is valid, practical, and effective for reducing the misconceptions of secondary school science students. This research was an R&D (research and development) study. The trials of the discovery inquiry (DI) learning model were carried out in two different…
Public-Private Partnerships in Lead Discovery: Overview and Case Studies.
Gottwald, Matthias; Becker, Andreas; Bahr, Inke; Mueller-Fahrnow, Anke
2016-09-01
The pharmaceutical industry is faced with significant challenges in its efforts to discover new drugs that address unmet medical needs. Safety concerns and lack of efficacy are the two main technical reasons for attrition. Improved early research tools including predictive in silico, in vitro, and in vivo models, as well as a deeper understanding of the disease biology, therefore have the potential to improve success rates. The combination of internal activities with external collaborations in line with the interests and needs of all partners is a successful approach to foster innovation and to meet the challenges. Collaboration can take place in different ways, depending on the requirements of the participants. In this review, the value of public-private partnership approaches will be discussed, using examples from the Innovative Medicines Initiative (IMI). These examples describe consortia approaches to develop tools and processes for improving target identification and validation, as well as lead identification and optimization. The project "Kinetics for Drug Discovery" (K4DD), focusing on the adoption of drug-target binding kinetics analysis in the drug discovery decision-making process, is described in more detail. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The space-time structure of oil and gas field growth in a complex depositional system
Drew, L.J.; Mast, R.F.; Schuenemeyer, J.H.
1994-01-01
Shortly after the discovery of an oil and gas field, an initial estimate is usually made of the ultimate recovery of the field. With the passage of time, this initial estimate is almost always revised upward. The phenomenon of the growth of the expected ultimate recovery of a field, which is known as "field growth," is important to resource assessment analysts for several reasons. First, field growth is the source of a large part of future additions to the inventory of proved reserves of crude oil and natural gas in most petroliferous areas of the world. Second, field growth introduces a large negative bias in the forecast of the future rates of discovery of oil and gas fields made by discovery process models. In this study, the growth in estimated ultimate recovery of oil and gas in fields made up of sandstone reservoirs formed in a complex depositional environment (Frio strand plain exploration play) is examined. The results presented here show how the growth of oil and gas fields is tied directly to the architectural element of the shoreline processes and tectonics that caused the deposition of the individual sand bodies hosting the producible hydrocarbon. © 1994 Oxford University Press.
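The discovery process models mentioned here (and in the head abstract) typically assume size-biased sampling without replacement from the parent population. A minimal sketch, with the exponent beta and the lognormal parent as illustrative assumptions, shows the "creaming" behavior such models encode: larger fields tend to be found earlier in the sequence, which is why unrecognized field growth biases their forecasts.

```python
import random

def simulate_discovery_sequence(sizes, beta=1.0, seed=7):
    # Successive discoveries drawn without replacement with
    # probability proportional to (field size)**beta; beta = 1 is
    # classic size-biased ("creaming") sampling, beta = 0 reduces
    # to purely random drilling.
    rng = random.Random(seed)
    pool = list(sizes)
    sequence = []
    while pool:
        weights = [s ** beta for s in pool]
        pick = rng.choices(range(len(pool)), weights=weights)[0]
        sequence.append(pool.pop(pick))
    return sequence

# Illustrative lognormal parent population of 300 fields
rng = random.Random(1)
parent = [rng.lognormvariate(2.0, 1.2) for _ in range(300)]
seq = simulate_discovery_sequence(parent, beta=1.0)

first_half = sum(seq[:150]) / 150
second_half = sum(seq[150:]) / 150
print(first_half, second_half)  # early discoveries average larger
```

If each simulated field's size were then revised upward over time, as field growth implies, a model fit to the unrevised early estimates would understate the sizes remaining to be found, reproducing the negative forecast bias the abstract describes.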
Iversen, Patrick L.; Warren, Travis K.; Wells, Jay B.; Garza, Nicole L.; Mourich, Dan V.; Welch, Lisa S.; Panchal, Rekha G.; Bavari, Sina
2012-01-01
There are no currently approved treatments for filovirus infections. In this study we report the discovery process that led to the development of the antisense Phosphorodiamidate Morpholino Oligomers (PMOs) AVI-6002 (composed of AVI-7357 and AVI-7539) and AVI-6003 (composed of AVI-7287 and AVI-7288), targeting Ebola virus and Marburg virus, respectively. The discovery process involved identification of optimal transcript binding sites for PMO-based RNA therapeutics, followed by screening for effective viral gene targets in mouse and guinea pig models utilizing adapted viral isolates. A series of chemical modifications was tested, beginning with simple Phosphorodiamidate Morpholino Oligomers (PMOs), transitioning to cell-penetrating-peptide-conjugated PMOs (PPMOs), and ending with PMOplus, which contains a limited number of positively charged linkages in the PMO structure. The initial lead compounds were combinations of two agents targeting separate genes. In the final analysis, a single agent for treatment of each virus was selected, AVI-7537 targeting the VP24 gene of Ebola virus and AVI-7288 targeting NP of Marburg virus; these are now progressing into late-stage clinical development as the optimal therapeutic candidates. PMID:23202506
Advances in microfluidics for drug discovery.
Lombardi, Dario; Dittrich, Petra S
2010-11-01
Microfluidics is considered an enabling technology for the development of unconventional and innovative methods in the drug discovery process. The concept of micrometer-sized reaction systems in the form of continuous flow reactors, microdroplets or microchambers is intriguing, and the versatility of the technology perfectly fits the requirements of drug synthesis, drug screening and drug testing. In this review article, we introduce key microfluidic approaches to the drug discovery process, highlighting the latest and most promising achievements in this field, mainly from the years 2007-2010. Despite high expectations of microfluidic approaches to several stages of the drug discovery process, microfluidic technology has not yet been able to significantly replace conventional drug discovery platforms. Our aim is to identify the bottlenecks that have impeded the transfer of microfluidics into routine platforms for drug discovery and to show some recent solutions to these hurdles. Although most microfluidic approaches are still applied only in proof-of-concept studies, creative microfluidic research in the past years has demonstrated unprecedented capabilities of microdevices, and generally applicable, robust and reliable microfluidic platforms seem to be within reach.
Stone, David E; Haswell, Elizabeth S; Sztul, Elizabeth
2017-01-01
In classical Cell Biology, fundamental cellular processes are revealed empirically, one experiment at a time. While this approach has been enormously fruitful, our understanding of cells is far from complete. In fact, the more we know, the more keenly we perceive our ignorance of the profoundly complex and dynamic molecular systems that underlie cell structure and function. Thus, it has become apparent to many cell biologists that experimentation alone is unlikely to yield major new paradigms, and that empiricism must be combined with theory and computational approaches to yield major new discoveries. To facilitate those discoveries, three workshops will convene annually for one day in three successive summers (2017-2019) to promote the use of computational modeling by cell biologists currently unconvinced of its utility or unsure how to apply it. The first of these workshops was held at the University of Illinois, Chicago in July 2017. Organized to facilitate interactions between traditional cell biologists and computational modelers, it provided a unique educational opportunity: a primer on how cell biologists with little or no relevant experience can incorporate computational modeling into their research. Here, we report on the workshop and describe how it addressed key issues that cell biologists face when considering modeling including: (1) Is my project appropriate for modeling? (2) What kind of data do I need to model my process? (3) How do I find a modeler to help me in integrating modeling approaches into my work? And, perhaps most importantly, (4) why should I bother?
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.
Translating three states of knowledge--discovery, invention, and innovation
2010-01-01
Background Knowledge Translation (KT) has historically focused on the proper use of knowledge in healthcare delivery. A knowledge base has been created through empirical research and resides in scholarly literature. Some knowledge is amenable to direct application by stakeholders who are engaged during or after the research process, as shown by the Knowledge to Action (KTA) model. Other knowledge requires multiple transformations before achieving utility for end users. For example, conceptual knowledge generated through science or engineering may become embodied as a technology-based invention through development methods. The invention may then be integrated within an innovative device or service through production methods. To what extent is KT relevant to these transformations? How might the KTA model accommodate these additional development and production activities while preserving the KT concepts? Discussion Stakeholders adopt and use knowledge that has perceived utility, such as a solution to a problem. Achieving a technology-based solution involves three methods that generate knowledge in three states, analogous to the three classic states of matter. Research activity generates discoveries that are intangible and highly malleable like a gas; development activity transforms discoveries into inventions that are moderately tangible yet still malleable like a liquid; and production activity transforms inventions into innovations that are tangible and immutable like a solid. The paper demonstrates how the KTA model can accommodate all three types of activity and address all three states of knowledge. Linking the three activities in one model also illustrates the importance of engaging the relevant stakeholders prior to initiating any knowledge-related activities. Summary Science and engineering focused on technology-based devices or services change the state of knowledge through three successive activities. 
Achieving knowledge implementation requires methods that accommodate these three activities and knowledge states. Accomplishing beneficial societal impacts from technology-based knowledge involves the successful progression through all three activities, and the effective communication of each successive knowledge state to the relevant stakeholders. The KTA model appears suitable for structuring and linking these processes. PMID:20205873
Salvador-Carulla, L; Lukersmith, S; Sullivan, W
2017-04-01
Guideline methods to develop recommendations dedicate most effort to organising discovery and corroboration knowledge following the evidence-based medicine (EBM) framework. Guidelines typically use a single dimension of information, and generally discard contextual evidence, formal expert knowledge, and consumers' experiences in the process. In recognition of the limitations of guidelines in complex cases, complex interventions and systems research, there has been significant effort to develop new tools, guides, resources and structures to use alongside EBM methods of guideline development. In addition to these advances, a new framework based on the philosophy of science is required. Guidelines should be defined as implementation decision support tools for improving the decision-making process in real-world practice and not only as a procedure to optimise the knowledge base of scientific discovery and corroboration. A shift from the model of the EBM pyramid of corroboration of evidence to the use of a broader multi-domain perspective, graphically depicted as a 'Greek temple', could be considered. This model takes into account the different stages of scientific knowledge (discovery, corroboration and implementation); the sources of knowledge relevant to guideline development (experimental, observational, contextual, expert-based and experiential); their underlying inference mechanisms (deduction, induction, abduction, means-end inferences); and a more precise definition of evidence and related terms. The applicability of this broader approach is presented for the development of the Canadian Consensus Guidelines for the Primary Care of People with Developmental Disabilities.
Points, Laurie J; Taylor, James Ward; Grizou, Jonathan; Donkers, Kevin; Cronin, Leroy
2018-01-30
Protocell models are used to investigate how cells might have first assembled on Earth. Some, like oil-in-water droplets, are seemingly simple models, yet able to exhibit complex and unpredictable behaviors. How such simple oil-in-water systems can come together to yield complex and life-like behaviors remains a key question. Herein, we illustrate how the combination of automated experimentation and image processing, physicochemical analysis, and machine learning allows significant advances to be made in understanding the driving forces behind oil-in-water droplet behaviors. Utilizing >7,000 experiments collected using an autonomous robotic platform, we illustrate how smart automation can not only help with exploration, optimization, and discovery of new behaviors, but can also be core to developing fundamental understanding of such systems. Using this process, we were able to relate droplet formulation to behavior via predicted physical properties, and to identify and predict more occurrences of a rare collective droplet behavior, droplet swarming. Proton NMR spectroscopic and qualitative pH methods enabled us to better understand oil dissolution, chemical change, phase transitions, and droplet and aqueous phase flows, illustrating the utility of the combination of smart automation and traditional analytical chemistry techniques. We further extended our study for the simultaneous exploration of both the oil and aqueous phases using a robotic platform. Overall, this work shows that the combination of chemistry, robotics, and artificial intelligence enables discovery, prediction, and mechanistic understanding in ways that no one approach could achieve alone.
Modeling & Informatics at Vertex Pharmaceuticals Incorporated: our philosophy for sustained impact
NASA Astrophysics Data System (ADS)
McGaughey, Georgia; Patrick Walters, W.
2017-03-01
Molecular modelers and informaticians have the unique opportunity to integrate cross-functional data using a myriad of tools, methods and visuals to generate information. Using their drug discovery expertise, they transform this information into knowledge that impacts drug discovery. These insights are oftentimes formulated locally and then applied more broadly, which influences the discovery of new medicines. This is particularly true in an organization where members are exposed to projects throughout the organization, as in the case of the global Modeling & Informatics group at Vertex Pharmaceuticals. From its inception, Vertex has been a leader in the development and use of computational methods for drug discovery. In this paper, we describe the Modeling & Informatics group at Vertex and the underlying philosophy, which has driven this team to sustain impact on the discovery of first-in-class transformative medicines.
Mechanistic systems modeling to guide drug discovery and development
Schmidt, Brian J.; Papin, Jason A.; Musante, Cynthia J.
2013-01-01
A crucial question that must be addressed in the drug development process is whether the proposed therapeutic target will yield the desired effect in the clinical population. Pharmaceutical and biotechnology companies place a large investment on research and development, long before confirmatory data are available from human trials. Basic science has greatly expanded the computable knowledge of disease processes, both through the generation of large omics data sets and a compendium of studies assessing cellular and systemic responses to physiologic and pathophysiologic stimuli. Given inherent uncertainties in drug development, mechanistic systems models can better inform target selection and the decision process for advancing compounds through preclinical and clinical research. PMID:22999913
Optimizing the discovery organization for innovation.
Sams-Dodd, Frank
2005-08-01
Strategic management is the process of adapting organizational structure and management principles to fit the strategic goal of the business unit. The pharmaceutical industry has generally been expert at optimizing its organizations for drug development, but has rarely implemented different structures for the early discovery process, where the objective is innovation and the transformation of innovation into drug projects. Here, a set of strategic management methods is proposed, covering team composition, organizational structure, management principles and portfolio management, which are designed to increase the level of innovation in the early drug discovery process.
General view of the aft fuselage of the Orbiter Discovery ...
General view of the aft fuselage of the Orbiter Discovery looking forward showing Space Shuttle Main Engines (SSMEs) installed in positions one and three and an SSME in the process of being installed in position two. This photograph was taken in the Orbiter Processing Facility at the Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
2011-03-31
CAPE CANAVERAL, Fla. - Technicians carefully remove main engine No. 3 from space shuttle Discovery using a specially designed engine installer, called a Hyster forklift. The work is taking place in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. NASA/Jim Grossmann
Code of Federal Regulations, 2010 CFR
2010-10-01
42 Public Health § 93.512 Discovery. (a) Request to provide documents. A party may only request another party to... Responses to a discovery request. Within 30 days of receiving a request for the production of documents, a...
What Does Galileo's Discovery of Jupiter's Moons Tell Us about the Process of Scientific Discovery?
ERIC Educational Resources Information Center
Lawson, Anton E.
2002-01-01
Given that hypothetico-deductive reasoning has played a role in other important scientific discoveries, asks the question whether it plays a role in all important scientific discoveries. Explores and rejects as viable alternatives possible alternative scientific methods such as Baconian induction and combinatorial analysis. Discusses the…
Small molecule compound logistics outsourcing--going beyond the "thought experiment".
Ramsay, Devon L; Kwasnoski, Joseph D; Caldwell, Gary W
2012-01-01
Increasing pressure on the pharmaceutical industry to reduce cost and focus internal resources on "high value" activities is driving a trend to outsource traditionally "in-house" drug discovery activities. Compound collections are typically viewed as drug discovery's "crown jewels"; however, in late 2007, Johnson & Johnson Pharmaceutical Research & Development (J PRD) took a bold step to move their entire North American compound inventory and processing capability to an external third-party vendor. The authors discuss the combination model implemented, that of local compound logistics site support with an outsourced centralized processing center. Some of the lessons learned over the past five years were predictable, while others were unexpected. The result was substantial cost savings, improved local service response, and a flexible platform to adjust to changing business needs. Continued sustainable success relies heavily upon maintaining internal headcount dedicated to vendor management, an open collaboration approach, and a solid information technology infrastructure with complete transparency and visibility.
From bench to patient: model systems in drug discovery.
Breyer, Matthew D; Look, A Thomas; Cifra, Alessandra
2015-10-01
Model systems, including laboratory animals, microorganisms, and cell- and tissue-based systems, are central to the discovery and development of new and better drugs for the treatment of human disease. In this issue, Disease Models & Mechanisms launches a Special Collection that illustrates the contribution of model systems to drug discovery and optimisation across multiple disease areas. This collection includes reviews, Editorials, interviews with leading scientists with a foot in both academia and industry, and original research articles reporting new and important insights into disease therapeutics. This Editorial provides a summary of the collection's current contents, highlighting the impact of multiple model systems in moving new discoveries from the laboratory bench to the patients' bedsides. © 2015. Published by The Company of Biologists Ltd.
Temple, Michael W; Lehmann, Christoph U; Fabbri, Daniel
2016-01-01
Discharging patients from the Neonatal Intensive Care Unit (NICU) can be delayed for non-medical reasons including the procurement of home medical equipment, parental education, and the need for children's services. We previously created a model to identify patients that will be medically ready for discharge in the subsequent 2-10 days. In this study we use Natural Language Processing to improve upon that model and discern why the model performed poorly on certain patients. We retrospectively examined the text of the Assessment and Plan section from daily progress notes of 4,693 patients (103,206 patient-days) from the NICU of a large, academic children's hospital. A matrix was constructed using words from NICU notes (single words and bigrams) to train a supervised machine learning algorithm to determine the most important words differentiating poorly performing patients compared to well performing patients in our original discharge prediction model. NLP using a bag of words (BOW) analysis revealed several cohorts that performed poorly in our original model. These included patients with surgical diagnoses, pulmonary hypertension, retinopathy of prematurity, and psychosocial issues. The BOW approach aided in cohort discovery and will allow further refinement of our original discharge model prediction. Adequately identifying patients discharged home on g-tube feeds alone could improve the AUC of our original model by 0.02. Additionally, this approach identified social issues as a major cause for delayed discharge. A BOW analysis provides a method to improve and refine our NICU discharge prediction model and could potentially avoid over 900 (0.9%) hospital days.
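The bag-of-words cohort-discovery step described in this abstract can be sketched in plain Python: tokenize notes into unigrams and bigrams, count them per cohort, and rank terms by smoothed log-odds of appearing in the poorly predicted cohort. This is a minimal illustration under stated assumptions; the toy notes, cohort labels, and smoothing constant below are hypothetical, not data or parameters from the study, and the study itself used a supervised learning algorithm rather than raw log-odds.

```python
import math
from collections import Counter

def tokens(text):
    """Lowercase unigrams plus adjacent-word bigrams, as in a simple BOW model."""
    words = text.lower().split()
    return words + [" ".join(pair) for pair in zip(words, words[1:])]

def discriminative_terms(poor_notes, well_notes, smooth=1.0):
    """Score each term by smoothed log-odds of appearing in the poorly
    predicted cohort versus the well predicted cohort."""
    poor, well = Counter(), Counter()
    for note in poor_notes:
        poor.update(tokens(note))
    for note in well_notes:
        well.update(tokens(note))
    vocab = set(poor) | set(well)
    n_poor = sum(poor.values()) + smooth * len(vocab)
    n_well = sum(well.values()) + smooth * len(vocab)
    return {t: math.log((poor[t] + smooth) / n_poor)
              - math.log((well[t] + smooth) / n_well)
            for t in vocab}

# Hypothetical toy notes, not text from the study's NICU progress notes.
poor_notes = ["surgical repair pending", "pulmonary hypertension monitoring",
              "g-tube feeds at home"]
well_notes = ["feeding well", "weight gain steady", "feeding well today"]
scores = discriminative_terms(poor_notes, well_notes)
ranked = sorted(scores, key=lambda t: (-scores[t], t))  # deterministic tie-break
print(ranked[:3])
```

Terms with positive scores are over-represented in the poorly predicted cohort, which is the property the study exploited to surface cohorts such as surgical and pulmonary hypertension patients.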
Hively, Lee M [Philadelphia, TN]
2011-07-12
The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.
Schuffenhauer, A; Popov, M; Schopfer, U; Acklin, P; Stanek, J; Jacoby, E
2004-12-01
This publication describes processes for the selection of chemical compounds for the building of a high-throughput screening (HTS) collection for drug discovery, using the currently implemented process in the Discovery Technologies Unit of the Novartis Institute for Biomedical Research, Basel, Switzerland as reference. More generally, the currently existing compound acquisition models and practices are discussed. Our informatics, chemistry and biology-driven compound selection consists of two steps: 1) The individual compounds are filtered and grouped into three priority classes on the basis of their individual structural properties. Substructure filters are used to eliminate or penalize compounds based on unwanted structural properties. The similarity of the structures to reference ligands of the main proven druggable target families is computed, and drug-similar compounds are prioritized for the following diversity analysis. 2) The compounds are compared to the archive compounds and a diversity analysis is performed. This is done separately for the prioritized, regular and penalized compounds with increasingly stringent dissimilarity criteria. The process includes collecting vendor catalogues and monitoring the availability of samples together with the selection and purchase decision points. The development of a corporate vendor catalogue database is described. In addition to the selection methods on a per single molecule basis, selection criteria for scaffold and combinatorial chemistry projects in collaboration with compound vendors are discussed.
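The diversity-analysis step (2) above can be sketched as a greedy dissimilarity filter against the archive. The sketch below assumes set-based structural fingerprints and Tanimoto similarity; the fingerprints, threshold value, and compound names are hypothetical illustrations, not Novartis's actual fingerprints or cutoffs.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints stored as feature sets."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def select_diverse(candidates, archive, max_sim):
    """Greedy pass: accept a candidate only if its similarity to every
    archive compound and every already-selected compound stays below max_sim."""
    selected = []
    for name, fp in candidates:
        pool = archive + [fp_sel for _, fp_sel in selected]
        if all(tanimoto(fp, other) < max_sim for other in pool):
            selected.append((name, fp))
    return [name for name, _ in selected]

# Hypothetical fingerprints as sets of structural feature IDs.
archive = [{1, 2, 3, 4}]
candidates = [
    ("cmpd-A", {1, 2, 3, 5}),      # Tanimoto 0.6 to archive -> rejected
    ("cmpd-B", {7, 8, 9}),         # dissimilar to everything -> selected
    ("cmpd-C", {7, 8, 9, 10}),     # Tanimoto 0.75 to cmpd-B -> rejected
    ("cmpd-D", {11, 12}),          # dissimilar -> selected
]
print(select_diverse(candidates, archive, max_sim=0.6))  # → ['cmpd-B', 'cmpd-D']
```

Running the same filter with progressively smaller `max_sim` values mirrors the abstract's "increasingly stringent dissimilarity" treatment of prioritized, regular, and penalized compounds.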
Translational neuropharmacology and the appropriate and effective use of animal models.
Green, A R; Gabrielsson, J; Fone, K C F
2011-10-01
This issue of the British Journal of Pharmacology is dedicated to reviews of the major animal models used in neuropharmacology to examine drugs for both neurological and psychiatric conditions. Almost all major conditions are reviewed. In general, regulatory authorities require evidence for the efficacy of novel compounds in appropriate animal models. However, the failure of many compounds in clinical trials following clear demonstration of efficacy in animal models has called into question both the value of the models and the discovery process in general. These matters are expertly reviewed in this issue and proposals for better models outlined. In this editorial, we further suggest that more attention be paid to incorporating pharmacokinetic knowledge into the studies (quantitative pharmacology). We also suggest that more attention be paid to ensuring that full methodological details are published, and recommend that journals should be more amenable to publishing negative data. Finally, we propose that new approaches must be used in drug discovery so that preclinical studies become more reflective of the clinical situation, and studies using animal models mimic the anticipated design of studies to be performed in humans, as closely as possible. © 2011 The Authors. British Journal of Pharmacology © 2011 The British Pharmacological Society.
Dwarf galaxies: a lab to investigate the neutron capture elements production
NASA Astrophysics Data System (ADS)
Cescutti, Gabriele
2018-06-01
In this contribution, I focus on the neutron capture elements observed in the spectra of old halo stars and ultra-faint galaxy stars. Adopting a stochastic chemical evolution model and the Galactic halo as a benchmark, I present new constraints on the rate and time scales of r-process events, based on the discovery of r-process-rich stars in the ultra-faint galaxy Reticulum 2. I also show that an s-process activated by rotation in massive stars can play an important role in the production of heavy elements.
Early patterns of commercial activity in graphene
NASA Astrophysics Data System (ADS)
Shapira, Philip; Youtie, Jan; Arora, Sanjay
2012-03-01
Graphene, a novel nanomaterial consisting of a single layer of carbon atoms, has attracted significant attention due to its distinctive properties, including great strength, electrical and thermal conductivity, lightness, and potential benefits for diverse applications. The commercialization of scientific discoveries such as graphene is inherently uncertain, with the lag time between the scientific development of a new technology and its adoption by corporate actors revealing the extent to which firms are able to absorb knowledge and engage in learning to implement applications based on the new technology. From this perspective, we test for the existence of three different corporate learning and activity patterns: (1) a linear process where patenting follows scientific discovery; (2) a double-boom phenomenon where corporate (patenting) activity is first concentrated in technological improvements and then followed by a period of technology productization; and (3) a concurrent model where scientific discovery in publications occurs in parallel with patenting. By analyzing corporate publication and patent activity across country and application lines, we find that, while graphene as a whole is experiencing concurrent scientific development and patenting growth, country- and application-specific trends offer some evidence of the linear and double-boom models.
Search for tZ' associated production induced by tcZ' couplings at the LHC
NASA Astrophysics Data System (ADS)
Hou, Wei-Shu; Kohda, Masaya; Modak, Tanmoy
2017-07-01
The P5' and RK anomalies, recently observed by the LHCb Collaboration in B → K(*) transitions, may indicate the existence of a new Z' boson, which may arise from gauged Lμ-Lτ symmetry. Flavor-changing neutral current Z' couplings, such as tcZ', can be induced by the presence of extra vector-like quarks. In this paper we study the LHC signatures of the induced right-handed tcZ' coupling that is inspired by, but not directly linked to, the B → K(*) anomalies. The specific processes studied are cg → tZ' and its conjugate process, each followed by Z' → μ+μ-. By constructing an effective theory for the tcZ' coupling, we first explore in a model-independent way the discovery potential of such a Z' at the 14 TeV LHC with 300 and 3000 fb-1 integrated luminosities. We then reinterpret the model-independent results within the gauged Lμ-Lτ model. In connection with tcZ', the model also implies the existence of a flavor-conserving ccZ' coupling, which can drive the cc̄ → Z' → μ+μ- process. Our study shows that existing LHC results for dimuon resonance searches already constrain the ccZ' coupling, and that the Z' can be discovered in either or both of the cg → tZ' and cc̄ → Z' processes. We further discuss the sensitivity to the left-handed tcZ' coupling and find that the coupling values favored by the B → K(*) anomalies lie slightly below the LHC discovery reach even with 3000 fb-1.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Brain Aggregates: An Effective In Vitro Cell Culture System Modeling Neurodegenerative Diseases.
Ahn, Misol; Kalume, Franck; Pitstick, Rose; Oehler, Abby; Carlson, George; DeArmond, Stephen J
2016-03-01
Drug discovery for neurodegenerative diseases is particularly challenging because of the discrepancies in drug effects between in vitro and in vivo studies. These discrepancies occur in part because current cell culture systems used for drug screening have many limitations. First, few cell culture systems accurately model human aging or neurodegenerative diseases. Second, drug efficacy may differ between dividing and stationary cells, the latter resembling nondividing neurons in the CNS. Brain aggregates (BrnAggs) derived from embryonic day 15 gestation mouse embryos may represent neuropathogenic processes in prion disease and reflect in vivo drug efficacy. Here, we report a new method for the production of BrnAggs suitable for drug screening and suggest that BrnAggs can model additional neurological diseases such as tauopathies. We also report a functional assay with BrnAggs by measuring electrophysiological activities. Our data suggest that BrnAggs could serve as an effective in vitro cell culture system for drug discovery for neurodegenerative diseases. © 2016 American Association of Neuropathologists, Inc. All rights reserved.
Modelling stock order flows with non-homogeneous intensities from high-frequency data
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.
2013-10-01
A micro-scale model is proposed for the evolution of an information system such as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
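The doubly stochastic order-flow idea can be sketched as follows: per-interval buy and sell counts are Poisson conditional on random intensities, and the cumulative excess of buys over sells gives an imbalance process. This is a minimal discretised illustration in plain Python; the gamma mixing distribution and all parameter values are assumptions for the sketch, not the paper's calibrated multiplicative intensity model.

```python
import math
import random

def cox_order_counts(n_steps, shape, rate, seed=None):
    """Per-interval buy/sell order counts from a discretised doubly
    stochastic Poisson process: each interval first draws a random
    intensity (gamma distributed), then a Poisson count given it."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm: multiply uniforms until falling below exp(-lam).
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    buys, sells = [], []
    for _ in range(n_steps):
        buys.append(poisson(rng.gammavariate(shape, 1.0 / rate)))
        sells.append(poisson(rng.gammavariate(shape, 1.0 / rate)))
    return buys, sells

buys, sells = cox_order_counts(n_steps=1000, shape=2.0, rate=0.5, seed=42)

# Imbalance process: cumulative excess of buy orders over sell orders.
imbalance, total = [], 0
for b, s in zip(buys, sells):
    total += b - s
    imbalance.append(total)
print(len(imbalance), imbalance[-1])
```

Marginally, gamma-mixed Poisson counts are negative binomial, so this sketch also shows why such order flows are overdispersed relative to a plain Poisson model.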
Discovery and process development of a novel TACE inhibitor for the topical treatment of psoriasis.
Boiteau, Jean-Guy; Ouvry, Gilles; Arlabosse, Jean-Marie; Astri, Stéphanie; Beillard, Audrey; Bhurruth-Alcor, Yushma; Bonnary, Laetitia; Bouix-Peter, Claire; Bouquet, Karine; Bourotte, Marilyne; Cardinaud, Isabelle; Comino, Catherine; Deprez, Benoît; Duvert, Denis; Féret, Angélique; Hacini-Rachinel, Feriel; Harris, Craig S; Luzy, Anne-Pascale; Mathieu, Arnaud; Millois, Corinne; Orsini, Nicolas; Pascau, Jonathan; Pinto, Artur; Piwnica, David; Polge, Gaëlle; Reitz, Arnaud; Reversé, Kevin; Rodeville, Nicolas; Rossio, Patricia; Spiesse, Delphine; Tabet, Samuel; Taquet, Nathalie; Tomas, Loïc; Vial, Emmanuel; Hennequin, Laurent F
2018-02-15
Targeting the TNFα pathway is a validated approach to the treatment of psoriasis. In this pathway, TACE stands out as a druggable target and has been the focus of in-house research programs. In this article, we present the discovery of clinical candidate 26a. Starting from hits plagued with poor solubility or genotoxicity, 26a was identified through thorough multiparameter optimisation. Showing robust in vivo activity in an oxazolone-mediated inflammation model, the compound was selected for development. Following a polymorph screen, the hydrochloride salt was selected and the synthesis was efficiently developed to yield the API in 47% overall yield. Copyright © 2017. Published by Elsevier Ltd.
Thomas, Craig E; Will, Yvonne
2012-02-01
Attrition in the drug industry due to safety findings remains high and requires a shift in the current safety testing paradigm. Many companies are now positioning safety assessment at each stage of the drug development process, including discovery, where an early perspective on potential safety issues is sought, often at chemical scaffold level, using a variety of emerging technologies. Given the lengthy development time frames of drugs in the pharmaceutical industry, the authors believe that the impact of new technologies on attrition is best measured as a function of the quality and timeliness of candidate compounds entering development. The authors provide an overview of in silico and in vitro models, as well as more complex approaches such as 'omics,' and where they are best positioned within the drug discovery process. An important takeaway is that not all technologies should be applied to all projects. Technologies vary widely in their validation state, throughput and cost. A thoughtful combination of validated and emerging technologies is crucial in identifying the most promising candidates to move to proof-of-concept testing in humans. In spite of the challenges inherent in applying new technologies to drug discovery, the successes and the recognition that we cannot continue to rely on safety assessment practices used for decades have led to rather dramatic strategy shifts and fostered partnerships across government agencies and industry. We are optimistic that these efforts will ultimately benefit patients by delivering effective and safe medications in a timely fashion.
2015 Army Science Planning and Strategy Meeting Series: Outcomes and Conclusions
2017-12-21
modeling and nanoscale characterization tools to enable efficient design of hybridized manufacturing ; realtime, multiscale computational capability...to enable predictive analytics for expeditionary on-demand manufacturing • Discovery of design principles to enable programming advanced genetic...goals, significant research is needed to mature the fundamental materials science, processing and manufacturing sciences, design methodologies, data
2015-07-31
and make the expected decision outcomes. The scenario is based around a scripted storyboard where an organized crime network is operating in a city to...interdicted by law enforcement to disrupt the network. The scenario storyboard was used to develop a probabilistic vehicle traffic model in order to
Toward an Integrative Model for CBT: Encompassing Behavior, Cognition, Affect, and Process
ERIC Educational Resources Information Center
Mischel, Walter
2004-01-01
Dramatic changes in our science in recent years have profound implications for how psychologists conceptualize, assess, and treat people. I comment on these developments and the contributions to this special series, focusing on how they speak to new directions and challenges for the future of CBT. Discoveries about mind, brain, and behavior that…
ERIC Educational Resources Information Center
Piekny, Jeanette; Maehler, Claudia
2013-01-01
According to Klahr's (2000, 2005; Klahr & Dunbar, 1988) Scientific Discovery as Dual Search model, inquiry processes require three cognitive components: hypothesis generation, experimentation, and evidence evaluation. The aim of the present study was to investigate (a) when the ability to evaluate perfect covariation, imperfect covariation,…
ERIC Educational Resources Information Center
Wiley, Emily A.; Stover, Nicholas A.
2014-01-01
Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have…
Using the Moon as a Tool for Discovery-Oriented Learning.
ERIC Educational Resources Information Center
Cummins, Robert Hays; Ritger, Scott David; Myers, Christopher Adam
1992-01-01
Students test the hypothesis that the moon revolves east to west around the earth, determine by observation approximately how many degrees the moon revolves per night, and develop a scale model of the earth-sun-moon system in this laboratory exercise. Students are actively involved in the scientific process and are introduced to the importance of…
Fragment-based drug discovery and molecular docking in drug design.
Wang, Tao; Wu, Mian-Bin; Chen, Zheng-Jie; Chen, Hua; Lin, Jian-Ping; Yang, Li-Rong
2015-01-01
Fragment-based drug discovery (FBDD) has caused a revolution in the process of drug discovery and design, with many FBDD leads being developed into clinical trials or approved in the past few years. Compared with traditional high-throughput screening, it displays obvious advantages such as efficiently covering chemical space and achieving higher hit rates. In this review, we focus on the most recent developments of FBDD for improving drug discovery, illustrating the process and the importance of FBDD. In particular, the computational strategies applied in the process of FBDD and molecular-docking programs are highlighted. In most cases, docking is used for predicting ligand-receptor interaction modes and for hit identification by structure-based virtual screening. Representative successful cases and the most recently identified hits are also discussed.
2011-04-01
CAPE CANAVERAL, Fla. - Technicians complete the removal of main engine No. 1 from space shuttle Discovery using a specially designed engine installer, called a Hyster forklift. The work is taking place in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. Photo credit: NASA/Jack Pfaller
2011-04-01
CAPE CANAVERAL, Fla. - Technicians carefully remove main engine No. 1 from space shuttle Discovery using a specially designed engine installer, called a Hyster forklift. The work is taking place in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. Photo credit: NASA/Jack Pfaller
2011-03-31
CAPE CANAVERAL, Fla. - Technicians complete the removal of main engine No. 3 from space shuttle Discovery using a specially designed engine installer, called a Hyster forklift. The work is taking place in Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida. The removal is part of Discovery's transition and retirement processing. Work performed on Discovery is expected to help rocket designers build next-generation spacecraft and prepare the shuttle for future public display. NASA/Jim Grossmann
Crowd computing: using competitive dynamics to develop and refine highly predictive models.
Bentzien, Jörg; Muegge, Ingo; Hamner, Ben; Thompson, David C
2013-05-01
A recent application of a crowd computing platform to develop highly predictive in silico models for use in the drug discovery process is described. The platform, Kaggle™, exploits a competitive dynamic that results in model optimization as the competition unfolds. Here, this dynamic is described in detail and compared with more-conventional modeling strategies. The complete and full structure of the underlying dataset is disclosed and some thoughts as to the broader utility of such 'gamification' approaches to the field of modeling are offered. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crabtree, George; Glotzer, Sharon; McCurdy, Bill
This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software.
This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness. The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. 
Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following: Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration. Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies. Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. 
Similarly challenging is coupling of analytical data from multiple instruments and techniques that are required to link these length and time scales. Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex. Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. To validate the computational tools, software is critical infrastructure that effectively translates huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential. Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.
Andrade, E L; Bento, A F; Cavalli, J; Oliveira, S K; Freitas, C S; Marcon, R; Schwanke, R C; Siqueira, J M; Calixto, J B
2016-10-24
This review presents a historical overview of drug discovery and the non-clinical stages of the drug development process, from initial target identification and validation, through in silico assays and high throughput screening (HTS), identification of leader molecules and their optimization, the selection of a candidate substance for clinical development, and the use of animal models during the early studies of proof-of-concept (or principle). This report also discusses the relevance of validated and predictive animal models selection, as well as the correct use of animal tests concerning the experimental design, execution and interpretation, which affect the reproducibility, quality and reliability of non-clinical studies necessary to translate to and support clinical studies. Collectively, improving these aspects will certainly contribute to the robustness of both scientific publications and the translation of new substances to clinical development.
Phage display for the discovery of hydroxyapatite-associated peptides.
Jin, Hyo-Eon; Chung, Woo-Jae; Lee, Seung-Wuk
2013-01-01
In nature, proteins play a critical role in the biomineralization process. Understanding how different peptide or protein sequences selectively interact with the target crystal is of great importance. Identifying such protein structures is one of the critical steps in verifying the molecular mechanisms of biomineralization. One of the promising ways to obtain such information for a particular crystal surface is to screen combinatorial peptide libraries in a high-throughput manner. Among the many combinatorial library screening procedures, phage display is a powerful method to isolate such proteins and peptides. In this chapter, we will describe our established methods to perform phage display with inorganic crystal surfaces. Specifically, we will use hydroxyapatite as a model system for discovery of apatite-associated proteins in bone or tooth biomineralization studies. This model approach can be generalized to other desired crystal surfaces using the same experimental design principles with a little modification of the procedures. © 2013 Elsevier Inc. All rights reserved.
Executable Architectures for Modeling Command and Control Processes
2006-06-01
of introducing new NCES capabilities (such as the Federated Search ) to the ‘To Be’ model. 2 Table of Contents 1 INTRODUCTION...Conventional Method for SME Discovery ToBe.JCAS.3.2 Send Alert and/or Request OR AND ToBe.JCAS.3.4 Employ Federated Search for CAS-related Info JCAS.1.3.6.13...instant messaging, web browser, etc. • Federated Search – this capability provides a way to search enterprise contents across various search-enabled
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- finds shelter in the Vehicle Assembly Building, or VAB, after rolling from Orbiter Processing Facility-2, or OPF-2. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Ken Thornsley
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- winds its way from Orbiter Processing Facility-2, or OPF-2, to the Vehicle Assembly Building, or VAB. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Frankie Martin
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- awaits entry into the Vehicle Assembly Building, or VAB, after rolling from Orbiter Processing Facility-2, or OPF-2. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Frankie Martin
Cancer drug discovery: recent innovative approaches to tumor modeling.
Lovitt, Carrie J; Shelper, Todd B; Avery, Vicky M
2016-09-01
Cell culture models have been at the heart of anti-cancer drug discovery programs for over half a century. Advancements in cell culture techniques have seen the rapid evolution of more complex in vitro cell culture models investigated for use in drug discovery. Three-dimensional (3D) cell culture research has become a strong focal point, as this technique permits the recapitulation of the tumor microenvironment. Biologically relevant 3D cellular models have demonstrated significant promise in advancing cancer drug discovery, and will continue to play an increasing role in the future. In this review, recent advances in 3D cell culture techniques and their application in tumor modeling and anti-cancer drug discovery programs are discussed. The topics include selection of cancer cells, 3D cell culture assays (associated endpoint measurements and analysis), 3D microfluidic systems and 3D bio-printing. Although advanced cancer cell culture models and techniques are becoming commonplace in many research groups, the use of these approaches has yet to be fully embraced in anti-cancer drug applications. Furthermore, limitations associated with analyzing information-rich biological data remain unaddressed.
Knowledge Discovery and Data Mining: An Overview
NASA Technical Reports Server (NTRS)
Fayyad, U.
1995-01-01
Knowledge discovery and data mining is the process of extracting information from very large databases. Its importance is described, along with several techniques and considerations for selecting the most appropriate technique for extracting information from a particular data set.
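The abstract above describes the knowledge-discovery process in general terms without naming a concrete technique. Purely as an illustrative sketch (not drawn from the paper), the following minimal Python example shows one classic extraction step, support counting of frequently co-occurring items; the function name, the record contents, and the support threshold are all hypothetical.

```python
from collections import Counter
from itertools import combinations

def mine_frequent_pairs(transactions, min_support):
    """Return item pairs that co-occur in at least min_support transactions."""
    counts = Counter()
    for items in transactions:
        # Deduplicate within a transaction, then count each unordered pair once.
        for pair in combinations(sorted(set(items)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical survey records: which dataset types each study combined.
records = [
    {"gravity", "seismic", "magnetics"},
    {"gravity", "seismic"},
    {"magnetics", "seismic"},
    {"gravity", "seismic", "well-log"},
]
print(mine_frequent_pairs(records, min_support=3))  # prints {('gravity', 'seismic'): 3}
```

Real systems replace this brute-force pass with pruning strategies (e.g., Apriori-style candidate generation) to scale to very large databases.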
Allchin's Shoehorn, or Why Science Is Hypothetico-Deductive.
ERIC Educational Resources Information Center
Lawson, Anton E.
2003-01-01
Criticizes Allchin's article about Lawson's analysis of Galileo's discovery of Jupiter's moons. Suggests that a careful analysis of the way humans spontaneously process information and reason supports a general hypothetico-deductive theory of human information processing, reasoning, and scientific discovery. (SOE)
General view from inside the payload bay of the Orbiter ...
General view from inside the payload bay of the Orbiter Discovery approximately along its centerline looking aft towards the bulkhead of the aft fuselage. Note panels and insulation removed for access to the orbiter's subsystems for inspection and post-mission processing. This photo was taken during the processing of the Orbiter Discovery after its final mission and in preparation for its transition to the National Air and Space Museum. This view was taken in the Orbiter Processing Facility at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Direct Comparison of the Precision of the New Hologic Horizon Model With the Old Discovery Model.
Whittaker, LaTarsha G; McNamara, Elizabeth A; Vath, Savoun; Shaw, Emily; Malabanan, Alan O; Parker, Robert A; Rosen, Harold N
2017-11-22
Previous publications suggested that the precision of the new Hologic Horizon densitometer might be better than that of the previous Discovery model, but these observations were confounded by not using the same participants and technologists on both densitometers. We sought to study this issue methodically by measuring in vivo precision in both densitometers using the same patients and technologists. Precision studies for the Horizon and Discovery models were done by acquiring spine, hip, and forearm bone mineral density twice on 30 participants. The set of 4 scans on each participant (2 on the Discovery, 2 on the Horizon) was acquired by the same technologist using the same scanning mode. The pairs of data were used to calculate the least significant change according to the International Society for Clinical Densitometry guidelines. The significance of the difference between least significant changes was assessed using a Wilcoxon signed-rank test of the difference between the mean square error of the absolute value of the differences between paired measurements on the Discovery (Δ-Discovery) and the mean square error of the absolute value of the differences between paired measurements on the Horizon (Δ-Horizon). At virtually all anatomic sites, there was a nonsignificant trend for the precision to be better for the Horizon than for the Discovery. As more vertebrae were excluded from analysis, the precision deteriorated on both densitometers. The precision between densitometers was almost identical when reporting only 1 vertebral body. (1) There was a nonsignificant trend for greater precision on the new Hologic Horizon compared with the older Discovery model. (2) The difference in precision of the spine bone mineral density between the Horizon and the Discovery models decreases as fewer vertebrae are included. 
(3) These findings are substantially similar to previously published results which had not controlled as well for confounding from using different subjects and technologists. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
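The least significant change (LSC) referenced in the study above follows the ISCD convention: the root-mean-square standard deviation (RMS-SD) of paired duplicate scans, scaled by 2.77 for 95% confidence. A minimal sketch of that calculation, with hypothetical BMD values (the function name and data are illustrative, not taken from the study):

```python
import math

def least_significant_change(pairs):
    """ISCD-style least significant change from duplicate scans.

    pairs: (scan1, scan2) BMD values for each participant.
    RMS-SD = sqrt(sum((a - b)**2) / (2 * n)); LSC = 2.77 * RMS-SD (95% confidence).
    """
    n = len(pairs)
    rms_sd = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))
    return 2.77 * rms_sd

# Hypothetical spine BMD pairs (g/cm^2) for three participants.
pairs = [(1.000, 1.010), (0.950, 0.946), (1.100, 1.092)]
print(round(least_significant_change(pairs), 4))  # prints 0.0152
```

To compare two densitometers as the study did, one would compute the per-participant absolute paired differences on each machine and test them against each other, e.g., with a Wilcoxon signed-rank test.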
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems draws on model-based reasoning, fault tree analysis, rule-based inference, and other approaches. Model-based reasoning builds models of the system either from mathematical formulations or from experimental data. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made against threshold values using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time, capturing the contents of fault trees as the initial state of the trees.
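The approach described, seeding a real-time classifier with the threshold knowledge already encoded in a fault tree, can be sketched with a tiny hand-rolled decision tree. All sensor names, thresholds, and fault labels below are hypothetical; the paper's actual tree-induction method is not reproduced here.

```python
# Decision tree seeded from fault-tree threshold knowledge (illustrative only).
# Internal node: (sensor, threshold, subtree_if_below, subtree_if_at_or_above);
# leaves are fault labels.
FAULT_TREE = (
    "cabin_pressure", 95.0,
    ("o2_flow", 2.0, "pressure-and-o2-fault", "pressure-fault"),
    ("pump_temp", 80.0, "nominal", "thermal-fault"),
)

def classify(tree, sample):
    """Walk the tree via threshold comparisons until a leaf label is reached."""
    while isinstance(tree, tuple):
        sensor, threshold, below, at_or_above = tree
        tree = below if sample[sensor] < threshold else at_or_above
    return tree

# Nominal pressure and oxygen, but an over-temperature pump.
print(classify(FAULT_TREE, {"cabin_pressure": 99.0, "o2_flow": 3.0, "pump_temp": 85.0}))
# prints "thermal-fault"
```

In the automated setting, such a seeded tree would then be refined against telemetry streams, so that classification of new data happens in real time rather than by manual post-hoc analysis.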
discovery toolset for Emulytics v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritz, David; Crussell, Jonathan
The discovery toolset for Emulytics enables the construction of high-fidelity emulation models of systems. The toolset consists of a set of tools and techniques to automatically go from network discovery of operational systems to emulating those complex systems. Our toolset combines data from host discovery and network mapping tools into an intermediate representation that can then be further refined. Once the intermediate representation reaches the desired state, our toolset supports emitting the Emulytics models with varying levels of specificity based on experiment needs.
Current status and future prospects for enabling chemistry technology in the drug discovery process.
Djuric, Stevan W; Hutchins, Charles W; Talaty, Nari N
2016-01-01
This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of "dangerous" reagents. Also featured are advances in the "computer-assisted drug design" area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities.
Mass spectrometry-driven drug discovery for development of herbal medicine.
Zhang, Aihua; Sun, Hui; Wang, Xijun
2018-05-01
Herbal medicine (HM) has made a major contribution to the drug discovery process with regard to identifying natural product compounds. Currently, more attention has been focused on drug discovery from natural compounds of HM. Despite the rapid advancement of modern analytical techniques, drug discovery is still a difficult and lengthy process. Fortunately, mass spectrometry (MS) can provide useful structural information for drug discovery and has been recognized as a sensitive, rapid, and high-throughput technology for advancing drug discovery from HM in the post-genomic era. It is essential to develop an efficient, high-quality, high-throughput screening method integrated with an MS platform for early screening of candidate drug molecules from natural products. We have developed a new chinmedomics strategy reliant on MS that is capable of capturing candidate molecules and facilitating the identification of novel chemical structures in the early phase; chinmedomics-guided natural product discovery based on MS may provide an effective tool that addresses challenges in the early screening of effective constituents of herbs against disease. This critical review covers the use of MS with related techniques and methodologies for natural product discovery, biomarker identification, and determination of mechanisms of action. It also highlights high-throughput chinmedomics screening methods suitable for lead compound discovery, illustrated by recent successes. © 2016 Wiley Periodicals, Inc.
Search Pathways: Modeling GeoData Search Behavior to Support Usable Application Development
NASA Astrophysics Data System (ADS)
Yarmey, L.; Rosati, A.; Tressel, S.
2014-12-01
Recent technical advances have enabled the development of new scientific data discovery systems. Metadata brokering, linked data, and other mechanisms allow users to discover scientific data of interest across growing volumes of heterogeneous content. As this complex content meets existing discovery technologies, people looking for scientific data are presented with an ever-growing array of features to sort, filter, subset, and scan through search returns to help them find what they are looking for. This paper examines the applicability of available technologies in connecting searchers with the data of interest. What metrics can be used to track success given shifting baselines of content and technology? How well do existing technologies map to steps in user search patterns? Taking a user-driven development approach, the team behind the Arctic Data Explorer interdisciplinary data discovery application invested heavily in usability testing and user search behavior analysis. Building on earlier library community search behavior work, models were developed to better define the diverse set of thought processes and steps users took to find data of interest, here called 'search pathways'. This research builds a deeper understanding of the user community that seeks to reuse scientific data. The approach ensures that development decisions are driven by clearly articulated user needs instead of ad hoc technology trends. Initial results from this research will be presented along with lessons learned for other discovery platform development and future directions for informatics research into search pathways.
Establishing and Maintaining an Extensive Library of Patient-Derived Xenograft Models.
Mattar, Marissa; McCarthy, Craig R; Kulick, Amanda R; Qeriqi, Besnik; Guzman, Sean; de Stanchina, Elisa
2018-01-01
Patient-derived xenograft (PDX) models have recently emerged as a highly desirable platform in oncology and are expected to substantially broaden the way in vivo studies are designed and executed and to reshape drug discovery programs. However, acquisition of patient-derived samples, and propagation, annotation and distribution of PDXs are complex processes that require a high degree of coordination among clinic, surgery and laboratory personnel, and are fraught with challenges that are administrative, procedural and technical. Here, we examine in detail the major aspects of this complex process and relate our experience in establishing a PDX Core Laboratory within a large academic institution.
White, David T; Eroglu, Arife Unal; Wang, Guohua; Zhang, Liyun; Sengupta, Sumitra; Ding, Ding; Rajpurohit, Surendra K; Walker, Steven L; Ji, Hongkai; Qian, Jiang; Mumm, Jeff S
2017-01-01
The zebrafish has emerged as an important model for whole-organism small-molecule screening. However, most zebrafish-based chemical screens have achieved only mid-throughput rates. Here we describe a versatile whole-organism drug discovery platform that can achieve true high-throughput screening (HTS) capacities. This system combines our automated reporter quantification in vivo (ARQiv) system with customized robotics, and is termed ‘ARQiv-HTS’. We detail the process of establishing and implementing ARQiv-HTS: (i) assay design and optimization, (ii) calculation of sample size and hit criteria, (iii) large-scale egg production, (iv) automated compound titration, (v) dispensing of embryos into microtiter plates, and (vi) reporter quantification. We also outline what we see as best practice strategies for leveraging the power of ARQiv-HTS for zebrafish-based drug discovery, and address technical challenges of applying zebrafish to large-scale chemical screens. Finally, we provide a detailed protocol for a recently completed inaugural ARQiv-HTS effort, which involved the identification of compounds that elevate insulin reporter activity. Compounds that increased the number of insulin-producing pancreatic beta cells represent potential new therapeutics for diabetic patients. For this effort, individual screening sessions took 1 week to conclude, and sessions were performed iteratively approximately every other day to increase throughput. At the conclusion of the screen, more than a half million drug-treated larvae had been evaluated. Beyond this initial example, however, the ARQiv-HTS platform is adaptable to almost any reporter-based assay designed to evaluate the effects of chemical compounds in living small-animal models. ARQiv-HTS thus enables large-scale whole-organism drug discovery for a variety of model species and from numerous disease-oriented perspectives. PMID:27831568
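Step (ii) above, calculating sample size for a screening session, can be sketched with a standard two-sample normal-approximation power calculation. This is a generic textbook formula, not the authors' exact procedure; the effect size and variance values are hypothetical.

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Two-sample normal-approximation sample size per group for detecting
    a mean difference `delta` given a common standard deviation `sigma`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = z.inv_cdf(power)           # desired power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical: detect a 0.5-SD shift in reporter signal between
# drug-treated and control larvae at alpha = 0.05 with 80% power.
n = sample_size_per_group(delta=0.5, sigma=1.0)
print(n)  # 63 larvae per group
```

Reporter-based screens typically use larger per-well replicates than this minimum to absorb dispensing and developmental variability, so the formula gives a floor rather than a recommendation.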
3D bioprinting for drug discovery and development in pharmaceutics.
Peng, Weijie; Datta, Pallab; Ayan, Bugra; Ozbolat, Veli; Sosnoski, Donna; Ozbolat, Ibrahim T
2017-07-15
Successful launch of a commercial drug requires significant investment of time and financial resources, and late-stage failures are therefore catastrophic for drug discovery programs. This calls for constant innovation in technologies that can reliably predict the efficacy, and more importantly the toxicology, of a compound early in the drug discovery process, before clinical trials. Though computational advances have enabled more rational in silico design, in vitro experimental studies still need to gain industry confidence and improve in vitro-in vivo correlations. In this quest, owing to their ability to mimic the spatial and chemical attributes of native tissues, three-dimensional (3D) tissue models have now been shown to provide better results for drug screening than traditional two-dimensional (2D) models. However, in vitro fabrication of living tissues has remained a bottleneck in realizing the full potential of 3D models. Recent advances in bioprinting provide a valuable tool for fabricating biomimetic constructs, which can be applied at different stages of drug discovery research. This paper presents the first comprehensive review of bioprinting techniques applied to the fabrication of 3D tissue models for pharmaceutical studies. A comparative evaluation of different bioprinting modalities is performed to assess their performance and suitability for fabricating 3D tissue models for pharmaceutical use, as the selection of a bioprinting modality plays a crucial role in the efficacy and toxicology testing of drugs and accelerates the drug development cycle. In addition, limitations of current tissue models are discussed thoroughly, and future prospects for the role of bioprinting in pharmaceutics are provided to the reader. Present advances in tissue biofabrication have a crucial role to play in helping the pharmaceutical development process achieve its objectives.
The advent of three-dimensional (3D) models, in particular, is viewed with immense interest by the community due to their ability to mimic in vivo hierarchical tissue architecture and heterogeneous composition. Successful realization of 3D models will not only provide greater in vitro-in vivo correlation than two-dimensional (2D) models, but may also eventually replace pre-clinical animal testing, which has its own shortcomings. Amongst all fabrication techniques, bioprinting, comprising several distinct modalities (extrusion-, droplet-, and laser-based bioprinting), is emerging as the most viable technique for creating biomimetic tissue constructs. Notwithstanding the interest in bioprinting among pharmaceutical development researchers, there is limited comparative literature to guide the selection of bioprinting processes and associated considerations, such as bioink selection, for a particular pharmaceutical study. Thus, this work emphasizes these aspects of bioprinting and presents them from the perspective of the differing requirements of pharmaceutical studies such as in vitro predictive toxicology, high-throughput screening, drug delivery, and tissue-specific efficacy. Moreover, since bioprinting techniques are mostly applied in regenerative medicine and tissue engineering, a comparative analysis of similarities and differences is also expounded to help researchers make informed decisions based on contemporary literature. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Nagamani, S; Gaur, A S; Tanneeru, K; Muneeswaran, G; Madugula, S S; Consortium, Mpds; Druzhilovskiy, D; Poroikov, V V; Sastry, G N
2017-11-01
Molecular property diagnostic suite (MPDS) is a Galaxy-based open source drug discovery and development platform. MPDS web portals are designed for several diseases, such as tuberculosis, diabetes mellitus, and other metabolic disorders, specifically aimed to evaluate and estimate the drug-likeness of a given molecule. MPDS consists of three modules, namely data libraries, data processing, and data analysis tools which are configured and interconnected to assist drug discovery for specific diseases. The data library module encompasses vast information on chemical space, wherein the MPDS compound library comprises 110.31 million unique molecules generated from public domain databases. Every molecule is assigned with a unique ID and card, which provides complete information for the molecule. Some of the modules in the MPDS are specific to the diseases, while others are non-specific. Importantly, a suitably altered protocol can be effectively generated for another disease-specific MPDS web portal by modifying some of the modules. Thus, the MPDS suite of web portals shows great promise to emerge as disease-specific portals of great value, integrating chemoinformatics, bioinformatics, molecular modelling, and structure- and analogue-based drug discovery approaches.
Toledo-Pereyra, Luis H
2008-01-01
I understand discovery as the essence of thinking man, or to paraphrase the notable French philosopher René Descartes, "I think, therefore I discover." In this study, I introduce discovery as the foundation of modern science. Discovery consists of six stages or elements, including: concept, belief, ability, support, proof, and protection. Each element is discussed within the context of the whole discovery enterprise. Fundamental tenets for understanding discovery are given throughout the paper, and a few examples illustrate the significance of some of the most important elements. I invite clinicians, researchers, and/or clinical researchers to integrate themselves into the active process of discovery. Remember--I think, therefore I discover.
Better cancer biomarker discovery through better study design.
Rundle, Andrew; Ahsan, Habibul; Vineis, Paolo
2012-12-01
High-throughput laboratory technologies coupled with sophisticated bioinformatics algorithms have tremendous potential for discovering novel biomarkers, or profiles of biomarkers, that could serve as predictors of disease risk, response to treatment, or prognosis. We discuss methodological issues in wedding high-throughput approaches for biomarker discovery with the case-control study designs typically used in biomarker discovery studies, focusing especially on nested case-control designs. We review principles for nested case-control study design in relation to biomarker discovery studies and describe how the efficiency of biomarker discovery can be affected by study design choices. We develop a simulated prostate cancer cohort data set and a series of biomarker discovery case-control studies nested within the cohort to illustrate how study design choices can influence the biomarker discovery process. Common elements of nested case-control design, namely incidence density sampling and matching of controls to cases, are not typically factored correctly into biomarker discovery analyses, inducing bias in the discovery process. We illustrate how incidence density sampling and matching of controls to cases reduce the apparent specificity of truly valid biomarkers 'discovered' in a nested case-control study. We also propose and demonstrate a new case-control matching protocol, which we call 'antimatching', that improves the efficiency of biomarker discovery studies. For a valid but as yet undiscovered biomarker, disjunctions between correctly designed epidemiologic studies and the practice of biomarker discovery reduce the likelihood that the true biomarker will be discovered and increase the false-positive discovery rate. © 2012 The Authors. European Journal of Clinical Investigation © 2012 Stichting European Society for Clinical Investigation Journal Foundation.
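The incidence density sampling discussed above can be sketched on a toy cohort: for each case, controls are drawn from subjects still at risk at the case's event time. This is an illustrative sketch of the general sampling scheme, not the authors' simulation or their 'antimatching' protocol; the cohort data are invented.

```python
import random

def incidence_density_sample(cohort, n_controls=1, seed=0):
    """For each case, sample controls from subjects still at risk
    (follow-up time beyond the case's event time)."""
    rng = random.Random(seed)
    cases = sorted((s for s in cohort if s["case"]), key=lambda s: s["time"])
    matched = []
    for case in cases:
        # Risk set: everyone (including future cases) still under
        # observation when this case occurs.
        at_risk = [s for s in cohort
                   if s is not case and s["time"] > case["time"]]
        controls = rng.sample(at_risk, min(n_controls, len(at_risk)))
        matched.append((case["id"], [c["id"] for c in controls]))
    return matched

# Toy cohort: follow-up time in years; case=True means disease onset at `time`.
cohort = [
    {"id": 1, "time": 2.0, "case": True},
    {"id": 2, "time": 5.0, "case": False},
    {"id": 3, "time": 3.5, "case": True},
    {"id": 4, "time": 6.0, "case": False},
    {"id": 5, "time": 4.0, "case": False},
]
sets = incidence_density_sample(cohort)
print(sets)  # one control set per case, drawn from the risk set at event time
```

Note that a subject sampled as a control here may become a case later; that is a deliberate feature of incidence density sampling, and ignoring it is one of the design errors the abstract warns about.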
Modeling Emergence in Neuroprotective Regulatory Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.; Haack, Jereme N.; McDermott, Jason E.
2013-01-05
The use of predictive modeling in the analysis of gene expression data can greatly accelerate the pace of scientific discovery in biomedical research by enabling in silico experimentation to test disease triggers and potential drug therapies. Techniques that focus on modeling emergence, such as agent-based modeling and multi-agent simulations, are of particular interest as they support the discovery of pathways that may have never been observed in the past. Thus far, these techniques have been primarily applied at the multi-cellular level, or have focused on signaling and metabolic networks. We present an approach where emergence modeling is extended to regulatory networks and demonstrate its application to the discovery of neuroprotective pathways. An initial evaluation of the approach indicates that emergence modeling provides novel insights for the analysis of regulatory networks that can advance the discovery of acute treatments for stroke and other diseases.
Zur, Hadas; Tuller, Tamir
2016-01-01
mRNA translation is the fundamental process of decoding the information encoded in mRNA molecules by the ribosome for the synthesis of proteins. The centrality of this process in various biomedical disciplines such as cell biology, evolution and biotechnology, encouraged the development of dozens of mathematical and computational models of translation in recent years. These models aimed at capturing various biophysical aspects of the process. The objective of this review is to survey these models, focusing on those based and/or validated on real large-scale genomic data. We consider aspects such as the complexity of the models, the biophysical aspects they regard and the predictions they may provide. Furthermore, we survey the central systems biology discoveries reported on their basis. This review demonstrates the fundamental advantages of employing computational biophysical translation models in general, and discusses the relative advantages of the different approaches and the challenges in the field. PMID:27591251
Integrate Data into Scientific Workflows for Terrestrial Biosphere Model Evaluation through Brokers
NASA Astrophysics Data System (ADS)
Wei, Y.; Cook, R. B.; Du, F.; Dasgupta, A.; Poco, J.; Huntzinger, D. N.; Schwalm, C. R.; Boldrini, E.; Santoro, M.; Pearlman, J.; Pearlman, F.; Nativi, S.; Khalsa, S.
2013-12-01
Terrestrial biosphere models (TBMs) have become integral tools for extrapolating local observations and process-level understanding of land-atmosphere carbon exchange to larger regions. Model-model and model-observation intercomparisons are critical to understand the uncertainties within model outputs, to improve model skill, and to improve our understanding of land-atmosphere carbon exchange. The DataONE Exploration, Visualization, and Analysis (EVA) working group is evaluating TBMs using scientific workflows in UV-CDAT/VisTrails. This workflow-based approach promotes collaboration and improved tracking of evaluation provenance. But challenges still remain. The multi-scale and multi-discipline nature of TBMs makes it necessary to include diverse and distributed data resources in model evaluation. These include, among others, remote sensing data from NASA, flux tower observations from various organizations including DOE, and inventory data from US Forest Service. A key challenge is to make heterogeneous data from different organizations and disciplines discoverable and readily integrated for use in scientific workflows. This presentation introduces the brokering approach taken by the DataONE EVA to fill the gap between TBMs' evaluation scientific workflows and cross-organization and cross-discipline data resources. The DataONE EVA started the development of an Integrated Model Intercomparison Framework (IMIF) that leverages standards-based discovery and access brokers to dynamically discover, access, and transform (e.g. subset and resampling) diverse data products from DataONE, Earth System Grid (ESG), and other data repositories into a format that can be readily used by scientific workflows in UV-CDAT/VisTrails. The discovery and access brokers serve as an independent middleware that bridge existing data repositories and TBMs evaluation scientific workflows but introduce little overhead to either component. 
In the initial work, an OpenSearch-based discovery broker is leveraged to provide a consistent mechanism for data discovery. Standards-based data services, including Open Geospatial Consortium (OGC) Web Coverage Service (WCS) and THREDDS are leveraged to provide on-demand data access and transformations through the data access broker. To ease the adoption of broker services, a package of broker client VisTrails modules have been developed to be easily plugged into scientific workflows. The initial IMIF has been successfully tested in selected model evaluation scenarios involved in the NASA-funded Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP).
Potential of agricultural fungicides for antifungal drug discovery.
Jampilek, Josef
2016-01-01
While it is true that only a small fraction of fungal species are responsible for human mycoses, the increasing prevalence of fungal diseases has highlighted an urgent need to develop new antifungal drugs, especially for systemic administration. This contribution focuses on the similarities between agricultural fungicides and drugs. Inorganic, organometallic and organic compounds can be found amongst agricultural fungicides. Furthermore, fungicides are designed and developed in a similar fashion to drugs based on similar rules and guidelines, with fungicides also having to meet similar criteria of lead-likeness and/or drug-likeness. Modern approved specific-target fungicides are well-characterized entities with a proposed structure-activity relationships hypothesis and a defined mode of action. Extensive toxicological evaluation, including mammalian toxicology assays, is performed during the whole discovery and development process. Thus modern agrochemical research (design of modern agrochemicals) comes close to drug design, discovery and development. Therefore, modern specific-target fungicides represent excellent lead-like structures/models for novel drug design and development.
Repurposing High-Throughput Image Assays Enables Biological Activity Prediction for Drug Discovery.
Simm, Jaak; Klambauer, Günter; Arany, Adam; Steijaert, Marvin; Wegner, Jörg Kurt; Gustin, Emmanuel; Chupakhin, Vladimir; Chong, Yolanda T; Vialard, Jorge; Buijnsters, Peter; Velter, Ingrid; Vapirev, Alexander; Singh, Shantanu; Carpenter, Anne E; Wuyts, Roel; Hochreiter, Sepp; Moreau, Yves; Ceulemans, Hugo
2018-05-17
In both academia and the pharmaceutical industry, large-scale assays for drug discovery are expensive and often impractical, particularly for the increasingly important physiologically relevant model systems that require primary cells, organoids, whole organisms, or expensive or rare reagents. We hypothesized that data from a single high-throughput imaging assay can be repurposed to predict the biological activity of compounds in other assays, even those targeting alternate pathways or biological processes. Indeed, quantitative information extracted from a three-channel microscopy-based screen for glucocorticoid receptor translocation was able to predict assay-specific biological activity in two ongoing drug discovery projects. In these projects, repurposing increased hit rates by 50- to 250-fold over that of the initial project assays while increasing the chemical structure diversity of the hits. Our results suggest that data from high-content screens are a rich source of information that can be used to predict and replace customized biological assays. Copyright © 2018 Elsevier Ltd. All rights reserved.
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery ventures out in public seemingly "undressed" -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors. The shuttle is rolling from Orbiter Processing Facility-2, or OPF-2, to the Vehicle Assembly Building, or VAB. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Jim Grossmann
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- has arrived at the door of the Vehicle Assembly Building, or VAB, from Orbiter Processing Facility-2, or OPF-2, in the background. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Jim Grossmann
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- has arrived at the door of the Vehicle Assembly Building, or VAB, from Orbiter Processing Facility-2, or OPF-2. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Frankie Martin
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- rolls past Orbiter Processing Facility-3, or OPF-3, at right, on its way from OPF-2 to the Vehicle Assembly Building, or VAB. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Frankie Martin
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- rolls past the Thermal Protection System Facility, at right, on its way from Orbiter Processing Facility-2, or OPF-2, to the Vehicle Assembly Building, or VAB. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Frankie Martin
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery, as it is seldom seen in public -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- rolls out of Orbiter Processing Facility-2, or OPF-2, on its way to the Vehicle Assembly Building, or VAB. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Jim Grossmann
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- rolls past the Thermal Protection System Facility, at right, on its way from Orbiter Processing Facility-2, or OPF-2, to the Vehicle Assembly Building, or VAB. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Jim Grossmann
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- is welcomed into the Vehicle Assembly Building, or VAB, after its roll from Orbiter Processing Facility-2, or OPF-2. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Ken Thornsley
2011-07-13
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, space shuttle Discovery -- its nose encased in protective plastic, its cockpit windows covered, and strongbacks attached to its payload bay doors -- rolls out of Orbiter Processing Facility-2, or OPF-2, on its move to the Vehicle Assembly Building, or VAB. Discovery will be stored inside the VAB for approximately one month while shuttle Atlantis undergoes processing in OPF-2 following its final mission, STS-135. Discovery flew its 39th and final mission, STS-133, in February and March 2011, and currently is being prepared for public display at the Smithsonian's National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia. For more information about Discovery's Transition and Retirement, visit www.nasa.gov/mission_pages/shuttle/launch/discovery_rss_collection_archive_1.html. Photo credit: NASA/Ken Thornsley
Web service discovery among large service pools utilising semantic similarity and clustering
NASA Astrophysics Data System (ADS)
Chen, Fuzan; Li, Minqiang; Wu, Harris; Xie, Lingli
2017-03-01
With the rapid development of electronic business, Web services have attracted much attention in recent years. Enterprises can combine individual Web services to provide new value-added services. An emerging challenge is the timely discovery of close matches to service requests among large service pools. In this study, we first define a new semantic similarity measure combining functional similarity and process similarity. We then present a service discovery mechanism that utilises the new semantic similarity measure for service matching. All the published Web services are pre-grouped into functional clusters prior to the matching process. For a user's service request, the discovery mechanism first identifies matching services clusters and then identifies the best matching Web services within these matching clusters. Experimental results show that the proposed semantic discovery mechanism performs better than a conventional lexical similarity-based mechanism.
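A combined similarity measure of the kind described above can be sketched as a weighted blend of functional similarity (cosine over term-weight vectors) and process similarity (overlap of inputs and outputs). The weights, term vectors, and service descriptions below are all illustrative assumptions, not the authors' actual formulation, and the clustering step is reduced to a direct best-match search.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_similarity(req, svc, w_func=0.6, w_proc=0.4):
    """Weighted blend of functional (term-vector) similarity and process
    (input/output Jaccard) similarity; the weights are illustrative."""
    func = cosine(req["terms"], svc["terms"])
    io_req = req["inputs"] | req["outputs"]
    io_svc = svc["inputs"] | svc["outputs"]
    union = io_req | io_svc
    proc = len(io_req & io_svc) / len(union) if union else 0.0
    return w_func * func + w_proc * proc

# Hypothetical service request and published services.
request = {"terms": {"flight": 1.0, "booking": 0.8},
           "inputs": {"city", "date"}, "outputs": {"ticket"}}
services = [
    {"name": "BookFlight", "terms": {"flight": 0.9, "booking": 0.7},
     "inputs": {"city", "date"}, "outputs": {"ticket"}},
    {"name": "WeatherReport", "terms": {"weather": 1.0},
     "inputs": {"city"}, "outputs": {"forecast"}},
]
best = max(services, key=lambda s: combined_similarity(request, s))
print(best["name"])  # BookFlight
```

In the clustered setting the same score would first rank functional clusters by their centroids and then rank only the services inside the winning clusters, which is what makes the approach tractable over large service pools.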
Closeup view of the aft fuselage of the Orbiter Discovery ...
Close-up view of the aft fuselage of the Orbiter Discovery on the starboard side looking forward. This view is of the attach surface for the Orbiter Maneuvering System/Reaction Control System (OMS/RCS) Pod. The OMS/RCS pods are removed for processing and reconditioning at another facility. This view was taken from a service platform in the Orbiter Processing Facility at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Connecting the virtual world of computers to the real world of medicinal chemistry.
Glen, Robert C
2011-03-01
Drug discovery involves the simultaneous optimization of chemical and biological properties, usually in a single small molecule, which modulates one of nature's most complex systems: the balance between human health and disease. The increased use of computer-aided methods is having a significant impact on all aspects of the drug-discovery and development process and with improved methods and ever faster computers, computer-aided molecular design will be ever more central to the discovery process.
Current status and future prospects for enabling chemistry technology in the drug discovery process
Djuric, Stevan W.; Hutchins, Charles W.; Talaty, Nari N.
2016-01-01
This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of “dangerous” reagents. Also featured are advances in the “computer-assisted drug design” area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities. PMID:27781094
Discovery of the Self through the Writing Process: Autobiography as a Heuristic of Identity.
ERIC Educational Resources Information Center
Pitts, Mary Ellen
Although the recent thrust toward writing as interaction with a text has led to de-emphasis of personal-experience writing per se, autobiography, if approached in the context of textuality (in Roland Barthes's sense), can provide a model for writing as a means of discovering one's identity--of interacting with life as text and with the written…
From drug to protein: using yeast genetics for high-throughput target discovery.
Armour, Christopher D; Lum, Pek Yee
2005-02-01
The budding yeast Saccharomyces cerevisiae has long been an effective eukaryotic model system for understanding basic cellular processes. The genetic tractability and ease of manipulation in the laboratory make yeast well suited for large-scale chemical and genetic screens. Several recent studies describing the use of yeast genetics for high-throughput drug target identification are discussed in this review.
ERIC Educational Resources Information Center
Thirioux, Berangere; Jorland, Gerard; Bret, Michel; Tramus, Marie-Helene; Berthoz, Alain
2009-01-01
Researchers have recently reintroduced the own-body in the center of the social interaction theory. From the discovery of the mirror neurons in the ventral premotor cortex of the monkey's brain, a human "embodied" model of interindividual relationship based on simulation processes has been advanced, according to which we tend to embody…
6-D, A Process Framework for the Design and Development of Web-based Systems.
ERIC Educational Resources Information Center
Christian, Phillip
2001-01-01
Explores how the 6-D framework can form the core of a comprehensive systemic strategy and help provide a supporting structure for more robust design and development while allowing organizations to support whatever methods and models best suit their purpose. 6-D stands for the phases of Web design and development: Discovery, Definition, Design,…
Structure-based discovery and binding site analysis of histamine receptor ligands.
Kiss, Róbert; Keserű, György M
2016-12-01
The application of structure-based drug discovery in histamine receptor projects was previously hampered by the lack of experimental structures. The publication of the first X-ray structure of the histamine H1 receptor has been followed by several successful virtual screens and binding site analysis studies of H1-antihistamines. This structure together with several other recently solved aminergic G-protein coupled receptors (GPCRs) enabled the development of more realistic homology models for H2, H3 and H4 receptors. Areas covered: In this paper, the authors review the development of histamine receptor models and their application in drug discovery. Expert opinion: In the authors' opinion, the application of atomistic histamine receptor models has played a significant role in understanding key ligand-receptor interactions as well as in the discovery of novel chemical starting points. The recently solved H1 receptor structure is a major milestone in structure-based drug discovery; however, our analysis also demonstrates that for building H3 and H4 receptor homology models, other GPCRs may be more suitable as templates. For these receptors, the authors envisage that the development of higher quality homology models will significantly contribute to the discovery and optimization of novel H3 and H4 ligands.
Physics Guided Data Science in the Earth Sciences
NASA Astrophysics Data System (ADS)
Ganguly, A. R.
2017-12-01
Even as the geosciences are becoming relatively data-rich owing to remote sensing and archived model simulations, established physical understanding and process knowledge cannot be ignored. The ability to leverage both physics and data-intensive sciences may lead to new discoveries and predictive insights. A principled approach to physics guided data science, where physics informs feature selection, output constraints, and even the architecture of the learning models, is motivated. The possibility of hybrid physics and data science models at the level of component processes is discussed. The challenges and opportunities, as well as the relations to other approaches such as data assimilation - which also bring physics and data together - are discussed. Case studies are presented in climate, hydrology and meteorology.
The high road to success: how investing in ethics enhances corporate objectives.
Dashefsky, Richard
2003-01-01
There is a growing gap between the tidal wave of information emerging from the Human Genome Project and other molecular biology initiatives, and the clinical research needed to transform these discoveries into new diagnostics and therapeutics. While genomics-based technologies are being rapidly integrated into pharmaceutical R&D, many steps in the experimental process are still reliant on traditional surrogate model systems whose predictive power about human disease is incomplete or inaccurate. There is a growing trend underway in the research community to introduce actual human disease understanding as early as possible into discovery, thereby improving the accuracy of results throughout the R&D continuum. Such an approach (known as clinical genomics: the large-scale study of genes in the context of actual human disease) requires the availability of large quantities of ethically and legally sourced, high-quality human tissues with associated clinical information. Heretofore, no source could meet all of these requirements. Ardais Corporation was the first to address this need by pioneering a systematized, standardized network for the collection, processing, dissemination and research application of human tissue and associated clinical information, all of which rest on the highest ethical standards. Based on a novel model of collaboration between industry and the academic/medical community, Ardais has created procedures, structures, technologies, and information tools that collectively comprise a new paradigm in the application of human disease to biomedical research. Ardais now serves as a clinical genomics resource to dozens of academic researchers and biopharmaceutical companies, providing products and services to accelerate and improve drug discovery and development.
19 CFR 210.27 - General provisions governing discovery.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false General provisions governing discovery. 210.27 Section 210.27 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Discovery and Compulsory Process § 210.27 General...
2004-01-22
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, Stephanie Stilson, NASA vehicle manager for Discovery, stands in front of a leading edge on the wing of Discovery. She is being filmed for a special feature on the KSC Web about the recent Orbiter Major Modification period on Discovery, which included inspection, modifications and reservicing of most systems onboard, plus installation of a Multifunction Electronic Display Subsystem (MEDS) - a state-of-the-art “glass cockpit.” The orbiter is now being prepared for eventual launch on a future mission.
Flood AI: An Intelligent System for Discovery and Communication of Disaster Knowledge
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.
2017-12-01
Communities are not immune from extreme events or natural disasters that can lead to large-scale consequences for the nation and public. Improving resilience to better prepare, plan, recover, and adapt to disasters is critical to reduce the impacts of extreme events. The National Research Council (NRC) report discusses the topic of how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data, information, and the gaps and knowledge challenges that need to be addressed, and suggests that every individual access risk and vulnerability information to make their communities more resilient. This project presents Flood AI, an intelligent system for flooding that improves societal preparedness by providing a knowledge engine using voice recognition, artificial intelligence, and natural language processing based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine utilizes the flood ontology and concepts to connect user input to relevant knowledge discovery channels on flooding through a data acquisition and processing framework built on environmental observations, forecast models, and knowledge bases. Communication channels of the framework include web-based systems, agent-based chat bots, smartphone applications, automated web workflows, and smart home devices, opening up knowledge discovery for flooding to many unique use cases.
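The ontology-driven routing of user questions to knowledge discovery channels might look roughly like this. The terms, channel names, and matching rule are invented for illustration and are far simpler than Flood AI's actual NLP pipeline:

```python
# Hypothetical mini-ontology mapping disaster-related terms to the
# knowledge discovery channels that can answer them
ontology = {
    "flood stage": "sensor-observations",
    "rainfall": "sensor-observations",
    "forecast": "forecast-models",
    "evacuation": "knowledge-base",
}

def route(question):
    """Return the channels whose ontology terms appear in the question,
    falling back to a default channel when nothing matches."""
    q = question.lower()
    hits = {channel for term, channel in ontology.items() if term in q}
    return sorted(hits) or ["knowledge-base"]

print(route("What is the flood stage and rainfall near Iowa City?"))
# → ['sensor-observations']
```

A production system would replace the substring match with entity recognition and ontology reasoning, but the routing idea is the same: the ontology decides which data source answers which question.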
Knowledge Discovery from Vibration Measurements
Li, Jian; Wang, Daoyao
2014-01-01
The framework and particular algorithms of the pattern recognition process are widely adopted in structural health monitoring (SHM). However, as a part of the overall process of knowledge discovery in databases (KDD), the results of pattern recognition are only changes and patterns of changes of data features. In this paper, based on the similarity between KDD and SHM and considering the particularity of SHM problems, a four-step framework of SHM is proposed which extends the final goal of SHM from detecting damage to extracting knowledge to facilitate decision making. The purposes and proper methods of each step of this framework are discussed. To demonstrate the proposed SHM framework, a specific SHM method composed of second-order structural parameter identification, statistical control chart analysis, and system reliability analysis is then presented. To examine the performance of this SHM method, real sensor data measured from a lab-size steel bridge model structure are used. The developed four-step framework of SHM has the potential to clarify the process of SHM and facilitate the further development of SHM techniques. PMID:24574933
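The statistical control chart step of such a method can be illustrated with a minimal Shewhart-style chart. The stiffness values and control limits below are hypothetical, not data from the paper's bridge model:

```python
import statistics

def control_chart(baseline, monitored, k=3.0):
    """Shewhart-style chart: flag monitored samples falling outside
    mean ± k*sigma of the healthy-state baseline (illustrative only)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    ucl, lcl = mu + k * sigma, mu - k * sigma
    return [i for i, x in enumerate(monitored) if not (lcl <= x <= ucl)]

# Hypothetical identified stiffness values, e.g. from successive
# second-order parameter identifications of a monitored structure
healthy = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
current = [10.0, 10.1, 9.6, 8.2, 8.1]  # trailing values suggest damage

print(control_chart(healthy, current))  # indices of out-of-control samples
```

Flagged indices mark when the identified parameter drifts beyond normal variation, which is the trigger for the subsequent reliability analysis and decision-making steps.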
Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples
Ahlbrandt, T.S.; Klett, T.R.
2005-01-01
Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods but different geologic models), between results from structural and stratigraphic assessment units in the North Sea used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered), (2) the population of fields being estimated; that is, the entire parent distribution or the undiscovered resource distribution, (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth, (5) deterministic or probabilistic models, (6) data requirements, and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods. 
A geologically based model, such as one using the total petroleum system approach, is preferred in that it combines the elements of petroleum source, reservoir, trap and seal, and the tectono-stratigraphic history of basin evolution, with petroleum resource potential. Care must be taken to demonstrate that homogeneous populations in terms of geology, geologic risk, exploration, and discovery processes are used in the assessment process. The USGS 2000 method (7th Approximation Model, EMC computational program) is robust; that is, it can be used in both mature and immature areas, and provides comparable results when using different geologic models (e.g. stratigraphic or structural) with differing numbers of subdivisions (assessment units) within the total petroleum system. © 2005 International Association for Mathematical Geology.
Bioprinting towards Physiologically Relevant Tissue Models for Pharmaceutics.
Peng, Weijie; Unutmaz, Derya; Ozbolat, Ibrahim T
2016-09-01
Improving the ability to predict the efficacy and toxicity of drug candidates earlier in the drug discovery process will speed up the introduction of new drugs into clinics. 3D in vitro systems have significantly advanced the drug screening process as 3D tissue models can closely mimic native tissues and, in some cases, the physiological response to drugs. Among various in vitro systems, bioprinting is a highly promising technology possessing several advantages such as tailored microarchitecture, high-throughput capability, coculture ability, and low risk of cross-contamination. In this opinion article, we discuss the currently available tissue models in pharmaceutics along with their limitations and highlight the possibilities of bioprinting physiologically relevant tissue models, which hold great potential in drug testing, high-throughput screening, and disease modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.
THE ART OF DATA MINING THE MINEFIELDS OF TOXICITY ...
Toxicity databases have a special role in predictive toxicology, providing ready access to historical information throughout the workflow of discovery, development, and product safety processes in drug development as well as in review by regulatory agencies. To provide accurate information within a hypothesis-building environment, the content of the databases needs to be rigorously modeled using standards and controlled vocabulary. The utilitarian purposes of databases vary widely, ranging from a source for (Q)SAR datasets for modelers to a basis for
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
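The simulated-annealing core of such a parameter search can be sketched as follows. The toy quadratic loss stands in for the paper's statistical-model-checking criterion, and all names and constants are assumptions:

```python
import math
import random

def anneal(loss, theta0, steps=2000, t0=1.0, seed=42):
    """Simulated-annealing parameter search: propose a perturbed
    parameter, accept it if it lowers the loss (or probabilistically
    if it raises it), and cool the temperature over time. A sketch of
    the search loop only; the paper couples it with sequential
    hypothesis testing rather than a deterministic loss."""
    rng = random.Random(seed)
    theta, best = theta0, theta0
    for step in range(1, steps + 1):
        temp = t0 / step                      # cooling schedule
        cand = theta + rng.gauss(0.0, 0.1)    # local proposal
        delta = loss(cand) - loss(theta)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            theta = cand
        if loss(theta) < loss(best):
            best = theta
    return best

# Toy stand-in for "distance between model behaviour and observed data":
# the unknown rate parameter to recover is 0.7
observed = 0.7
loss = lambda th: (th - observed) ** 2

print(anneal(loss, theta0=0.0))
```

The early high-temperature phase accepts uphill moves to escape local minima; as the temperature decays, the search settles near the best-fitting parameter value.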
2005-12-14
KENNEDY SPACE CENTER, FLA. -- United Space Alliance technician Dell Chapman applies tape to hold the gap filler in place on the orbiter Discovery while the glue dries. Looking on is quality inspector Travis Schlingman. Discovery is being processed in Orbiter Processing Facility Bay 3 at NASA’s Kennedy Space Center. This work is being performed due to two gap fillers that were protruding from the underside of Discovery on the first Return to Flight mission, STS-114. New installation procedures have been developed to ensure the gap fillers stay in place and do not pose any hazard during the shuttle's re-entry to the atmosphere. Discovery is the scheduled orbiter for the second space shuttle mission in the return-to-flight sequence.
Building Cognition: The Construction of Computational Representations for Scientific Discovery
ERIC Educational Resources Information Center
Chandrasekharan, Sanjay; Nersessian, Nancy J.
2015-01-01
Novel computational representations, such as simulation models of complex systems and video games for scientific discovery (Foldit, EteRNA etc.), are dramatically changing the way discoveries emerge in science and engineering. The cognitive roles played by such computational representations in discovery are not well understood. We present a…
Dynamical and Physical Models of Ecliptic Comets
NASA Astrophysics Data System (ADS)
Dones, L.; Boyce, D. C.; Levison, H. F.; Duncan, M. J.
2005-08-01
In most simulations of the dynamical evolution of the cometary reservoirs, a comet is removed from the simulation only if it is thrown from the Solar System or strikes the Sun or a planet. However, ejection or collision is probably not the fate of most active comets. Some, like 3D/Biela, disintegrate for no apparent reason, and others, such as the Sun-grazers, 16P/Brooks 2, and D/1993 F2 Shoemaker-Levy 9, are pulled apart by the Sun or a planet. Still others, like 107P/Wilson Harrington and D/1819 W1 Blanpain, are lost and then rediscovered as asteroids. Historically, amateurs discovered most comets. However, robotic surveys now dominate the discovery of comets (http://www.comethunter.de/). These surveys include large numbers of comets observed in a standard way, so the process of discovery is amenable to modeling. Understanding the selection effects for discovery of comets is a key problem in constructing models of cometary origin. To address this issue, we are starting new orbital integrations that will provide the best model to date of the population of ecliptic comets as a function of location in the Solar System and the size of the cometary nucleus, which we expect will vary with location. The integrations include the gravitational effects of the terrestrial and giant planets and, in some cases, nongravitational jetting forces. We will incorporate simple parameterizations for mantling and mass loss based upon detailed physical models. This approach will enable us to estimate the fraction of comets in different states (active, extinct, dormant, or disintegrated) and to track how the cometary size distribution changes as a function of distance from the Sun. We will compare the results of these simulations with bias-corrected models of the orbital and absolute magnitude distributions of Jupiter-family comets and Centaurs.
An integrative model for in-silico clinical-genomics discovery science.
Lussier, Yves A; Sarkar, Indra Neil; Cantor, Michael
2002-01-01
Human Genome discovery research has set the pace for Post-Genomic Discovery Research. While post-genomic fields focused at the molecular level are intensively pursued, little effort is being deployed in the later stages of molecular medicine discovery research, such as clinical-genomics. The objective of this study is to demonstrate the relevance and significance of integrating mainstream clinical informatics decision support systems with current bioinformatics genomic discovery science. This paper presents an original model enabling novel "in-silico" clinical-genomic discovery science and demonstrates its feasibility. The model is designed to mediate queries among clinical and genomic knowledge bases with relevant bioinformatic analytic tools (e.g. gene clustering). Briefly, trait-disease-gene relationships were successfully illustrated using QMR, OMIM, SNOMED-RT, GeneCluster and TreeView. The analyses were visualized as two-dimensional dendrograms of clinical observations clustered around genes. To our knowledge, this is the first study using knowledge bases of clinical decision support systems for genomic discovery. Although this study is a proof of principle, it provides a framework for the development of clinical decision-support-system driven, high-throughput clinical-genomic technologies which could potentially unveil significant high-level functions of genes.
19 CFR 210.61 - Discovery and compulsory process.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 3 2014-04-01 2014-04-01 false Discovery and compulsory process. 210.61 Section 210.61 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES... matter relevant to the motion for temporary relief and the responses thereto, including the issues of...
McGrath, John A
2017-05-01
The discovery of pathogenic mutations in inherited skin diseases represents one of the major landmarks of late 20th century molecular genetics. Mutation data can provide accurate diagnoses, improve genetic counseling, help define disease mechanisms, establish disease models, and provide a basis for translational research and testing of novel therapeutics. The process of detecting disease mutations, however, has not always been straightforward. Traditional approaches using genetic linkage or candidate gene analysis have often been limited, costly, and slow to yield new insights, but the advent of next-generation sequencing (NGS) technologies has altered the landscape of current gene discovery and mutation detection approaches. Copyright © 2017 The Author. Published by Elsevier Inc. All rights reserved.
Automated Knowledge Discovery from Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.
2006-01-01
In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.
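Landscape characterization can be illustrated by fitting a cheap surrogate to labelled simulator trials. The toy simulator and k-nearest-neighbour surrogate below are illustrative assumptions, not the authors' method:

```python
import random

def simulator(x, y):
    """Toy stand-in for an expensive numerical simulator: the
    'interesting' output behaviour occurs inside a disc."""
    return 1 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.09 else 0

# Sample the input space once and record which trials show the behaviour
rng = random.Random(0)
trials = [(rng.random(), rng.random()) for _ in range(400)]
labelled = [((x, y), simulator(x, y)) for x, y in trials]

def predict(px, py, k=5):
    """k-nearest-neighbour surrogate: characterise the landscape at new
    input points without re-running the simulator there."""
    near = sorted(labelled,
                  key=lambda t: (t[0][0] - px) ** 2 + (t[0][1] - py) ** 2)[:k]
    return round(sum(lbl for _, lbl in near) / k)

print(predict(0.5, 0.5), predict(0.05, 0.05))
```

Once trained, the surrogate maps out the region of parameter space that produces the target behavior at negligible cost, reserving fresh simulator runs for the most informative points.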
A literature review on business process modelling: new frontiers of reusability
NASA Astrophysics Data System (ADS)
Aldin, Laden; de Cesare, Sergio
2011-08-01
Business process modelling (BPM) has become fundamental for modern enterprises due to the increasing rate of organisational change. As a consequence, business processes need to be continuously (re-)designed as well as subsequently aligned with the corresponding enterprise information systems. One major problem associated with the design of business processes is reusability. Reuse of business process models has the potential of increasing the efficiency and effectiveness of BPM. This article critically surveys the existing literature on the problem of BPM reusability, and more specifically on the state-of-the-art research that can provide or suggest the 'elements' required for the development of a methodology aimed at discovering reusable conceptual artefacts in the form of patterns. The article initially clarifies the definitions of business process and business process model; then, it sets out to explore the previous research conducted in areas that have an impact on reusability in BPM. The article concludes by distilling directions for future research towards the development of a patterns-based approach to BPM; an approach that brings together the contributions made by the research community in the areas of process mining and discovery, declarative approaches and ontologies.
A New System To Support Knowledge Discovery: Telemakus.
ERIC Educational Resources Information Center
Revere, Debra; Fuller, Sherrilynne S.; Bugni, Paul F.; Martin, George M.
2003-01-01
The Telemakus System builds on the areas of concept representation, schema theory, and information visualization to enhance knowledge discovery from scientific literature. This article describes the underlying theories and an overview of a working implementation designed to enhance the knowledge discovery process through retrieval, visual and…
78 FR 35812 - Revisions to Procedural Rules
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-14
... generally id. at 5-11. Within this framework, the Postal Service offers alternatives for reforming discovery processes in N-Cases. Id. at 12-20. These alternatives include Commission-led discovery, as opposed to participant-led discovery; limits on the number of interrogatories; and clearer and stricter boundaries for...
High-throughput strategies for the discovery and engineering of enzymes for biocatalysis.
Jacques, Philippe; Béchet, Max; Bigan, Muriel; Caly, Delphine; Chataigné, Gabrielle; Coutte, François; Flahaut, Christophe; Heuson, Egon; Leclère, Valérie; Lecouturier, Didier; Phalip, Vincent; Ravallec, Rozenn; Dhulster, Pascal; Froidevaux, Rénato
2017-02-01
Innovations in novel enzyme discovery impact a wide range of industries for which biocatalysis and biotransformations represent a great challenge, e.g., the food, polymer and chemical industries. Key tools and technologies, such as bioinformatics tools to guide mutant library design, molecular biology tools to create mutant libraries, microfluidics/microplates, parallel miniscale bioreactors and mass spectrometry technologies to create high-throughput screening methods, and experimental design tools for screening and optimization, allow the discovery, development and implementation of enzymes and whole cells in (bio)processes to evolve. These technological innovations are accompanied by the development and implementation of clean and sustainable integrated processes to meet the growing needs of the chemical, pharmaceutical, environmental and biorefinery industries. This review gives an overview of the benefits of the high-throughput screening approach, from the discovery and engineering of biocatalysts to cell culture for optimizing their production in integrated processes and their extraction/purification.
Applying Cognitive Fusion to Space Situational Awareness
NASA Astrophysics Data System (ADS)
Ingram, S.; Shaw, M.; Chan, M.
With recent increases in the capability and frequency of rocket launches from countries across the world, maintaining a state-of-the-art Space Situational Awareness model is all the more necessary. We propose a fusion of the real-time, natural language processing capability provided by IBM cognitive services with ground-based sensor data of positions and trajectories of satellites in all earth orbits. We believe such insight provided by cognitive services could help determine the context of missile launches, help predict when a satellite of interest could be in danger, either by accident or by intent, and could alert interested parties to the perceived threat. We seek to implement an improved Space Situational Awareness model by developing a dynamic factor graph model informed by the fusion of ground-based “structured” sensor data with “unstructured” data from the public domain, such as news articles, blogs, and social media, in real time. To this end, we employ IBM’s cognitive services, specifically Watson Discovery. Watson Discovery allows real-time natural language processing of text, including entity extraction, keyword search, taxonomy classification, concept tagging, relation extraction, sentiment analysis, and emotion analysis. We present various scenarios that demonstrate the utility of this new Space Situational Awareness model, each of which combines past structured information with related open source data. We demonstrate that should the model come to estimate that a satellite is “of interest”, it will flag it as such, based on the most pertinent data, such as a reading from a sensor or information available online. We present and discuss the most recent iterations of the model for satellites currently available on Space-Track.org.
Alam, Fahmida; Islam, Md Asiful; Kamal, M A; Gan, Siew Hua
2016-08-13
Over the years, natural products have shown success as antidiabetics in vitro, in vivo and in clinical trials. Because natural product-derived drugs are more affordable and effective, with fewer side-effects, compared to conventional therapies, pharmaceutical research is increasingly leaning towards the discovery of new antidiabetic drugs from natural products targeting pathways or components associated with type 2 diabetes mellitus (T2DM) pathophysiology. However, the drug discovery process is very lengthy and costly, with significant challenges. Therefore, various techniques are currently being developed for the preclinical research phase of drug discovery, with the aim of developing drugs from natural products with less time and effort. In this review, we have provided an update on natural products, including fruits, vegetables, spices, nuts, beverages and mushrooms, with potential antidiabetic activities from in vivo, in vitro and clinical studies. Synergistic interactions between natural products and antidiabetic drugs, and potential antidiabetic active compounds from natural products, are also documented to pave the way for combination treatment and new drug discovery, respectively. Additionally, we briefly outline the drug discovery process, the challenges that arise during drug development from natural products, and methods to overcome those challenges, in order to make the future drug discovery process more efficient.
Terminology model discovery using natural language processing and visualization techniques.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
2006-12-01
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
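The overall pipeline, abstractly: the NLP processor emits semantic structures from free text, and recurring structures suggest slots for the terminology model. A toy sketch with hypothetical parsed tuples (not MedLEE's actual output format):

```python
from collections import Counter

# Hypothetical semantic structures as an NLP processor might emit them
# from pathology reports: (procedure, attribute-slot, value)
parsed = [
    ("biopsy", "bodysite", "skin"),
    ("biopsy", "bodysite", "colon"),
    ("excision", "bodysite", "skin"),
    ("biopsy", "technique", "needle"),
    ("biopsy", "bodysite", "skin"),
]

# Summarize which slot/value pairs recur for each procedure: frequent
# patterns suggest slots to include in the terminology model
model = {}
for proc, slot, value in parsed:
    model.setdefault(proc, Counter())[(slot, value)] += 1

for proc, counts in sorted(model.items()):
    print(proc, counts.most_common(2))
```

The frequency summary plays the role of the visualization step in the paper: it condenses a large corpus of structures into the handful of candidate model slots a terminology developer actually needs to review.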
GENESI-DR: Discovery, Access and on-Demand Processing in Federated Repositories
NASA Astrophysics Data System (ADS)
Cossu, Roberto; Pacini, Fabrizio; Parrini, Andrea; Santi, Eliana Li; Fusco, Luigi
2010-05-01
GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories) is a European Commission (EC)-funded project, kicked off in early 2008 and led by ESA; partners include space agencies (DLR, ASI, CNES), both space and non-space data providers such as ENEA (I), Infoterra (UK), K-SAT (N), NILU (N) and JRC (EU), and industrial partners such as Elsag Datamat (I), CS (F) and TERRADUE (I). GENESI-DR intends to meet the challenge of shortening "time to science" for different Earth Science disciplines in the discovery, access and use (combining, integrating, processing, …) of historical and recent Earth-related data from space, airborne and in-situ sensors, which are archived in large distributed repositories. A common dedicated infrastructure such as GENESI-DR permits the Earth Science communities to derive objective information and to share knowledge in all environmentally sensitive domains over a continuum of time and a variety of geographical scales, thus addressing urgent challenges such as Global Change. GENESI-DR federates data, information and knowledge for the management of our fragile planet, in line with the major goals of international environmental programmes such as GMES and GEO/GEOSS. As of today, 12 different Digital Repositories hosting more than 60 heterogeneous dataset series are federated in GENESI-DR. Series include satellite data, in-situ data, images acquired by airborne sensors, digital elevation models and model outputs. ESA has started providing access to: Category-1 data systematically available on the Internet; level 3 data (e.g., the GlobCover map, MERIS Global Vegetation Index); and ASAR products available in the ESA Virtual Archive and related to the Supersites initiatives. In all cases, existing data policies and security constraints are fully respected. GENESI-DR also gives access to Grid and Cloud computing resources, allowing authorized users to run a number of different processing services on the available data.
The GENESI-DR operational platform is currently being validated against several applications from different domains, such as: automatic orthorectification of SPOT data; SAR interferometry; GlobModel results visualization and verification by comparison with satellite observations; ozone estimation from ERS-GOME products and comparison with in-situ LIDAR measurements; and access to ocean-related heterogeneous data and on-the-fly generated products. The project is adopting ISO 19115, ISO 19139 and OGC standards for geospatial metadata discovery and processing, is compliant with the basis of the INSPIRE Implementing Rules for Metadata and Discovery, and uses the OpenSearch protocol with Geo extensions for data and services discovery. OpenSearch is now considered by OGC a mass-market standard for providing machine-accessible search interfaces to data repositories. GENESI-DR is gaining momentum in the Earth Science community thanks to its active participation in the GEO task force "Data Integration and Analysis Systems" and to several collaborations with EC projects. It is now extending international cooperation agreements, specifically with NASA (Goddard Earth Sciences Data and Information Services), with CEODE (the Center of Earth Observation for Digital Earth, Beijing), with the APN (Asia-Pacific Network), and with the University of Tokyo (Japanese GeoGrid and Data Integration and Analysis System).
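The OpenSearch-with-Geo-extensions interface mentioned in the abstract above amounts to filling URL template parameters, including the Geo extension's `box` (west,south,east,north) parameter. A minimal sketch follows; the endpoint URL, search terms and parameter set are illustrative assumptions, not GENESI-DR's actual service description.

```python
from urllib.parse import urlencode

def build_opensearch_geo_query(endpoint, terms, west, south, east, north, count=10):
    """Build an OpenSearch query URL using the Geo extension's 'box'
    parameter (west,south,east,north bounding box in decimal degrees)."""
    params = {
        "q": terms,                                # free-text search terms
        "box": f"{west},{south},{east},{north}",   # geo:box template parameter
        "count": count,                            # results per page
    }
    return endpoint + "?" + urlencode(params)

# hypothetical query: MERIS vegetation products over western Europe
url = build_opensearch_geo_query(
    "https://example.org/opensearch", "MERIS vegetation index",
    west=-10.0, south=35.0, east=30.0, north=60.0)
print(url)
```

The server would answer with an Atom or RSS feed of matching dataset records, which a federated catalogue such as GENESI-DR can aggregate across repositories.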
Visual Links: Discovery in Art and Science.
ERIC Educational Resources Information Center
Dake, Dennis M.
Some specific aspects of the process of discovery are explored as they are experienced in the visual arts and the physical sciences. Both fields use the same visual/brain processing system, and both disciplines share an imaginative and productive interest in the disciplined use of imagistic thinking. Many productive interactions between visual…
Collaborative Assessment for Employment Planning: Transition Assessment and the Discovery Process
ERIC Educational Resources Information Center
Stevenson, Bradley S.; Fowler, Catherine H.
2016-01-01
As the Workforce Innovation and Opportunities Act (WIOA) is implemented across the nation, special education and vocational rehabilitation professionals will need to increase their level of collaboration. One area of potential collaboration is assessment--transition assessment for the field of special education and the discovery process for adult…
Knowledge Discovery as an Aid to Organizational Creativity.
ERIC Educational Resources Information Center
Siau, Keng
2000-01-01
This article presents the concept of knowledge discovery, a process of searching for associations in large volumes of computer data, as an aid to creativity. It then discusses the various techniques in knowledge discovery. Mednick's associative theory of creative thought serves as the theoretical foundation for this research. (Contains…
19 CFR 210.33 - Failure to make or cooperate in discovery; sanctions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Failure to make or cooperate in discovery; sanctions. 210.33 Section 210.33 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Discovery and Compulsory Process...
36 CFR 800.13 - Post-review discoveries.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Post-review discoveries. 800... PROTECTION OF HISTORIC PROPERTIES The section 106 Process § 800.13 Post-review discoveries. (a) Planning for subsequent discoveries—(1) Using a programmatic agreement. An agency official may develop a programmatic...
36 CFR § 800.13 - Post-review discoveries.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Post-review discoveries. 800... PROTECTION OF HISTORIC PROPERTIES The section 106 Process § 800.13 Post-review discoveries. (a) Planning for subsequent discoveries—(1) Using a programmatic agreement. An agency official may develop a programmatic...
36 CFR 800.13 - Post-review discoveries.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Post-review discoveries. 800... PROTECTION OF HISTORIC PROPERTIES The section 106 Process § 800.13 Post-review discoveries. (a) Planning for subsequent discoveries—(1) Using a programmatic agreement. An agency official may develop a programmatic...
36 CFR 800.13 - Post-review discoveries.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Post-review discoveries. 800... PROTECTION OF HISTORIC PROPERTIES The section 106 Process § 800.13 Post-review discoveries. (a) Planning for subsequent discoveries—(1) Using a programmatic agreement. An agency official may develop a programmatic...
36 CFR 800.13 - Post-review discoveries.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Post-review discoveries. 800... PROTECTION OF HISTORIC PROPERTIES The section 106 Process § 800.13 Post-review discoveries. (a) Planning for subsequent discoveries—(1) Using a programmatic agreement. An agency official may develop a programmatic...
Quantifying Learning in Young Infants: Tracking Leg Actions During a Discovery-learning Task.
Sargent, Barbara; Reimann, Hendrik; Kubo, Masayoshi; Fetters, Linda
2015-06-01
Task-specific actions emerge from spontaneous movement during infancy. It has been proposed that task-specific actions emerge through a discovery-learning process. Here a method is described in which 3- to 4-month-old infants learn a task by discovery and their leg movements are captured to quantify the learning process. This discovery-learning task uses an infant-activated mobile that rotates and plays music in response to specified leg actions of infants. Supine infants activate the mobile by moving their feet vertically across a virtual threshold. This paradigm is unique in that as infants independently discover that their leg actions activate the mobile, the infants' leg movements are tracked using a motion capture system, allowing for the quantification of the learning process. Specifically, learning is quantified in terms of the duration of mobile activation, the position variance of the end effectors (feet) that activate the mobile, changes in hip-knee coordination patterns, and changes in hip and knee muscle torque. This information describes infant exploration and exploitation at the interplay of person and environmental constraints that support task-specific action. Subsequent research using this method can investigate how specific impairments of different populations of infants at risk for movement disorders influence the discovery-learning process for task-specific action.
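Two of the learning measures named in the abstract above, duration of mobile activation and position variance of the feet, are straightforward to compute from sampled motion-capture data. The sketch below is illustrative only: the threshold, sampling interval and foot-height samples are invented, and the paper's actual pipeline also involves coordination patterns and muscle torques not shown here.

```python
def activation_duration(foot_z, threshold, dt):
    """Total time (s) the foot is at or above the virtual threshold,
    given vertical positions sampled every dt seconds."""
    return sum(dt for z in foot_z if z >= threshold)

def position_variance(foot_z):
    """Variance of the end effector's (foot's) vertical position."""
    mean = sum(foot_z) / len(foot_z)
    return sum((z - mean) ** 2 for z in foot_z) / len(foot_z)

# hypothetical 100 Hz samples of foot height (metres)
foot_z = [0.10, 0.12, 0.16, 0.18, 0.15, 0.11]
duration = activation_duration(foot_z, threshold=0.15, dt=0.01)  # time above threshold
variance = position_variance(foot_z)                             # spread of exploration
```

A decreasing position variance together with an increasing activation duration would indicate that the infant is shifting from exploration to exploitation of the discovered leg action.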
DataHub: Knowledge-based data management for data discovery
NASA Astrophysics Data System (ADS)
Handley, Thomas H.; Li, Y. Philip
1993-08-01
Currently available database technology is largely designed for business data-processing applications and seems inadequate for scientific applications. The research described in this paper, the DataHub, will address the issues associated with this shortfall in technology utilization and development. The DataHub development is addressing the key issues in scientific data management of scientific database models and resource sharing in a geographically distributed, multi-disciplinary science research environment. Thus, the DataHub will be a server between the data suppliers and data consumers to facilitate data exchanges, to assist science data analysis, and to provide a systematic approach to science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data-driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies ranging from deductive databases, semantic data models, data discovery, knowledge representation and inferencing, and exploratory data analysis techniques to modern man-machine interfaces, DataHub will provide a prototype, integrated environment to support research scientists' needs in multiple disciplines (i.e., oceanography, geology, and atmospheric science) while addressing the more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.
Discovery stories in the science classroom
NASA Astrophysics Data System (ADS)
Arya, Diana Jaleh
School science has been criticized for its lack of emphasis on the tentative, dynamic nature of science as a process of learning more about our world. This criticism is the guiding force for the present body of work, which focuses on the question: what are the educational benefits for middle school students of reading texts that highlight the process of science in the form of a discovery narrative? This dissertation traces my journey through a review of theoretical perspectives of narrative, an analysis of first-hand accounts of scientific discovery, the complex process of developing age-appropriate, cohesive and engaging science texts for middle school students, and a comparison study (N=209) that seeks to determine the unique benefits of the scientific discovery narrative for interest in and retained understanding of conceptual information presented in middle school science texts. The 209 middle school participants were drawn from nine classrooms in two schools. Each subject read two science texts that differed in topic (the qualities of and uses for radioactive elements, and the use of telescopic technology to see planets in space) and genre (the scientific discovery narrative [SDN] and the "conceptually known exposition" [CKE] comparison text). The SDN and CKE versions for each topic were equivalent in all possible ways (initial introduction, overall conceptual accuracy, elements of human interest, coherence and readability level), save for the unique components of the discovery narrative (i.e., love for one's work, acknowledgement of the known, identification of the unknown, and the explorative or experimental process of discovery). Participants generally chose the discovery narrative version as the more interesting of the two texts.
Additional findings from the experimental study suggest that science texts in the form of SDNs elicit greater long-term retention of key conceptual information, especially when readers have little prior knowledge of a given topic. Further, ethnic minority students of lower socio-economic status (i.e., of Latino and African-American origins) demonstrated an even greater benefit from the SDN texts, suggesting that a scientist's story of discovery can help to close the gap in academic performance in science.
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization aims to discover the dominant object classes in a given image collection and localize all of their instances without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge about the given image collection is exploited to facilitate object discovery. Moreover, the topic models used in those methods suffer from the topic coherence issue: some inferred topics have no clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in the form of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy of visual words, the must-link is redefined so that it constrains only one or some topics instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes; the must-links in our approach are thus semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the effectiveness of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization.
In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
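As a point of reference for the vanilla-LDA baseline that the abstract above improves upon, a minimal collapsed Gibbs sampler for standard LDA over quantized "visual words" might look like the following. This is a generic textbook sketch, not the authors' knowledge-based Dirichlet-tree model; document and vocabulary contents are invented.

```python
import random

def lda_gibbs(docs, n_topics, n_vocab, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for vanilla LDA. docs is a list of
    lists of word ids (e.g. quantized visual words, one list per image).
    Returns the sampled topic assignment for each token."""
    rng = random.Random(seed)
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]            # doc-topic counts
    nkw = [[0] * n_vocab for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                             # topic totals
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]
            ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]          # remove token's current assignment
                ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # full conditional: p(k) ∝ (n_dk+α)(n_kw+β)/(n_k+Vβ)
                weights = [(ndk[di][k] + alpha) * (nkw[k][w] + beta)
                           / (nk[k] + n_vocab * beta) for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = t          # resample and restore counts
                ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return z

# two "images" drawn from disjoint parts of a 4-word visual vocabulary
docs = [[0, 1, 0, 1, 0, 1], [2, 3, 2, 3, 2, 3]]
topics = lda_gibbs(docs, n_topics=2, n_vocab=4)
```

The paper's contribution can be read against this baseline: must-link constraints reshape the topic-word prior (via Dirichlet trees) so that co-occurring visual words of one object class are pushed into coherent topics.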
Natural Products for Drug Discovery in the 21st Century: Innovations for Novel Drug Discovery.
Thomford, Nicholas Ekow; Senthebane, Dimakatso Alice; Rowe, Arielle; Munro, Daniella; Seele, Palesa; Maroyi, Alfred; Dzobo, Kevin
2018-05-25
The therapeutic properties of plants have been recognised since time immemorial. Many pathological conditions have been treated using plant-derived medicines. These medicines are used as concoctions or concentrated plant extracts without isolation of active compounds. Modern medicine, however, requires the isolation and purification of one or two active compounds. There remain many global health challenges, with diseases such as cancer, degenerative diseases, HIV/AIDS and diabetes for which modern medicine is struggling to provide cures. In many cases, isolating the "active compound" has rendered the compound ineffective. Drug discovery is a multidimensional problem requiring several parameters of both natural and synthetic compounds, such as safety, pharmacokinetics and efficacy, to be evaluated during drug candidate selection. The advent of the latest technologies that enhance drug design hypotheses, such as artificial intelligence and 'organ-on-chip' and microfluidics technologies, means that automation has become part of drug discovery. This has increased the speed of drug discovery and of evaluating the safety, pharmacokinetics and efficacy of candidate compounds, while allowing novel ways of drug design and synthesis based on natural compounds. Recent advances in analytical and computational techniques have opened new avenues to process complex natural products and to use their structures to derive new and innovative drugs. Indeed, we are in the era of computational molecular design as applied to natural products. Predictive computational software has contributed to the discovery of molecular targets of natural products and their derivatives. In future, the use of quantum computing, computational software and databases in modelling molecular interactions and predicting the features and parameters needed for drug development, such as pharmacokinetics and pharmacodynamics, will result in fewer false-positive leads in drug development.
This review discusses plant-based natural product drug discovery and how innovative technologies play a role in next-generation drug discovery.
A computational account of the development of the generalization of shape information.
Doumas, Leonidas A A; Hummel, John E
2010-05-01
Abecassis, Sera, Yonas, and Schwade (2001) showed that young children represent shapes more metrically, and perhaps more holistically, than do older children and adults. How does a child transition from representing objects and events as undifferentiated wholes to representing them explicitly in terms of their attributes? According to RBC (Recognition-by-Components theory; Biederman, 1987), objects are represented as collections of categorical geometric parts ("geons") in particular categorical spatial relations. We propose that the transition from holistic to more categorical visual shape processing is a function of the development of geon-like representations via a process of progressive intersection discovery. We present an account of this transition in terms of DORA (Doumas, Hummel, & Sandhofer, 2008), a model of the discovery of relational concepts. We demonstrate that DORA can learn representations of single geons by comparing objects composed of multiple geons. In addition, as DORA learns, it follows the same performance trajectory as children, originally generalizing shape more metrically/holistically and eventually generalizing categorically. Copyright © 2010 Cognitive Science Society, Inc.
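The idea of progressive intersection discovery can be caricatured with plain feature sets: repeatedly comparing objects keeps only what they share. The toy sketch below (with invented feature names) is far simpler than DORA's distributed neural representations, but it conveys the gist of how invariant, geon-like features could emerge from comparison.

```python
def compare(obj_a, obj_b):
    """Comparison highlights what two objects share: here, simply the
    intersection of their feature sets (a toy analogue of DORA's
    intersection discovery over distributed representations)."""
    return obj_a & obj_b

def discover_invariants(objects):
    """Progressively intersect feature sets across compared objects,
    converging on the features common to all of them."""
    invariant = objects[0]
    for obj in objects[1:]:
        invariant = compare(invariant, obj)
    return invariant

# hypothetical cone-like objects with varying metric/surface features
cones = [
    {"curved_cross_section", "tapering", "red", "small"},
    {"curved_cross_section", "tapering", "blue", "large"},
    {"curved_cross_section", "tapering", "green", "small"},
]
shared = discover_invariants(cones)  # the geon-like shape features survive
```

Metric particulars (colour, size) wash out over comparisons, while the categorical shape attributes remain, mirroring the developmental shift from metric/holistic to categorical generalization.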
Is Open Science the Future of Drug Development?
Shaw, Daniel L
2017-03-01
Traditional drug development models are widely perceived as opaque and inefficient, with the cost of research and development continuing to rise even as production of new drugs stays constant. Searching for strategies to improve the drug discovery process, the biomedical research field has begun to embrace open strategies, and the resulting changes are starting to reshape the industry. Open science, an umbrella term for diverse strategies that seek external input and public engagement, has become an essential tool for researchers, who are increasingly turning to collaboration, crowdsourcing, data sharing, and open sourcing to tackle some of the most pressing problems in medicine. Notable examples of such open drug development include initiatives formed around malaria and tropical disease. Open practices have found their way into the drug discovery process, from target identification and compound screening to clinical trials. This perspective argues that while open science poses some risks, including the management of collaboration and the protection of proprietary data, these strategies are, in many cases, the more efficient and ethical way to conduct biomedical research.
Is there a best strategy for drug discovery?--SMR Meeting. 13 March 2003, London, UK.
Lunec, Anna
2003-05-01
This gathering of members from academia and industry allowed the sharing of ideas and techniques for the acceleration of drug discovery, and it was clear that there is a need for a more streamlined approach to discovery and development. Clearly, new technologies will aid in the discovery process, but the abilities of the human brain to analyze and interpret data should not be overlooked, as many discoveries have been made by chance or as the result of a hunch, and it would be a shame if the advent of artificial intelligence quashed that inquisitive aspect of drug discovery.
How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Hagen, George; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narkawicz, Anthony; Dowek, Gilles
2010-01-01
In this paper we describe a process of algorithmic discovery that was driven by our goal of achieving complete, mechanically verified algorithms that compute conflict prevention bands for use in en route air traffic management. The algorithms were originally defined in the PVS specification language and subsequently have been implemented in Java and C++. We do not present the proofs in this paper; instead, we describe the process of discovery and the key ideas that enabled the final formal proof of correctness.
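For readers unfamiliar with conflict prevention bands, the underlying geometry is a closest-point-of-approach test swept over candidate headings: any heading whose projected straight-line trajectory loses minimum separation with an intruder within the lookahead time belongs to the band. The sketch below is a simplified illustration under linear-trajectory assumptions, not the mechanically verified PVS algorithms the paper describes; units, thresholds and the 10-degree sampling are arbitrary.

```python
import math

def min_horizontal_sep(rel_pos, rel_vel, lookahead):
    """Minimum distance between two straight-line trajectories within
    [0, lookahead] seconds, from 2D relative position and velocity."""
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    # unconstrained closest approach at t* = -(p.v)/|v|^2, clamped to the window
    t = 0.0 if v2 == 0 else max(0.0, min(lookahead, -(px * vx + py * vy) / v2))
    return math.hypot(px + vx * t, py + vy * t)

def conflict_band(own_pos, own_speed, intr_pos, intr_vel, sep=5.0, lookahead=300.0):
    """Headings (degrees, sampled every 10°) that would lead to loss of
    separation with the intruder: a toy conflict prevention band."""
    band = []
    for heading in range(0, 360, 10):
        rad = math.radians(heading)                       # 0° = north, clockwise
        own_vel = (own_speed * math.sin(rad), own_speed * math.cos(rad))
        rel_pos = (own_pos[0] - intr_pos[0], own_pos[1] - intr_pos[1])
        rel_vel = (own_vel[0] - intr_vel[0], own_vel[1] - intr_vel[1])
        if min_horizontal_sep(rel_pos, rel_vel, lookahead) < sep:
            band.append(heading)
    return band

# ownship at the origin; intruder 20 units due north, flying south
band = conflict_band((0.0, 0.0), 0.1, (0.0, 20.0), (0.0, -0.1))
```

The verified algorithms additionally handle vertical separation, turn dynamics and the exact band boundaries; this sketch only shows why a per-heading CPA test yields a set of headings to avoid.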
2003-12-09
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, KSC employee Joel Smith prepares an area on the orbiter Discovery for blanket installation. The blankets are part of the Orbiter Thermal Protection System, thermal shields to protect against temperatures as high as 3,000° Fahrenheit, which are produced during descent for landing. Discovery is scheduled to fly on mission STS-121 to the International Space Station.
2003-12-09
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, KSC employee Nadine Phillips prepares an area on the orbiter Discovery for blanket installation. The blankets are part of the Orbiter Thermal Protection System, thermal shields to protect against temperatures as high as 3,000° Fahrenheit, which are produced during descent for landing. Discovery is scheduled to fly on mission STS-121 to the International Space Station.
"Seeing is believing": perspectives of applying imaging technology in discovery toxicology.
Xu, Jinghai James; Dunn, Margaret Condon; Smith, Arthur Russell
2009-11-01
Efficiency and accuracy in addressing drug safety issues proactively are critical in minimizing late-stage drug attritions. Discovery toxicology has become a specialty subdivision of toxicology seeking to effectively provide early predictions and safety assessment in the drug discovery process. Among the many technologies utilized to select safer compounds for further development, in vitro imaging technology is one of the best characterized and validated to provide translatable biomarkers towards clinically-relevant outcomes of drug safety. By carefully applying imaging technologies in genetic, hepatic, and cardiac toxicology, and integrating them with the rest of the drug discovery processes, it was possible to demonstrate significant impact of imaging technology on drug research and development and substantial returns on investment.
Stryjewska, Agnieszka; Kiepura, Katarzyna; Librowski, Tadeusz; Lochyński, Stanisław
2013-01-01
Industrial biotechnology has been defined as the use and application of biotechnology for the sustainable processing and production of chemicals, materials and fuels. It makes use of biocatalysts such as microbial communities, whole-cell microorganisms or purified enzymes. These processes are described in the review. Drug design is an iterative process which begins when a chemist identifies a compound that displays an interesting biological profile and ends when both the activity profile and the chemical synthesis of the new chemical entity are optimized. Traditional approaches to drug discovery rely on a stepwise synthesis and screening program for large numbers of compounds to optimize activity profiles. Over the past ten to twenty years, scientists have used computer models of new chemical entities to help define activity profiles, geometries and reactivities. This article introduces, inter alia, the concepts of molecular modelling and contains references for further reading.
ERIC Educational Resources Information Center
Sheridan, Susan M.; DiLillo, David; Hansen, David J.; DeKraai, Mark; Koenig-Kellas, Jody; Swearer, Susan M.; Wheeler, Lorey A.
2016-01-01
According to the National Institutes of Health, "Translational research includes …the process of applying discoveries generated during research in the laboratory, and in preclinical studies, to the development of trials and studies in humans… [and] research aimed at enhancing the adoption of best practices in the community." Following this…
Neural Regeneration in Caenorhabditis elegans
El Bejjani, Rachid; Hammarlund, Marc
2013-01-01
Axon regeneration is a medically relevant process that can repair damaged neurons. This review describes current progress in understanding axon regeneration in the model organism Caenorhabditis elegans. Factors that regulate axon regeneration in C. elegans have broadly similar roles in vertebrate neurons. This means that using C. elegans as a tool to leverage discovery is a legitimate strategy for identifying conserved mechanisms of axon regeneration. PMID:22974301
Joint Service Chemical and Biological Defense Program: FY 06-07 Overview
2006-01-01
Figure captions: molecular model of human plasma-derived butyryl...; electron micrograph of bacillus spores adhering to cell membrane processes. ...agents, and radioactive fallout. CPS is integrated with the ship's Heating, Ventilation, and Air-Conditioning (HVAC) systems and provides filtered air... molecules for intervention against protein NTA. • Identify and evaluate effectiveness of spore germination inhibitors. • Expand drug discovery program
2015-01-01
The biopharmaceutics classification system (BCS) and biopharmaceutics drug distribution classification system (BDDCS) are complementary classification systems that can improve, simplify, and accelerate drug discovery, development, and regulatory processes. Drug permeability has been widely accepted as a screening tool for determining intestinal absorption via the BCS during the drug development and regulatory approval processes. Currently, predicting clinically significant drug interactions during drug development is a known challenge for industry and regulatory agencies. The BDDCS, a modification of BCS that utilizes drug metabolism instead of intestinal permeability, predicts drug disposition and potential drug–drug interactions in the intestine, the liver, and most recently the brain. Although correlations between BCS and BDDCS have been observed with drug permeability rates, discrepancies have been noted in drug classifications between the two systems utilizing different permeability models, which are accepted as surrogate models for demonstrating human intestinal permeability by the FDA. Here, we recommend the most applicable permeability models for improving the prediction of BCS and BDDCS classifications. We demonstrate that the passive transcellular permeability rate, characterized by means of permeability models that are deficient in transporter expression and paracellular junctions (e.g., PAMPA and Caco-2), will most accurately predict BDDCS metabolism. These systems will inaccurately predict BCS classifications for drugs that particularly are substrates of highly expressed intestinal transporters. Moreover, in this latter case, a system more representative of complete human intestinal permeability is needed to accurately predict BCS absorption. PMID:24628254
Larregieu, Caroline A; Benet, Leslie Z
2014-04-07
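The BCS and BDDCS described in the abstract above are each a simple two-axis grid: BCS crosses solubility with permeability, while BDDCS replaces permeability with extent of metabolism. A schematic sketch follows; the boolean inputs stand in for the regulatory solubility, permeability and metabolism criteria, which in practice involve quantitative cutoffs (e.g. fraction absorbed, dose number).

```python
def bcs_class(high_solubility, high_permeability):
    """BCS quadrants: Class 1 = high solubility/high permeability,
    2 = low/high, 3 = high/low, 4 = low/low."""
    if high_permeability:
        return 1 if high_solubility else 2
    return 3 if high_solubility else 4

def bddcs_class(high_solubility, extensive_metabolism):
    """BDDCS: the same grid, with extensive metabolism replacing
    permeability as the second axis."""
    if extensive_metabolism:
        return 1 if high_solubility else 2
    return 3 if high_solubility else 4

# a highly soluble, highly permeable, extensively metabolized drug
print(bcs_class(True, True), bddcs_class(True, True))  # 1 1
```

The discrepancies the paper discusses arise at the inputs, not in this grid: a transporter-deficient assay (PAMPA, Caco-2) may misjudge the permeability axis for transporter substrates even while it tracks the metabolism axis well.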
LSST Survey Data: Models for EPO Interaction
NASA Astrophysics Data System (ADS)
Olsen, J. K.; Borne, K. D.
2007-12-01
The potential for education and public outreach with the Large Synoptic Survey Telescope is as far reaching as the telescope itself. LSST data will be available to the public, giving anyone with a web browser a movie-like window on the Universe. The LSST project is unique in designing its data management and data access systems with the public and community users in mind. The volume of data to be generated by LSST is staggering: 30 terabytes per night, roughly 10 petabytes per year. The final database of extracted science parameters from the images will also be enormous, at 50-100 petabytes, a rich gold mine for data mining and scientific discovery. LSST will also generate 100,000 astronomical alerts per night, for 10 years. The LSST EPO team is examining models for EPO interaction with the survey data, particularly in how the community (amateurs, teachers, students, and the general public) can participate in the discovery process. We will outline some of our models of community interaction for inquiry-based science using the LSST survey data, and we invite discussion on these topics.
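The quoted data rates are mutually consistent, as a quick back-of-the-envelope check shows (assuming, for simplicity, observations every night of the year):

```python
# figures quoted in the abstract
tb_per_night = 30
alerts_per_night = 100_000
nights_per_year = 365            # simplifying assumption: no downtime

pb_per_year = tb_per_night * nights_per_year / 1000   # 1 PB = 1000 TB
total_alerts = alerts_per_night * nights_per_year * 10  # over the 10-year survey
print(pb_per_year, total_alerts)  # ~11 PB/year, consistent with "10 PB per year"
```

The same arithmetic puts the alert stream at hundreds of millions of events over the survey, which is why community-facing discovery tools need automated filtering rather than manual inspection.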
Manipulation of the mouse genome: a multiple impact resource for drug discovery and development.
Prosser, Haydn; Rastan, Sohaila
2003-05-01
Few would deny that the pharmaceutical industry's investment in genomics throughout the 1990s has yet to deliver in terms of drugs on the market. The reasons are complex and beyond the scope of this review. The unique ability to manipulate the mouse genome, however, has already had a positive impact on all stages of the drug discovery process and, increasingly, on the drug development process too. We give an overview of some recent applications of so-called 'transgenic' mouse technology in pharmaceutical research and development. We show how genetic manipulation in the mouse can be employed at multiple points in the drug discovery and development process, providing new solutions to old problems.
To ontologise or not to ontologise: An information model for a geospatial knowledge infrastructure
NASA Astrophysics Data System (ADS)
Stock, Kristin; Stojanovic, Tim; Reitsma, Femke; Ou, Yang; Bishr, Mohamed; Ortmann, Jens; Robertson, Anne
2012-08-01
A geospatial knowledge infrastructure consists of a set of interoperable components, including software, information, hardware, procedures and standards, that work together to support advanced discovery and creation of geoscientific resources, including publications, data sets and web services. The focus of the work presented is the development of such an infrastructure for resource discovery. Advanced resource discovery is intended to support scientists in finding resources that meet their needs, and focuses on representing the semantic details of the scientific resources, including the detailed aspects of the science that led to the resource being created. This paper describes an information model for a geospatial knowledge infrastructure that uses ontologies to represent these semantic details, including knowledge about domain concepts, the scientific elements of the resource (analysis methods, theories and scientific processes) and web services. This semantic information can be used to enable more intelligent search over scientific resources, and to support new ways to infer and visualise scientific knowledge. The work describes the requirements for semantic support of a knowledge infrastructure, and analyses the different options for information storage based on the twin goals of semantic richness and syntactic interoperability to allow communication between different infrastructures. Such interoperability is achieved by the use of open standards, and the architecture of the knowledge infrastructure adopts such standards, particularly from the geospatial community. The paper then describes an information model that uses a range of different types of ontologies, explaining those ontologies and their content. The information model was successfully implemented in a working geospatial knowledge infrastructure, but the evaluation identified some issues in creating the ontologies.
A bioinformatics knowledge discovery in text application for grid computing
Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco
2009-01-01
Background: A fundamental activity in biomedical research is knowledge discovery: searching through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible means of tackling the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits knowledge discovery applications on scalable, distributed computing systems to make intensive use of ICT resources. Methods: The development of a grid application for Knowledge Discovery in Text, built using a middleware-solution-based methodology, is presented. The system must be able to accept a user application model and process jobs, splitting them into many parallel jobs distributed across the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of the parallel jobs. These operational requirements led to the design of a middleware that is specialized through user application modules. It includes a graphical user interface giving access to a node search system, a load-balancing system, and a transfer optimizer that reduces communication costs. Results: A prototype of the middleware solution and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build a grid infrastructure based on GNU/Linux grid nodes. A test was carried out, and results are shown for the named-entity-recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion: In this paper we discuss the development of a grid application based on a middleware solution.
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Database computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities. PMID:19534749
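The speed-up factor used to evaluate the prototype is conventionally the serial runtime divided by the parallel runtime; the runtimes and node count below are hypothetical, not the paper's measurements.

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Classic speed-up factor: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_nodes: int) -> float:
    """Speed-up normalized by the number of grid nodes (1.0 = ideal)."""
    return speedup(t_serial, t_parallel) / n_nodes

# Hypothetical: a KDT job taking 400 s serially finishes in 60 s on 8 nodes.
print(round(speedup(400, 60), 2))        # 6.67
print(round(efficiency(400, 60, 8), 2))  # 0.83
```

Efficiency below 1.0 reflects exactly the communication costs the transfer optimizer in this middleware is designed to reduce.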
Impact of computational structure-based methods on drug discovery.
Reynolds, Charles H
2014-01-01
Structure-based drug design has become an indispensable tool in drug discovery. The emergence of structure-based design is due to gains in structural biology that have provided exponential growth in the number of protein crystal structures, new computational algorithms and approaches for modeling protein-ligand interactions, and the tremendous growth of raw computer power in the last 30 years. Computer modeling and simulation have made major contributions to the discovery of many groundbreaking drugs in recent years. Examples are presented that highlight the evolution of computational structure-based design methodology, and the impact of that methodology on drug discovery.
Peetla, Chiranjeevi; Stine, Andrew; Labhasetwar, Vinod
2009-01-01
The transport of drugs or drug delivery systems across the cell membrane is a complex biological process, often difficult to understand because of its dynamic nature. In this regard, model lipid membranes, which mimic many aspects of cell-membrane lipids, have been very useful in helping investigators to discern the roles of lipids in cellular interactions. One can use drug-lipid interactions to predict pharmacokinetic properties of drugs, such as their transport, biodistribution, accumulation, and hence efficacy. These interactions can also be used to study the mechanisms of transport, based on the structure and hydrophilicity/hydrophobicity of drug molecules. In recent years, model lipid membranes have also been explored to understand their mechanisms of interactions with peptides, polymers, and nanocarriers. These interaction studies can be used to design and develop efficient drug delivery systems. Changes in the lipid composition of cells and tissue in certain disease conditions may alter biophysical interactions, which could be explored to develop target-specific drugs and drug delivery systems. In this review, we discuss different model membranes, drug-lipid interactions and their significance, studies of model membrane interactions with nanocarriers, and how biophysical interaction studies with lipid model membranes could play an important role in drug discovery and drug delivery. PMID:19432455
Assessment of cardiovascular risk based on a data-driven knowledge discovery approach.
Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Cabiddu, R; Morais, J
2015-01-01
The cardioRisk project addresses the development of personalized risk assessment tools for patients admitted to the hospital with acute myocardial infarction. Although models are available that assess the short-term risk of death or new events for such patients, these models were established in circumstances that do not take into account present clinical interventions, and in some cases the risk factors they use are not easily available in clinical practice. The integration of existing risk tools (applied in clinicians' daily practice) with data-driven knowledge discovery mechanisms based on data routinely collected during hospitalizations would be a breakthrough in overcoming some of these difficulties. In this context, the development of simple and interpretable models (based on recent datasets) will unquestionably facilitate this integration process and build confidence in it. In this work, a simple and interpretable model based on a real dataset is proposed. It consists of a decision tree structure that uses a reduced set of six binary risk factors. The validation is performed using a recent dataset provided by the Portuguese Society of Cardiology (11,113 patients), which originally comprised 77 risk factors. A sensitivity, specificity, and accuracy of 80.42%, 77.25%, and 78.80%, respectively, were achieved, showing the effectiveness of the approach.
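The three reported metrics are standard functions of a binary classifier's confusion matrix. As a sketch with a hypothetical confusion matrix (the paper's actual counts are not given in the abstract):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true positives among all actual positives (recall)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of true negatives among all actual negatives."""
    return tn / (tn + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion matrix (illustrative only, not the paper's data):
tp, fn, tn, fp = 160, 40, 620, 180
print(f"sensitivity={sensitivity(tp, fn):.2%}")       # 80.00%
print(f"specificity={specificity(tn, fp):.2%}")       # 77.50%
print(f"accuracy={accuracy(tp, tn, fp, fn):.2%}")     # 78.00%
```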
The Relation between Prior Knowledge and Students' Collaborative Discovery Learning Processes
ERIC Educational Resources Information Center
Gijlers, Hannie; de Jong, Ton
2005-01-01
In this study we investigate how prior knowledge influences knowledge development during collaborative discovery learning. Fifteen dyads of students (pre-university education, 15-16 years old) worked on a discovery learning task in the physics field of kinematics. The (face-to-face) communication between students was recorded and the interaction…
Geospatial Crypto Reconnaissance: A Campus Self-Discovery Game
ERIC Educational Resources Information Center
Lallie, Harjinder Singh
2015-01-01
Campus discovery is an important feature of a university student induction process. Approaches towards campus discovery differ from course to course and can comprise guided tours that are often lengthy and uninspiring, or self-guided tours that run the risk of students failing to complete them. This paper describes a campus self-discovery…
Casadevall, Arturo; Fang, Ferric C.
2015-01-01
In contrast to many other human endeavors, science pays little attention to its history. Fundamental scientific discoveries are often considered to be timeless and independent of how they were made. Science and the history of science are regarded as independent academic disciplines. Although most scientists are aware of great discoveries in their fields and their association with the names of individual scientists, few know the detailed stories behind the discoveries. Indeed, the history of scientific discovery is sometimes recorded only in informal accounts that may be inaccurate or biased for self-serving reasons. Scientific papers are generally written in a formulaic style that bears no relationship to the actual process of discovery. Here we examine why scientists should care more about the history of science. A better understanding of history can illuminate social influences on the scientific process, allow scientists to learn from previous errors, and provide a greater appreciation for the importance of serendipity in scientific discovery. Moreover, history can help to assign credit where it is due and call attention to evolving ethical standards in science. History can make science better. PMID:26371119
The Prehistory of Discovery: Precursors of Representational Change in Solving Gear System Problems.
ERIC Educational Resources Information Center
Dixon, James A.; Bangert, Ashley S.
2002-01-01
This study investigated whether the process of representational change undergoes developmental change or different processes occupy different niches in the course of knowledge acquisition. Subjects--college, third-, and sixth-grade students--solved gear system problems over two sessions. Findings indicated that for all grades, discovery of the…
Workshop on Discovery Lessons-Learned
NASA Technical Reports Server (NTRS)
Saunders, M. (Editor)
1995-01-01
As part of the Discovery Program's continuous improvement effort, a Discovery Program Lessons-Learned workshop was designed to review how well the Discovery Program is moving toward its goal of providing low-cost research opportunities to the planetary science community while ensuring continued U.S. leadership in solar system exploration. The principal focus of the workshop was on the recently completed Announcement of Opportunity (AO) cycle, but the program direction and program management were also open to comment. The objective of the workshop was to identify both the strengths and weaknesses of the process up to this point, with the goal of improving the process for the next AO cycle. The process for initializing the workshop was to solicit comments from the communities involved in the program and to use the feedback as the basis for establishing the workshop agenda. The following four sessions were developed after reviewing and synthesizing both the formal feedback received and informal feedback obtained during discussions with various participants: (1) Science and Return on Investment; (2) Technology vs. Risk: Mission Success and Other Factors; (3) Cost; and (4) AO Process Changes and Program Management.
A Thematic Analysis of Theoretical Models for Translational Science in Nursing: Mapping the Field
Mitchell, Sandra A.; Fisher, Cheryl A.; Hastings, Clare E.; Silverman, Leanne B.; Wallen, Gwenyth R.
2010-01-01
Background: The quantity and diversity of conceptual models in translational science may complicate rather than advance the use of theory. Purpose: This paper offers a comparative thematic analysis of the models available to inform knowledge development, transfer, and utilization. Method: Literature searches identified 47 models for knowledge translation. Four thematic areas emerged: (1) evidence-based practice and knowledge transformation processes; (2) strategic change to promote adoption of new knowledge; (3) knowledge exchange and synthesis for application and inquiry; (4) designing and interpreting dissemination research. Discussion: This analysis distinguishes the contributions made by leaders and researchers at each phase in the process of discovery, development, and service delivery. It also informs the selection of models to guide activities in knowledge translation. Conclusions: A flexible theoretical stance is essential to simultaneously develop new knowledge and accelerate the translation of that knowledge into practice behaviors and programs of care that support optimal patient outcomes. PMID:21074646
BayesMotif: de novo protein sorting motif discovery from impure datasets.
Hu, Jianjun; Zhang, Fan
2010-01-18
Protein sorting is the process by which newly synthesized proteins are transported to their target locations within or outside of the cell. This process is precisely regulated by protein sorting signals in different forms. A major category of sorting signals consists of amino acid subsequences usually located at the N-terminals or C-terminals of protein sequences. Genome-wide experimental identification of protein sorting signals is extremely time-consuming and costly, so effective computational algorithms for de novo discovery of protein sorting signals are needed to improve the understanding of protein sorting mechanisms. We formulated the protein sorting motif discovery problem as a classification problem and proposed a Bayesian-classifier-based algorithm (BayesMotif) for de novo identification of a common type of protein sorting motif in which a highly conserved anchor is present along with a less conserved motif region. A false-positive removal procedure is developed to iteratively remove sequences that are unlikely to contain true motifs, so that the algorithm can identify motifs from impure input sequences. Experiments on both implanted-motif datasets and real-world datasets showed that the enhanced BayesMotif algorithm can identify anchored sorting motifs from pure or impure protein sequence datasets. They also show that the false-positive removal procedure helps to identify true motifs even when only 20% of the input sequences contain true motif instances. We proposed BayesMotif, a novel Bayesian-classification-based algorithm for de novo discovery of a special category of anchored protein sorting motifs from impure datasets. Compared to conventional motif discovery algorithms such as MEME, our algorithm can find less-conserved motifs with short, highly conserved anchors.
Our algorithm also has the advantage of easily incorporating additional meta-sequence features, such as hydrophobicity or charge of the motifs, which may help to overcome the limitations of the PWM (position weight matrix) motif model.
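The abstract contrasts BayesMotif with PWM-based methods. As a minimal sketch of the PWM idea it improves on (toy alphabet and probabilities, not BayesMotif itself), a sequence window is scored by its log-likelihood ratio under the motif model versus a background residue distribution:

```python
import math

def log_odds(window: str, pwm: list, background: dict) -> float:
    """Log-likelihood ratio of a window under a position weight matrix
    versus a background distribution (positive = motif-like)."""
    return sum(math.log(pwm[i][c] / background[c])
               for i, c in enumerate(window))

# Toy two-position motif over a two-letter alphabet (illustrative only).
pwm = [{"A": 0.9, "C": 0.1}, {"A": 0.2, "C": 0.8}]
background = {"A": 0.5, "C": 0.5}

print(log_odds("AC", pwm, background) > 0)  # True: matches the motif
print(log_odds("CA", pwm, background) > 0)  # False: background-like
```

A PWM scores every position independently, which is why, as the abstract notes, it struggles with motifs that are weakly conserved overall but contain a short, strongly conserved anchor.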
Surveillance theory applied to virus detection: a case for targeted discovery
Bogich, Tiffany L.; Anthony, Simon J.; Nichols, James D.
2013-01-01
Virus detection and mathematical modeling have gone through rapid developments in the past decade. Both offer new insights into the epidemiology of infectious disease and characterization of future risk; however, modeling has not yet been applied to designing the best surveillance strategies for viral and pathogen discovery. We review recent developments and propose methods to integrate viral and pathogen discovery and mathematical modeling through optimal surveillance theory, arguing for a more targeted approach to novel virus detection guided by the principles of adaptive management and structured decision-making.
NASA Astrophysics Data System (ADS)
Camargo-Molina, José Eliel; Mandal, Tanumoy; Pasechnik, Roman; Wessén, Jonas
2018-03-01
We describe a class of three Higgs doublet models (3HDMs) with a softly broken U(1) × U(1) family symmetry that enforces a Cabibbo-like quark mixing while forbidding tree-level flavour changing neutral currents. The hierarchy in the observed quark masses is partly explained by a softer hierarchy in the vacuum expectation values of the three Higgs doublets. As a consequence, the physical scalar spectrum contains a Standard Model (SM) like Higgs boson $h_{125}$ while exotic scalars couple the strongest to the second quark family, leading to rather unconventional discovery channels that could be probed at the Large Hadron Collider. In particular, we describe a search strategy for the lightest charged Higgs boson $H^\pm$, through the process $c\bar{s} \to H^+ \to W^+ h_{125}$, using a multivariate analysis that leads to an excellent discriminatory power against the SM background. Although the analysis is applied to the proposed class of 3HDMs, we employ a model-independent formulation such that it can be applied to any other model with the same discovery channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Jason E.; Wang, Jing; Mitchell, Hugh D.
2013-01-01
The advent of high-throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities for both purely statistical and expert knowledge-based approaches and would benefit from improved integration of the two. Areas covered: In this review we present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered. We then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion: Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to biomarker discovery and characterization are key to future success in the biomarker field. We describe our recommendations of possible approaches to this problem, including metrics for the evaluation of biomarkers.
Towards Robot Scientists for autonomous scientific discovery
Sparkes, Andrew; Aubrey, Wayne; Byrne, Emma; Clare, Amanda; Khan, Muhammed N; Liakata, Maria; Markham, Magdalena; Rowland, Jem; Soldatova, Larisa N; Whelan, Kenneth E; Young, Michael; King, Ross D
2010-01-01
We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two prototype Robot Scientists: Adam and Eve. Adam has recently proven the potential of such systems by identifying twelve genes responsible for catalysing specific reactions in the metabolic pathways of the yeast Saccharomyces cerevisiae. This work has been formally recorded in great detail using logic. We argue that the reporting of science needs to become fully formalised and that Robot Scientists can help achieve this. This will make scientific information more reproducible and reusable, and promote the integration of computers in scientific reasoning. We believe the greater automation of both the physical and intellectual aspects of scientific investigations to be essential to the future of science. Greater automation improves the accuracy and reliability of experiments, increases the pace of discovery and, in common with conventional laboratory automation, removes tedious and repetitive tasks from the human scientist. PMID:20119518
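The hypothesize-experiment-analyse cycle described in this abstract can be sketched as a loop that prunes hypotheses inconsistent with new data. All callables below are hypothetical placeholders, not the actual Adam/Eve software:

```python
def discovery_cycle(generate_hypotheses, design_experiment, run_experiment,
                    consistent, max_cycles: int = 10):
    """Sketch of a Robot Scientist loop: generate hypotheses, choose an
    experiment, run it, and keep only hypotheses consistent with the data."""
    hypotheses = generate_hypotheses()
    for _ in range(max_cycles):
        experiment = design_experiment(hypotheses)
        data = run_experiment(experiment)
        hypotheses = [h for h in hypotheses if consistent(h, data)]
        if len(hypotheses) <= 1:
            break  # a single surviving hypothesis (or none) ends the cycle
    return hypotheses

# Toy run: three candidate hypotheses, the experiment reveals the true one.
survivors = discovery_cycle(
    generate_hypotheses=lambda: ["h1", "h2", "h3"],
    design_experiment=lambda hs: hs[0],
    run_experiment=lambda e: "h2",
    consistent=lambda h, data: h == data,
)
print(survivors)  # ['h2']
```

In the real systems, experiment design is itself an optimization (choosing the cheapest experiment expected to discriminate best between hypotheses), and every step is recorded formally in logic, as the abstract emphasizes.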
NASA Astrophysics Data System (ADS)
Bai, Peng; Jeon, Mi Young; Ren, Limin; Knight, Chris; Deem, Michael W.; Tsapatsis, Michael; Siepmann, J. Ilja
2015-01-01
Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure. To date, 213 framework types have been synthesized and >330,000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modelling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds.
Community-Reviewed Biological Network Models for Toxicology and Drug Discovery Applications
Namasivayam, Aishwarya Alex; Morales, Alejandro Ferreiro; Lacave, Ángela María Fajardo; Tallam, Aravind; Simovic, Borislav; Alfaro, David Garrido; Bobbili, Dheeraj Reddy; Martin, Florian; Androsova, Ganna; Shvydchenko, Irina; Park, Jennifer; Calvo, Jorge Val; Hoeng, Julia; Peitsch, Manuel C.; Racero, Manuel González Vélez; Biryukov, Maria; Talikka, Marja; Pérez, Modesto Berraquero; Rohatgi, Neha; Díaz-Díaz, Noberto; Mandarapu, Rajesh; Ruiz, Rubén Amián; Davidyan, Sergey; Narayanasamy, Shaman; Boué, Stéphanie; Guryanova, Svetlana; Arbas, Susana Martínez; Menon, Swapna; Xiang, Yang
2016-01-01
Biological network models offer a framework for understanding disease by describing the relationships between the mechanisms involved in the regulation of biological processes. Crowdsourcing can efficiently gather feedback from a wide audience with varying expertise. In the Network Verification Challenge, scientists verified and enhanced a set of 46 biological networks relevant to lung and chronic obstructive pulmonary disease. The networks were built using Biological Expression Language and contain detailed information for each node and edge, including supporting evidence from the literature. Network scoring of public transcriptomics data inferred perturbation of a subset of mechanisms and networks that matched the measured outcomes. These results, based on a computable network approach, can be used to identify novel mechanisms activated in disease, quantitatively compare different treatments and time points, and allow for assessment of data with low signal. These networks are periodically verified by the crowd to maintain an up-to-date suite of networks for toxicology and drug discovery applications. PMID:27429547
Liu, Ling; Huang, Jin-Sha; Han, Chao; Zhang, Guo-Xin; Xu, Xiao-Yun; Shen, Yan; Li, Jie; Jiang, Hai-Yang; Lin, Zhi-Cheng; Xiong, Nian; Wang, Tao
2016-12-01
Huntington's disease (HD) is an incurable neurodegenerative disorder that is characterized by motor dysfunction, cognitive impairment, and behavioral abnormalities. It is an autosomal dominant disorder caused by a CAG repeat expansion in the huntingtin gene, resulting in progressive neuronal loss predominately in the striatum and cortex. Despite the discovery of the causative gene in 1993, the exact mechanisms underlying HD pathogenesis have yet to be elucidated. Treatments that slow or halt the disease process are currently unavailable. Recent advances in induced pluripotent stem cell (iPSC) technologies have transformed our ability to study disease in human neural cells. Here, we first review the progress made in modeling HD in vitro using patient-derived iPSCs, which reveals unique insights into the underlying molecular mechanisms and provides a novel human cell-based platform for drug discovery. We then highlight the promises and challenges of pluripotent stem cells as a therapeutic source for cell replacement therapy of the lost neurons in HD brains.
Discovering latent commercial networks from online financial news articles
NASA Astrophysics Data System (ADS)
Xia, Yunqing; Su, Weifeng; Lau, Raymond Y. K.; Liu, Yi
2013-08-01
Unlike most online social networks where explicit links among individual users are defined, the relations among commercial entities (e.g. firms) may not be explicitly declared in commercial Web sites. One main contribution of this article is the development of a novel computational model for the discovery of the latent relations among commercial entities from online financial news. More specifically, a CRF model which can exploit both structural and contextual features is applied to commercial entity recognition. In addition, a point-wise mutual information (PMI)-based unsupervised learning method is developed for commercial relation identification. To evaluate the effectiveness of the proposed computational methods, a prototype system called CoNet has been developed. Based on the financial news articles crawled from Google finance, the CoNet system achieves average F-scores of 0.681 and 0.754 in commercial entity recognition and commercial relation identification, respectively. Our experimental results confirm that the proposed shallow natural language processing methods are effective for the discovery of latent commercial networks from online financial news.
Boritz, Tali Z; Bryntwick, Emily; Angus, Lynne; Greenberg, Leslie S; Constantino, Michael J
2014-01-01
While the individual contributions of narrative and emotion processes to psychotherapy outcome have been the focus of recent interest in the psychotherapy research literature, the empirical analysis of narrative and emotion integration has rarely been addressed. The Narrative-Emotion Processes Coding System (NEPCS) was developed to provide researchers with a systematic method for identifying specific narrative and emotion process markers, for application to therapy session videos. The present study examined the relationship between NEPCS-derived problem markers (same old storytelling, empty storytelling, unstoried emotion, abstract storytelling) and change markers (competing plotlines storytelling, inchoate storytelling, unexpected outcome storytelling, and discovery storytelling) on the one hand, and treatment outcome (recovered versus unchanged at therapy termination) and stage of therapy (early, middle, late) on the other, in brief emotion-focused (EFT), client-centred (CCT), and cognitive (CT) therapies for depression. Hierarchical linear modelling analyses demonstrated a significant Outcome effect for inchoate storytelling (p = .037) and discovery storytelling (p = .002), a Stage × Outcome effect for abstract storytelling (p = .05), and a Stage × Outcome × Treatment effect for competing plotlines storytelling (p = .001). There was also a significant Stage × Outcome effect for NEPCS problem markers (p = .007) and change markers (p = .03). The results provide preliminary support for the importance of assessing the contribution of narrative-emotion processes to efficacious treatment outcomes in EFT, CCT, and CT treatments of depression.
Heifetz, Alexander; Southey, Michelle; Morao, Inaki; Townsend-Nicholson, Andrea; Bodkin, Mike J
2018-01-01
GPCR modeling approaches are widely used in the hit-to-lead (H2L) and lead optimization (LO) stages of drug discovery. The aims of these modeling approaches are to predict the 3D structures of the receptor-ligand complexes, to explore the key interactions between the receptor and the ligand and to utilize these insights in the design of new molecules with improved binding, selectivity or other pharmacological properties. In this book chapter, we present a brief survey of key computational approaches integrated with hierarchical GPCR modeling protocol (HGMP) used in hit-to-lead (H2L) and in lead optimization (LO) stages of structure-based drug discovery (SBDD). We outline the differences in modeling strategies used in H2L and LO of SBDD and illustrate how these tools have been applied in three drug discovery projects.
Janero, David R
2014-08-01
Technology often serves as a handmaiden and catalyst of invention. The discovery of safe, effective medications depends critically upon experimental approaches capable of providing high-impact information on the biological effects of drug candidates early in the discovery pipeline. This information can enable reliable lead identification, pharmacological compound differentiation and successful translation of research output into clinically useful therapeutics. The shallow preclinical profiling of candidate compounds promulgates a minimalistic understanding of their biological effects and undermines the level of value creation necessary for finding quality leads worth moving forward within the development pipeline with efficiency and prognostic reliability sufficient to help remediate the current pharma-industry productivity drought. Three specific technologies discussed herein, in addition to experimental areas intimately associated with contemporary drug discovery, appear to hold particular promise for strengthening the preclinical valuation of drug candidates by deepening lead characterization. These are: i) hydrogen-deuterium exchange mass spectrometry for characterizing structural and ligand-interaction dynamics of disease-relevant proteins; ii) activity-based chemoproteomics for profiling the functional diversity of mammalian proteomes; and iii) nuclease-mediated precision gene editing for developing more translatable cellular and in vivo models of human diseases. When applied in an informed manner congruent with the clinical understanding of disease processes, technologies such as these that span levels of biological organization can serve as valuable enablers of drug discovery and potentially contribute to reducing the current, unacceptably high rates of compound clinical failure.
Oguz, Cihan; Sen, Shurjo K; Davis, Adam R; Fu, Yi-Ping; O'Donnell, Christopher J; Gibbons, Gary H
2017-10-26
One goal of personalized medicine is leveraging the emerging tools of data science to guide medical decision-making. Achieving this using disparate data sources is most daunting for polygenic traits. To this end, we employed random forests (RFs) and neural networks (NNs) for predictive modeling of coronary artery calcium (CAC), which is an intermediate endophenotype of coronary artery disease (CAD). Model inputs were derived from advanced cases in the ClinSeq® discovery cohort (n=16) and the FHS replication cohort (n=36) from the 89th-99th CAC score percentile range, and age-matched controls (ClinSeq® n=16, FHS n=36) with no detectable CAC (all subjects were Caucasian males). These inputs included clinical variables and genotypes of 56 single nucleotide polymorphisms (SNPs) ranked highest in terms of their nominal correlation with the advanced CAC state in the discovery cohort. Predictive performance was assessed by computing the areas under receiver operating characteristic curves (ROC-AUC). RF models trained and tested with clinical variables generated ROC-AUC values of 0.69 and 0.61 in the discovery and replication cohorts, respectively. In contrast, in both cohorts, the set of SNPs derived from the discovery cohort was highly predictive (ROC-AUC ≥0.85), with no significant change in predictive performance upon integration of clinical and genotype variables. Using the 21 SNPs that produced optimal predictive performance in both cohorts, we developed NN models trained with ClinSeq® data and tested with FHS data and obtained high predictive accuracy (ROC-AUC=0.80-0.85) with several topologies. Several CAD and "vascular aging" related biological processes were enriched in the network of genes constructed from the predictive SNPs. We identified a molecular network predictive of advanced coronary calcium using genotype data from ClinSeq® and FHS cohorts.
Our results illustrate that machine learning tools, which utilize complex interactions between disease predictors intrinsic to the pathogenesis of polygenic disorders, hold promise for deriving predictive disease models and networks.
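The case-control genotype modeling described above can be sketched with scikit-learn's random forest and ROC-AUC. This is a minimal stand-in: the data are synthetic, and the cohort sizes, SNP selection procedure, and model settings of the actual ClinSeq®/FHS study are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a discovery/replication design: a small
# case-control cohort with "SNP" genotypes coded 0/1/2, of which only
# a few variants carry signal for case status.
n, n_snps = 120, 21
genotypes = rng.integers(0, 3, size=(n, n_snps)).astype(float)
risk = genotypes[:, :5].sum(axis=1)              # first 5 SNPs are informative
labels = (risk + rng.normal(0, 1.0, n) > np.median(risk)).astype(int)

# Train on a "discovery" half, evaluate on a "replication" half.
half = n // 2
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(genotypes[:half], labels[:half])
probs = model.predict_proba(genotypes[half:])[:, 1]
auc = roc_auc_score(labels[half:], probs)
print(f"replication ROC-AUC: {auc:.2f}")
```

Evaluating on a held-out "replication" split, as above, is what distinguishes genuine predictive performance from overfitting to the discovery cohort.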
van Dijk, Marjolein J A M; Claassen, Tom; Suwartono, Christiany; van der Veld, William M; van der Heijden, Paul T; Hendriks, Marc P H
Since the publication of the WAIS-IV in the U.S. in 2008, efforts have been made to explore its structural validity by applying factor analysis to various samples. This study aims to achieve a more fine-grained understanding of the structure of the Dutch-language version of the WAIS-IV (WAIS-IV-NL) by applying an alternative analysis based on causal modeling in addition to confirmatory factor analysis (CFA). The Bayesian Constraint-based Causal Discovery (BCCD) algorithm learns underlying network structures directly from data and can assess more complex structures than is possible with factor analysis. WAIS-IV-NL profiles of two clinical samples of 202 patients (i.e. patients with temporal lobe epilepsy and a mixed psychiatric outpatient group) were analyzed and contrasted with a matched control group (N = 202) selected from the Dutch standardization sample of the WAIS-IV-NL to investigate internal structure by means of CFA and BCCD. With CFA, the four-factor structure proposed by Wechsler demonstrates acceptable fit in all three subsamples. However, BCCD revealed three consistent clusters (verbal comprehension, visual processing, and processing speed) in all three subsamples. The combination of Arithmetic and Digit Span as a coherent working memory factor could not be verified, and Matrix Reasoning appeared to be isolated. BCCD thus exposes some discrepancies from the proposed four-factor structure, and its results fit the Cattell-Horn-Carroll (CHC) theory of intelligence more closely. Consistent clustering patterns indicate these results are robust. The structural causal discovery approach may be helpful in better interpreting existing tests, developing new tests, and informing diagnostic practice.
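BCCD itself is a Bayesian algorithm, but the primitive that constraint-based discovery methods build on is a conditional independence test. A minimal illustration of that primitive, using partial correlation on invented latent-factor data (not WAIS-IV-NL scores): two subtests driven by a common factor are strongly correlated, but the correlation vanishes once the factor is conditioned on.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the conditioning set z.

    Constraint-based discovery algorithms use tests like this to decide
    whether an edge between two variables survives conditioning on others.
    """
    z = np.column_stack([np.ones(len(x)), z])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 5000
g = rng.normal(size=n)                     # latent common factor
subtest_a = g + 0.5 * rng.normal(size=n)   # two observed subtests
subtest_b = g + 0.5 * rng.normal(size=n)   # both driven by g

raw = np.corrcoef(subtest_a, subtest_b)[0, 1]
cond = partial_corr(subtest_a, subtest_b, g.reshape(-1, 1))
print(f"marginal r = {raw:.2f}, partial r given g = {cond:.2f}")
```

Repeating such tests over all variable pairs and conditioning sets yields the cluster structure that algorithms like BCCD report, rather than imposing a factor model up front.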
A neural model of figure-ground organization.
Craft, Edward; Schütze, Hartmut; Niebur, Ernst; von der Heydt, Rüdiger
2007-06-01
Psychophysical studies suggest that figure-ground organization is a largely autonomous process that guides--and thus precedes--allocation of attention and object recognition. The discovery of border-ownership representation in single neurons of early visual cortex has confirmed this view. Recent theoretical studies have demonstrated that border-ownership assignment can be modeled as a process of self-organization by lateral interactions within V2 cortex. However, the mechanism proposed relies on propagation of signals through horizontal fibers, which would result in increasing delays of the border-ownership signal with increasing size of the visual stimulus, in contradiction with experimental findings. It also remains unclear how the resulting border-ownership representation would interact with attention mechanisms to guide further processing. Here we present a model of border-ownership coding based on dedicated neural circuits for contour grouping that produce border-ownership assignment and also provide handles for mechanisms of selective attention. The results are consistent with neurophysiological and psychophysical findings. The model makes predictions about the hypothetical grouping circuits and the role of feedback between cortical areas.
NASA Astrophysics Data System (ADS)
Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman
2014-12-01
The process of ATP production is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several studies, there are only a few brief reviews based on mathematical models of this subject. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, based on new discoveries concerning the respiratory chain of animal mitochondria, Golfar's model has been used to generate improved results for the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated from the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.
NASA Reverb: Standards-Driven Earth Science Data and Service Discovery
NASA Astrophysics Data System (ADS)
Cechini, M. F.; Mitchell, A.; Pilone, D.
2011-12-01
NASA's Earth Observing System Data and Information System (EOSDIS) is a core capability in NASA's Earth Science Data Systems Program. NASA's EOS ClearingHOuse (ECHO) is a metadata catalog for the EOSDIS, providing a centralized catalog of data products and a registry of related data services. Working closely with the EOSDIS community, the ECHO team identified a need to develop the next-generation EOS data and service discovery tool. This development effort relied on the following principles:
+ Metadata-Driven User Interface - Users should be presented with data and service discovery capabilities based on dynamic processing of metadata describing the targeted data.
+ Integrated Data & Service Discovery - Users should be able to discover data and associated data services that facilitate their research objectives.
+ Leverage Common Standards - Users should be able to discover and invoke services that utilize common interface standards.
Metadata plays a vital role in facilitating data discovery and access. As data providers enhance their metadata, more advanced search capabilities become available, enriching a user's search experience. Maturing metadata formats such as ISO 19115 provide the depth of metadata that facilitates advanced data discovery capabilities. Data discovery and access is not limited to the retrieval of data granules, but is growing into the more complex discovery of data services. These services include, but are not limited to, services facilitating additional data discovery, subsetting, reformatting, and re-projecting. The discovery and invocation of these data services is made significantly simpler through the use of consistent and interoperable standards. By adopting a standard, standard-specific adapters can be developed to communicate with multiple services implementing a specific protocol. The emergence of metadata standards such as ISO 19119 plays a similarly important role in service discovery as ISO 19115 does in data discovery.
After a yearlong design, development, and testing process, the ECHO team successfully released "Reverb - The Next Generation Earth Science Discovery Tool." Reverb relies heavily on the information contained in dataset and granule metadata, such as ISO 19115, to provide a dynamic experience to users based on search facet values extracted from science metadata. Such an approach allows users to perform cross-dataset correlation and searches, discovering additional data that they may not previously have been aware of. In addition to data discovery, Reverb users may discover services associated with their data of interest. When services utilize supported standards and/or protocols, Reverb can facilitate the invocation of both synchronous and asynchronous data processing services. This greatly enhances a user's ability to discover data of interest and accomplish their research goals. Extrapolating from the current movement toward interoperable standards and an increase in available services, data service invocation and chaining will become a natural part of data discovery. Reverb is one example of a discovery tool that provides a mechanism for transforming the earth science data discovery paradigm.
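The metadata-driven facet mechanism described above can be sketched simply: derive facet values and counts from the records themselves, so the search interface adapts to the catalog's contents. The field names and values below are illustrative placeholders, not the actual ECHO/Reverb schema.

```python
from collections import defaultdict

# Each record stands in for an ISO 19115-style dataset description,
# reduced to a few fields (field names are invented for illustration).
records = [
    {"platform": "Terra", "instrument": "MODIS", "processing_level": "L2"},
    {"platform": "Aqua", "instrument": "MODIS", "processing_level": "L3"},
    {"platform": "Aura", "instrument": "OMI", "processing_level": "L2"},
]

def build_facets(records):
    """Derive facet -> {value: count} from metadata, so the search UI can
    be generated dynamically rather than hard-coded per dataset."""
    facets = defaultdict(lambda: defaultdict(int))
    for rec in records:
        for field, value in rec.items():
            facets[field][value] += 1
    return {field: dict(values) for field, values in facets.items()}

facets = build_facets(records)
print(facets["instrument"])  # {'MODIS': 2, 'OMI': 1}
```

Richer metadata immediately yields richer facets, which is why the abstract stresses that advanced search capabilities grow as providers enhance their metadata.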
Searching for additional Higgs bosons via Higgs cascades
NASA Astrophysics Data System (ADS)
Gao, Christina; Luty, Markus A.; Mulhearn, Michael; Neill, Nicolás A.; Wang, Zhangqier
2018-04-01
The discovery of a 125 GeV Higgs boson at the Large Hadron Collider strongly motivates direct searches for additional Higgs bosons. In a type I two Higgs doublet model there is a large region of parameter space at tan β ≳ 5 that is currently unconstrained experimentally. We show that the process gg → H → AZ → ZZh can probe this region, and can be the discovery mode for an extended Higgs sector at the LHC. We analyze nine promising decay modes for the ZZh state, and we find that the most sensitive final states are ℓℓℓℓbb, ℓℓjjbb, ℓℓννγγ, and ℓℓℓℓ + missing energy.
The Endoplasmic Reticulum-Associated Degradation Pathways of Budding Yeast
Thibault, Guillaume; Ng, Davis T.W.
2012-01-01
Protein misfolding is a common cellular event that can produce intrinsically harmful products. To reduce the risk, quality control mechanisms are deployed to detect and eliminate misfolded, aggregated, and unassembled proteins. In the secretory pathway, it is mainly the endoplasmic reticulum-associated degradation (ERAD) pathways that perform this role. Here, specialized factors are organized to monitor and process the folded states of nascent polypeptides. Despite the complex structures, topologies, and posttranslational modifications of client molecules, the ER mechanisms are the best understood among all protein quality-control systems. This is the result of convergent and sometimes serendipitous discoveries by researchers from diverse fields. Although major advances in ER quality control and ERAD came from all model organisms, this review will focus on the discoveries culminating from the simple budding yeast. PMID:23209158
2008-10-20
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility bay 3 at NASA's Kennedy Space Center in Florida, boundary layer transition, or BLT, tile is being affixed to space shuttle Discovery before its launch on the STS-119 mission in February 2009. The specially modified tiles and instrumentation package will monitor the heating effects of early re-entry boundary layer transition at high Mach numbers. These data support analytical modeling and design efforts for both the space shuttles and NASA's next-generation spacecraft, the Orion crew exploration vehicle. On the STS-119 mission, Discovery also will carry the S6 truss segment to complete the 361-foot-long backbone of the International Space Station. The truss includes the fourth pair of solar array wings and electronics that convert sunlight to power for the orbiting laboratory. Photo credit: NASA/Tim Jacobs
2008-10-20
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility bay 3 at NASA's Kennedy Space Center in Florida, workers attach boundary layer transition, or BLT, tile to space shuttle Discovery before its launch on the STS-119 mission in February 2009. The specially modified tiles and instrumentation package will monitor the heating effects of early re-entry boundary layer transition at high Mach numbers. These data support analytical modeling and design efforts for both the space shuttles and NASA's next-generation spacecraft, the Orion crew exploration vehicle. On the STS-119 mission, Discovery also will carry the S6 truss segment to complete the 361-foot-long backbone of the International Space Station. The truss includes the fourth pair of solar array wings and electronics that convert sunlight to power for the orbiting laboratory. Photo credit: NASA/Tim Jacobs
Role of Academic Drug Discovery in the Quest for New CNS Therapeutics.
Yokley, Brian H; Hartman, Matthew; Slusher, Barbara S
2017-03-15
There was a greater than 50% decline in central nervous system (CNS) drug discovery and development programs by major pharmaceutical companies from 2009 to 2014. This decline was paralleled by a rise in the number of university led drug discovery centers, many in the CNS area, and a growth in the number of public-private drug discovery partnerships. Diverse operating models have emerged as the academic drug discovery centers adapt to this changing ecosystem.
Mathematical and Computational Modeling in Complex Biological Systems.
Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang
2017-01-01
The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first survey several typical mathematical modeling approaches for biological systems at different scales and analyze in depth their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology.
2003-12-09
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, KSC employee Duane Williams prepares the blanket insulation to be installed on the body flap on orbiter Discovery. The blankets are part of the Orbiter Thermal Protection System, thermal shields to protect against temperatures as high as 3,000° Fahrenheit, which are produced during descent for landing. Discovery is scheduled to fly on mission STS-121 to the International Space Station.
Virtual drug discovery: beyond computational chemistry?
Gilardoni, Francois; Arvanites, Anthony C
2010-02-01
This editorial looks at how the fully integrated structure that performs all aspects of the drug discovery process under one company is slowly disappearing. The steps in the drug discovery paradigm have been moving steadily toward virtuality or outsourcing at various phases of product development in a company's candidate pipeline. Each step in the process, such as target identification and validation or medicinal chemistry, can be managed by scientific teams within a 'virtual' company. Organizations ranging from pharmaceutical companies to biotechnology start-ups have been quick to adopt this new research and development business strategy in order to gain flexibility, access the best technologies and technical expertise, and decrease product development costs. In today's financial climate, the term virtual drug discovery has an organizational meaning. It represents the next evolutionary step in outsourcing drug development.
Drug Discovery in Fish, Flies, and Worms
Strange, Kevin
2016-01-01
Abstract Nonmammalian model organisms such as the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the zebrafish Danio rerio provide numerous experimental advantages for drug discovery, including genetic and molecular tractability, amenability to high-throughput screening methods, reduced experimental costs, and increased experimental throughput compared to traditional mammalian models. An interdisciplinary approach that strategically combines the study of nonmammalian and mammalian animal models with diverse experimental tools has provided, and will continue to provide, deep molecular and genetic understanding of human disease and will significantly enhance the discovery and application of new therapies to treat those diseases. This review will provide an overview of C. elegans, Drosophila, and zebrafish biology and husbandry and will discuss how these models are being used for phenotype-based drug screening and for identification of drug targets and mechanisms of action. The review will also describe how these and other nonmammalian model organisms are uniquely suited for the discovery of drug-based regenerative medicine therapies. PMID:28053067
Experimental constraints from flavour changing processes and physics beyond the Standard Model.
Gersabeck, M; Gligorov, V V; Serra, N
Flavour physics has a long tradition of paving the way for direct discoveries of new particles and interactions. Results over the last decade have placed stringent bounds on the parameter space of physics beyond the Standard Model. Early results from the LHC, and its dedicated flavour factory LHCb, have further tightened these constraints and reiterate the ongoing relevance of flavour studies. The experimental status of flavour observables in the charm and beauty sectors is reviewed, covering measurements of CP violation, neutral meson mixing, and rare decays.
Chasing the long tail of environmental data: PEcAn is nuts about Brown Dog
NASA Astrophysics Data System (ADS)
Dietze, M.; Cowdery, E.; Desai, A. R.; Gardella, A.; Kelly, R.; Kooper, R.; LeBauer, D.; Mantooth, J.; McHenry, K.; Serbin, S.; Shiklomanov, A. N.; Simkins, J.; Viskari, T.; Raiho, A.
2015-12-01
The Predictive Ecosystem Analyzer (PEcAn) is an ecological modeling informatics system that manages the flows of information into and out of terrestrial biosphere models, along with provenance tracking, visualization, analysis, and model-data fusion. We are in the process of scaling the PEcAn system from one that currently supports a handful of models and system nodes to one that aims to provide bottom-up connectivity across much of the model-data integration done by the terrestrial biogeochemistry community. This talk reports on the current state of PEcAn, its data processing workflows, and the near- and long-term challenges faced. Particular emphasis will be given to the tools being developed by the Brown Dog project to make unstructured, un-curated data more accessible: the Data Access Proxy (DAP) and the Data Tilling Service (DTS). The use of the DAP to process meteorological data and the DTS to read vegetation data will be demonstrated, and other Brown Dog environmental case studies will be briefly touched on. Beyond data processing, facilitating data discovery and import into PEcAn and distributing analyses across the PEcAn network (i.e. bringing models to data) are key challenges moving forward.
ERIC Educational Resources Information Center
Liu, Chen-Chung; Don, Ping-Hsing; Chung, Chen-Wei; Lin, Shao-Jun; Chen, Gwo-Dong; Liu, Baw-Jhiune
2010-01-01
While Web discovery is usually undertaken as a solitary activity, Web co-discovery may transform Web learning activities from the isolated individual search process into interactive and collaborative knowledge exploration. Recent studies have proposed Web co-search environments on a single computer, supported by multiple one-to-one technologies.…
Communicating Our Science to Our Customers: Drug Discovery in Five Simple Experiments.
Pearson, Lesley-Anne; Foley, David William
2017-02-09
The complexities of modern drug discovery, an interdisciplinary process that often takes years and costs billions, can be extremely challenging to explain to a public audience. We present details of a 30-minute demonstrative lecture that uses well-known experiments to illustrate key concepts in drug discovery, including synthesis, assay, and metabolism.
The port side view of the Orbiter Discovery while mounted ...
The port side view of the Orbiter Discovery while mounted atop the 76-wheeled orbiter transfer system as it is being rolled from the Orbiter Processing Facility to the Vehicle Assembly Building at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
The starboard side view of the Orbiter Discovery while mounted ...
The starboard side view of the Orbiter Discovery while mounted atop the 76-wheeled orbiter transfer system as it is being rolled from the Orbiter Processing Facility to the Vehicle Assembly Building at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
ERIC Educational Resources Information Center
Hulshof, Casper D.; de Jong, Ton
2006-01-01
Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, one that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" basis. This study explores the effect of facilitating access to…
ERIC Educational Resources Information Center
Wilczek-Vera, Grazyna; Salin, Eric Dunbar
2011-01-01
An experiment on fluorescence spectroscopy suitable for an advanced analytical laboratory is presented. Its conceptual development used a combination of the expository and discovery styles. The "learn-as-you-go" and direct "hands-on" methodology applied ensures an active role for a student in the process of visualization and discovery of concepts.…
NASA Astrophysics Data System (ADS)
Fujitani, Y.; Sumino, Y.
2018-04-01
A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute contributions missing in previous studies for probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at leading order, incorporating the one-loop corrections between zero external momenta and their physical values. The discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.
BP-Broker use-cases in the UncertWeb framework
NASA Astrophysics Data System (ADS)
Roncella, Roberto; Bigagli, Lorenzo; Schulz, Michael; Stasch, Christoph; Proß, Benjamin; Jones, Richard; Santoro, Mattia
2013-04-01
The UncertWeb framework is a distributed, Web-based Information and Communication Technology (ICT) system to support scientific data modeling in the presence of uncertainty. We designed and prototyped a core component of the UncertWeb framework: the Business Process Broker. The BP-Broker implements several functionalities, such as: discovery of available processes/BPs, preprocessing of a BP into its executable form (EBP), publication of EBPs, and their execution through a workflow engine. According to the Composition-as-a-Service (CaaS) approach, the BP-Broker supports discovery and chaining of modeling resources (and processing resources in general), providing the necessary interoperability services for creating, validating, editing, storing, publishing, and executing scientific workflows. The UncertWeb project targeted several scenarios, which were used to evaluate and test the BP-Broker. The scenarios cover the following environmental application domains: biodiversity and habitat change, land use and policy modeling, local air quality forecasting, and individual activity in the environment. This work reports on the study of a number of use-cases, by means of the BP-Broker, namely:
- eHabitat use-case: implements a Monte Carlo simulation performed on a deterministic ecological model; an extended use-case supports inter-comparison of model outputs;
- FERA use-case: is composed of a set of models for predicting land-use and crop-yield response to climatic and economic change;
- NILU use-case: is composed of a probabilistic air quality forecasting model for predicting concentrations of air pollutants;
- Albatross use-case: includes two model services for simulating activity-travel patterns of individuals in time and space;
- Overlay use-case: integrates the NILU scenario with the Albatross scenario to calculate individuals' exposure to air pollutants.
Our aim was to prove the feasibility of describing composite modeling processes with a high-level, abstract notation (i.e. BPMN 2.0), and delegating the resolution of technical issues (e.g. I/O matching) as much as possible to an external service. The results of the prototyped solution indicate that this approach facilitates the integration of environmental model workflows into the standard geospatial Web Services framework (e.g. the GEOSS Common Infrastructure), mitigating its inherent complexity. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
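The eHabitat use-case wraps a deterministic ecological model in a Monte Carlo loop to propagate input uncertainty. A minimal sketch of that pattern follows; the model function and the input distributions are purely hypothetical illustrations, not the actual eHabitat model:

```python
import random
import statistics

def habitat_model(rainfall, temperature):
    # Hypothetical deterministic ecological model: returns a suitability index in [0, 1].
    return max(0.0, 1.0 - abs(rainfall - 800) / 800 - abs(temperature - 20) / 40)

def monte_carlo(model, n=10000, seed=42):
    # Draw uncertain inputs from assumed distributions and summarize the output ensemble.
    rng = random.Random(seed)
    outputs = [model(rng.gauss(800, 100), rng.gauss(20, 2)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, sd = monte_carlo(habitat_model)
```

The mean/standard-deviation pair is the uncertainty-annotated model output that a broker could then pass downstream for model inter-comparison.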
Establishing a reliable framework for harnessing the creative power of the scientific crowd.
Carter, Adrian J; Donner, Amy; Lee, Wen Hwa; Bountra, Chas
2017-02-01
Discovering new medicines is difficult and increasingly expensive. The pharmaceutical industry has responded to this challenge by embracing open innovation to access external ideas. Historically, partnerships were usually bilateral, and the drug discovery process was shrouded in secrecy. This model is rapidly changing. With the advent of the Internet, drug discovery has become more decentralised, bottom-up, and scalable than ever before. The term open innovation is now accepted as just one of many terms that capture different but overlapping levels of openness in the drug discovery process. Many pharmaceutical companies recognise the advantages of revealing some proprietary information in the form of results, chemical tools, or unsolved problems in return for valuable insights and ideas. For example, such selective revealing can take the form of openly shared chemical tools to explore new biological mechanisms or by publicly admitting what is not known in the form of an open call. The essential ingredient for addressing these problems is access to the wider scientific crowd. The business of crowdsourcing, a form of outsourcing in which individuals or organisations solicit contributions from Internet users to obtain ideas or desired services, has grown significantly to fill this need and takes many forms today. Here, we posit that open-innovation approaches are more successful when they establish a reliable framework for converting creative ideas of the scientific crowd into practice with actionable plans.
Treiber, Alexander; de Kanter, Ruben; Roch, Catherine; Gatfield, John; Boss, Christoph; von Raumer, Markus; Schindelholz, Benno; Muehlan, Clemens; van Gerven, Joop; Jenck, Francois
2017-09-01
The identification of new sleep drugs poses particular challenges in drug discovery owing to disease-specific requirements such as rapid onset of action, sleep maintenance throughout major parts of the night, and absence of residual next-day effects. Robust tools to estimate drug levels in human brain are therefore key for a successful discovery program. Animal models constitute an appropriate choice for drugs without species differences in receptor pharmacology or pharmacokinetics. Translation to man becomes more challenging when interspecies differences are prominent. This report describes the discovery of the dual orexin receptor 1 and 2 (OX1 and OX2) antagonist ACT-541468 out of a class of structurally related compounds, by use of physiology-based pharmacokinetic and pharmacodynamic (PBPK-PD) modeling applied early in drug discovery. Although all drug candidates exhibited similar target receptor potencies and efficacy in a rat sleep model, they exhibited large interspecies differences in key factors determining their pharmacokinetic profile. Human PK models were built on the basis of in vitro metabolism and physicochemical data and were then used to predict the time course of OX2 receptor occupancy in brain. An active ACT-541468 dose of 25 mg was estimated on the basis of OX2 receptor occupancy thresholds of about 65% derived from clinical data for two other orexin antagonists, almorexant and suvorexant. Modeling predictions for ACT-541468 in man were largely confirmed in a single-ascending-dose trial in healthy subjects. PBPK-PD modeling applied early in drug discovery, therefore, has great potential to assist in the identification of drug molecules when specific pharmacokinetic and pharmacodynamic requirements need to be met. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
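The occupancy-threshold reasoning in this abstract can be illustrated with a toy one-compartment oral PK curve driving a simple Emax occupancy relation. All parameter values below are illustrative assumptions, not ACT-541468 data:

```python
import math

def concentration(t, dose=25.0, ka=1.0, ke=0.1, vd=50.0):
    # One-compartment oral PK with first-order absorption (Bateman function).
    # t in hours; ka/ke are absorption/elimination rate constants; vd is volume of distribution.
    return (dose / vd) * (ka / (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def occupancy(c, ec50=0.1):
    # Simple Emax model: fraction of receptors occupied at free concentration c.
    return c / (c + ec50)

# Hours after dosing during which predicted occupancy stays at or above a 65% threshold.
hours_above = [t for t in range(24) if occupancy(concentration(t)) >= 0.65]
```

Sweeping the dose until `hours_above` covers the desired sleep-maintenance window mimics, in caricature, the dose-selection logic described above.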
General view from outside the Orbiter Processing Facility at the ...
General view from outside the Orbiter Processing Facility at the Kennedy Space Center with the bay doors open as the Orbiter Discovery is atop the transport vehicle prepared to be moved over to the Vehicle Assembly Building. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Knowledge Discovery Process: Case Study of RNAV Adherence of Radar Track Data
NASA Technical Reports Server (NTRS)
Matthews, Bryan
2018-01-01
This talk is an introduction to the knowledge discovery process, beginning with: identifying the problem, choosing data sources, matching the appropriate machine learning tools, and reviewing the results. The overview will be given in the context of an ongoing study that is assessing RNAV adherence of commercial aircraft in the national airspace.
The Relation of Learners' Motivation with the Process of Collaborative Scientific Discovery Learning
ERIC Educational Resources Information Center
Saab, Nadira; van Joolingen, Wouter R.; van Hout-Wolters, B. H. A. M.
2009-01-01
In this study, we investigated the influence of individual learners' motivation on the collaborative discovery learning process. We distinguished the motivation of the individual learners and attended to the composition of groups, which could be homogeneous or heterogeneous in terms of motivation. The study involved 73 dyads of 10th-grade…
Recombinant organisms for production of industrial products
Adrio, Jose-Luis
2010-01-01
A revolution in industrial microbiology was sparked by the discovery of the double-stranded structure of DNA and the development of recombinant DNA technology. Traditional industrial microbiology was merged with molecular biology to yield improved recombinant processes for the industrial production of primary and secondary metabolites, protein biopharmaceuticals and industrial enzymes. Novel genetic techniques such as metabolic engineering, combinatorial biosynthesis and molecular breeding techniques and their modifications are contributing greatly to the development of improved industrial processes. In addition, functional genomics, proteomics and metabolomics are being exploited for the discovery of novel valuable small molecules for medicine as well as enzymes for catalysis. The sequencing of industrial microbial genomes is being carried out, which bodes well for future process improvement and discovery of new industrial products. PMID:21326937
Basavaraj, S; Betageri, Guru V.
2014-01-01
Drug discovery and development has become a longer and costlier process. The fear of failure and a stringent regulatory review process are driving pharmaceutical companies towards “me too” drugs and improved generics (505(b)(2) filings). The discontinuation of molecules in late-stage clinical trials has become common in recent years. Molecules are withdrawn at various stages of the discovery and development process for reasons such as poor ADME properties, lack of efficacy, and safety concerns. Hence this review focuses on possible applications of formulation and drug delivery to salvage molecules and improve their drugability. Formulation and drug-delivery technologies suitable for addressing the various issues contributing to attrition are discussed in detail. PMID:26579359
A renaissance of neural networks in drug discovery.
Baskin, Igor I; Winkler, David; Tetko, Igor V
2016-08-01
Neural networks are becoming a very popular method for solving machine learning and artificial intelligence problems. The variety of neural network types and their application to drug discovery requires expert knowledge to choose the most appropriate approach. In this review, the authors discuss traditional and newly emerging neural network approaches to drug discovery. Their focus is on backpropagation neural networks and their variants, self-organizing maps and associated methods, and a relatively new technique, deep learning. The most important technical issues are discussed, including overfitting and its prevention through regularization, ensemble and multitask modeling, model interpretation, and estimation of applicability domain. Different aspects of using neural networks in drug discovery are considered: building structure-activity models with respect to various targets; predicting drug selectivity, toxicity profiles, ADMET and physicochemical properties; characteristics of drug-delivery systems; and virtual screening. Neural networks continue to grow in importance for drug discovery. Recent developments in deep learning suggest further improvements may be gained in the analysis of large chemical data sets. It is anticipated that neural networks will be more widely used in drug discovery in the future, and applied in non-traditional areas such as drug delivery systems, biologically compatible materials, and regenerative medicine.
MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.
He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper
2018-07-26
Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, it is a widely held viewpoint that one of the biggest challenges is easy-to-use data processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, a user-friendly and full-featured MSI software package named MassImager, comprising three subsystems (Solution, Visualization and Intelligence), is developed, focusing on interactive visualization, in-situ biomarker discovery and artificial-intelligence-based pathological diagnosis. Simplified data preprocessing and high-throughput MSI data exchange and serialization jointly guarantee quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse self-defined operations for visual processing, including multiple-ion visualization, multiple-channel superposition, image normalization, visual resolution enhancement and image filtering. Regions-of-interest analysis can be performed precisely through interactive visualization between the ion images and mass spectra, aided by an overlaid optical-image guide, to directly find region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, which shows great potential for visual in-situ biomarker discovery and artificial-intelligence-based pathological diagnosis of cancer. All these features are integrated in MassImager to provide a deep MSI processing solution at the in-situ metabolomics level for biomarker discovery and future clinical pathological diagnosis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
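Among the visual-processing operations listed, image normalization is commonly implemented as total-ion-current (TIC) normalization: each pixel's intensity for a given ion is divided by that pixel's total ion current so images are comparable across the tissue. A minimal sketch under that assumption (the nested-list data layout is hypothetical, not MassImager's internal format):

```python
def tic_normalize(ion_image, tic_image):
    # Divide each pixel's intensity for one ion by that pixel's total ion current;
    # pixels with zero TIC (no signal) are set to 0.0 to avoid division by zero.
    return [
        [ion / tic if tic > 0 else 0.0 for ion, tic in zip(ion_row, tic_row)]
        for ion_row, tic_row in zip(ion_image, tic_image)
    ]

ion = [[10.0, 0.0], [5.0, 20.0]]    # intensity of one ion at each pixel
tic = [[100.0, 0.0], [50.0, 40.0]]  # total ion current at each pixel
norm = tic_normalize(ion, tic)
```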
Pereira, Claudia V; Nadanaciva, Sashi; Oliveira, Paulo J; Will, Yvonne
2012-02-01
Nowadays the 'redox hypothesis' is based on the fact that thiol/disulfide couples such as glutathione (GSH/GSSG), cysteine (Cys/CySS) and thioredoxin (Trx-(SH)2/Trx-SS) are functionally organized in redox circuits controlled by glutathione pools, thioredoxins and other control nodes, and that they are not in equilibrium relative to each other. Although ROS can be important intermediates of cellular signaling pathways, disturbances in the normal cellular redox state can result in widespread damage to several cell components. Moreover, oxidative stress has been linked to a variety of age-related diseases. In recent years, oxidative stress has also been identified as contributing to drug-induced liver, heart, renal and brain toxicity. This review provides an overview of current in vitro and in vivo methods that can be deployed throughout the drug discovery process. In addition, animal models and noninvasive biomarkers are described. Reducing post-market drug withdrawals is essential for all pharmaceutical companies in a time of increased patient welfare and tight budgets. Predictive screens positioned early in the drug discovery process will help to reduce such liabilities. Although new and more efficient assays and models are being developed, the hunt for biomarkers and noninvasive techniques is still in progress.
Phenotypic screening in cancer drug discovery - past, present and future.
Moffat, John G; Rudolph, Joachim; Bailey, David
2014-08-01
There has been a resurgence of interest in the use of phenotypic screens in drug discovery as an alternative to target-focused approaches. Given that oncology is currently the most active therapeutic area, and also one in which target-focused approaches have been particularly prominent in the past two decades, we investigated the contribution of phenotypic assays to oncology drug discovery by analysing the origins of all new small-molecule cancer drugs approved by the US Food and Drug Administration (FDA) over the past 15 years and those currently in clinical development. Although the majority of these drugs originated from target-based discovery, we identified a significant number whose discovery depended on phenotypic screening approaches. We postulate that the contribution of phenotypic screening to cancer drug discovery has been hampered by a reliance on 'classical' nonspecific drug effects such as cytotoxicity and mitotic arrest, exacerbated by a paucity of mechanistically defined cellular models for therapeutically translatable cancer phenotypes. However, technical and biological advances that enable such mechanistically informed phenotypic models have the potential to empower phenotypic drug discovery in oncology.
NASA Astrophysics Data System (ADS)
Tong, Wei
2017-04-01
Combinatorial material research offers fast and efficient solutions to identify promising and advanced materials. It has revolutionized the pharmaceutical industry and is now being applied to accelerate the discovery of other new compounds, e.g. superconductors, luminescent materials, catalysts, etc. Differing from the traditional trial-and-error process, this approach allows for the synthesis of a large number of compositionally diverse compounds by varying the combinations of the components and adjusting their ratios. It largely reduces the cost of single-sample synthesis/characterization, along with the turnaround time in the material discovery process, and therefore could dramatically change the existing paradigm for discovering and commercializing new materials. This talk outlines the use of the combinatorial materials approach to material discovery in the transportation sector. It covers a general introduction to the combinatorial material concept and the state of the art of its application in energy-related research. At the end, LBNL capabilities in combinatorial materials synthesis and high-throughput characterization that are applicable to material discovery research will be highlighted.
Kang, Lifeng; Chung, Bong Geun; Langer, Robert; Khademhosseini, Ali
2009-01-01
Microfluidic technologies’ ability to miniaturize assays and increase experimental throughput has generated significant interest in the drug discovery and development domain. These characteristics make microfluidic systems a potentially valuable tool for many drug discovery and development applications. Here, we review the recent advances of microfluidic devices for drug discovery and development and highlight their applications in different stages of the process, including target selection, lead identification, preclinical tests, clinical trials, chemical synthesis, formulation studies, and product management. PMID:18190858
Neptune: a bioinformatics tool for rapid discovery of genomic variation in bacterial populations
Marinier, Eric; Zaheer, Rahat; Berry, Chrystal; Weedmark, Kelly A.; Domaratzki, Michael; Mabon, Philip; Knox, Natalie C.; Reimer, Aleisha R.; Graham, Morag R.; Chui, Linda; Patterson-Fortin, Laura; Zhang, Jian; Pagotto, Franco; Farber, Jeff; Mahony, Jim; Seyer, Karine; Bekal, Sadjia; Tremblay, Cécile; Isaac-Renton, Judy; Prystajecky, Natalie; Chen, Jessica; Slade, Peter
2017-01-01
The ready availability of vast amounts of genomic sequence data has created the need to rethink comparative genomics algorithms using ‘big data’ approaches. Neptune is an efficient system for rapidly locating differentially abundant genomic content in bacterial populations using an exact k-mer matching strategy, while accommodating k-mer mismatches. Neptune’s loci discovery process identifies sequences that are sufficiently common to a group of target sequences and sufficiently absent from non-targets using probabilistic models. Neptune uses parallel computing to efficiently identify and extract these loci from draft genome assemblies without requiring multiple sequence alignments or other computationally expensive comparative sequence analyses. Tests on simulated and real datasets showed that Neptune rapidly identifies regions that are both sensitive and specific. We demonstrate that this system can identify trait-specific loci from different bacterial lineages. Neptune is broadly applicable for comparative bacterial analyses, yet will particularly benefit pathogenomic applications, owing to efficient and sensitive discovery of differentially abundant genomic loci. The software is available for download at: http://github.com/phac-nml/neptune. PMID:29048594
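The exact k-mer matching idea at the heart of this loci-discovery process can be caricatured in a few lines: collect k-mers shared by all target genomes and subtract k-mers seen in any non-target. This toy sketch deliberately ignores Neptune's probabilistic models, mismatch tolerance, locus extraction and parallelism; the sequences are made up:

```python
def kmers(seq, k):
    # Set of all overlapping k-mers of a sequence.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def discriminating_kmers(targets, non_targets, k=5):
    # k-mers present in every target genome and absent from all non-targets.
    shared = set.intersection(*(kmers(s, k) for s in targets))
    excluded = set.union(*(kmers(s, k) for s in non_targets))
    return shared - excluded

targets = ["ACGTACGTAAGG", "TTACGTACGTAA"]
non_targets = ["GGGGCCCCAAAA"]
hits = discriminating_kmers(targets, non_targets)
```

In a real tool these hit k-mers would seed the extraction of candidate target-specific loci from the assemblies rather than being reported directly.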
Camp, David; Newman, Stuart; Pham, Ngoc B; Quinn, Ronald J
2014-03-01
The Eskitis Institute for Drug Discovery is home to two unique resources, Nature Bank and the Queensland Compound Library (QCL), that differentiate it from many other academic institutes pursuing chemical biology or early phase drug discovery. Nature Bank is a comprehensive collection of plants and marine invertebrates that have been subjected to a process which aligns downstream extracts and fractions with lead- and drug-like physicochemical properties. Considerable expertise in screening natural product extracts/fractions was developed at Eskitis over the last two decades. Importantly, biodiscovery activities have been conducted from the beginning in accordance with the UN Convention on Biological Diversity (CBD) to ensure compliance with all international and national legislative requirements. The QCL is a compound management and logistics facility that was established from public funds to augment previous investments in high throughput and phenotypic screening in the region. A unique intellectual property (IP) model has been developed in the case of the QCL to stimulate applied, basic and translational research in the chemical and life sciences by industry, non-profit, and academic organizations.
Semantic Service Design for Collaborative Business Processes in Internetworked Enterprises
NASA Astrophysics Data System (ADS)
Bianchini, Devis; Cappiello, Cinzia; de Antonellis, Valeria; Pernici, Barbara
Modern collaborating enterprises can be seen as borderless organizations whose processes are dynamically transformed and integrated with those of their partners (Internetworked Enterprises, IE), thus enabling the design of collaborative business processes. The adoption of Semantic Web and service-oriented technologies for implementing collaboration in such distributed and heterogeneous environments promises significant benefits. IE can model their own processes independently by using the Software as a Service (SaaS) paradigm. Each enterprise maintains a catalog of available services, and these can be shared across IE and reused to build up complex collaborative processes. Moreover, each enterprise can adopt its own terminology and concepts to describe business processes and component services. This introduces requirements to manage semantic heterogeneity in process descriptions which are distributed across different enterprise systems. To enable effective service-based collaboration, IEs have to standardize their process descriptions and model them through component services using the same approach and principles. For enabling collaborative business processes across IE, services should be designed following a homogeneous approach, possibly maintaining a uniform level of granularity. In the paper we propose an ontology-based semantic modeling approach designed to enrich and reconcile the semantics of process descriptions, to facilitate process knowledge management and to enable semantic service design (by discovery, reuse and integration of process elements/constructs). The approach brings together Semantic Web technologies, techniques in process modeling, ontology building and semantic matching in order to provide a comprehensive semantic modeling framework.
Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest
NASA Technical Reports Server (NTRS)
Rohloff, Kurt
2010-01-01
The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex-system modeling and predictive analysis technologies. We introduce one such emerging modeling technology: the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex-system model discovery because it generates easily interpretable patterns, based on direct observations of sampled factor data, for a deeper understanding of societal behaviors that is tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns, coupled with a rich data environment, to forecast various types of socio-political violence in nation-states.
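The core matching idea, sequences of observations that precede an event of interest, can be sketched as counting how often a candidate pattern occurs as an ordered (not necessarily contiguous) subsequence of the window before each event. This is a toy illustration of the concept, not the program's actual methodology, and the factor names are invented:

```python
def occurs_in_order(pattern, window):
    # True if the pattern's items appear in the window in order, gaps allowed,
    # which tolerates noise and missing observations between pattern items.
    it = iter(window)
    return all(item in it for item in pattern)

def support(pattern, histories, window_len=4):
    # Fraction of pre-event windows that contain the pattern.
    windows = [h[-window_len:] for h in histories]
    return sum(occurs_in_order(pattern, w) for w in windows) / len(histories)

# Hypothetical factor observations recorded before each outbreak of unrest.
histories = [
    ["protest", "strike", "curfew", "riot_precursor"],
    ["strike", "protest", "strike", "curfew"],
    ["drought", "protest", "curfew", "strike"],
]
s = support(["protest", "curfew"], histories)
```

High-support patterns like this one are the interpretable artifacts that can then be matched against live data to forecast upcoming events.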
1994-09-30
relational versus object-oriented DBMS, knowledge discovery, data models, metadata, data filtering, clustering techniques, and synthetic data. A secondary...The first was the investigation of AI/ES applications (knowledge discovery, data mining, and clustering). Here CAST collaborated with Dr. Fred Petry...knowledge discovery system based on clustering techniques; implemented an on-line data browser to the DBMS; completed preliminary efforts to apply object
Ardal, Christine; Alstadsæter, Annette; Røttingen, John-Arne
2011-09-28
Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts and with an emphasis on drug discovery. A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Common characteristics to open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents.
Translational neuropharmacology and the appropriate and effective use of animal models
Green, AR; Gabrielsson, J; Fone, KCF
2011-01-01
This issue of the British Journal of Pharmacology is dedicated to reviews of the major animal models used in neuropharmacology to examine drugs for both neurological and psychiatric conditions. Almost all major conditions are reviewed. In general, regulatory authorities require evidence for the efficacy of novel compounds in appropriate animal models. However, the failure of many compounds in clinical trials following clear demonstration of efficacy in animal models has called into question both the value of the models and the discovery process in general. These matters are expertly reviewed in this issue and proposals for better models outlined. In this editorial, we further suggest that more attention be paid to incorporating pharmacokinetic knowledge into the studies (quantitative pharmacology). We also suggest that more attention be paid to ensuring that full methodological details are published, and recommend that journals should be more amenable to publishing negative data. Finally, we propose that new approaches must be used in drug discovery so that preclinical studies become more reflective of the clinical situation, and studies using animal models mimic the anticipated design of studies to be performed in humans, as closely as possible. LINKED ARTICLES This article is part of a themed issue on Translational Neuropharmacology. To view the other articles in this issue visit http://dx.doi.org/10.1111/bph.2011.164.issue-4 PMID:21545411
Kell, Douglas B
2013-12-01
Despite the sequencing of the human genome, the rate of innovative and successful drug discovery in the pharmaceutical industry has continued to decrease. Leaving aside regulatory matters, the fundamental and interlinked intellectual issues proposed to be largely responsible for this are: (a) the move from 'function-first' to 'target-first' methods of screening and drug discovery; (b) the belief that successful drugs should and do interact solely with single, individual targets, despite natural evolution's selection for biochemical networks that are robust to individual parameter changes; (c) an over-reliance on the rule-of-5 to constrain biophysical and chemical properties of drug libraries; (d) the general abandoning of natural products that do not obey the rule-of-5; (e) an incorrect belief that drugs diffuse passively into (and presumably out of) cells across the bilayer portions of membranes, according to their lipophilicity; (f) a widespread failure to recognize the overwhelmingly important role of proteinaceous transporters, as well as their expression profiles, in determining drug distribution in and between different tissues and individual patients; and (g) the general failure to use engineering principles to model biology in parallel with performing 'wet' experiments, such that 'what if?' experiments can be performed in silico to assess the likely success of any strategy. These facts/ideas are illustrated with a reasonably extensive literature review.
Success in turning round drug discovery consequently requires: (a) decent systems biology models of human biochemical networks; (b) the use of these (iteratively with experiments) to model how drugs need to interact with multiple targets to have substantive effects on the phenotype; (c) the adoption of polypharmacology and/or cocktails of drugs as a desirable goal in itself; (d) the incorporation of drug transporters into systems biology models, en route to full and multiscale systems biology models that incorporate drug absorption, distribution, metabolism and excretion; (e) a return to 'function-first' or phenotypic screening; and (f) novel methods for inferring modes of action by measuring the properties on system variables at all levels of the 'omes. Such a strategy offers the opportunity of achieving a state where we can hope to predict biological processes and the effect of pharmaceutical agents upon them. Consequently, this should both lower attrition rates and raise the rates of discovery of effective drugs substantially. © 2013 The Author Journal compilation © 2013 FEBS.
Creating a culture of patient-focused care through a learner-centered philosophy.
Linscott, J; Spee, R; Flint, F; Fisher, A
1999-01-01
This paper will discuss the teaching-learning process used in the Patient-Focused Care Course at a major teaching hospital in Canada that is transforming nursing practice from a provider-driven to a patient-focused approach. The experiential and reflective nature of the course offers opportunities for nurses to link theory with practice, to think critically and reflectively about their own values and beliefs, and to translate that meaning into practice. The learning process reflects principles of adult learning based on Knowles' andragogical model, which differs from the traditional pedagogical model of teaching. The essence of andragogy is a constant unfolding process of discovery based on dialogue. Utilization of adult learning principles that support critical thinking and foster transformational change presents an alternative to traditional ways of teaching and learning the art and science of nursing practice.
Discovery and problem solving: Triangulation as a weak heuristic
NASA Technical Reports Server (NTRS)
Rochowiak, Daniel
1987-01-01
Recently the artificial intelligence community has turned its attention to the process of discovery and found that the history of science is a fertile source for what Darden has called compiled hindsight. Such hindsight generates weak heuristics for discovery that do not guarantee that discoveries will be made but do have proven worth in leading to discoveries. Triangulation is one such heuristic that is grounded in historical hindsight. This heuristic is explored within the general framework of the BACON, GLAUBER, STAHL, DALTON, and SUTTON programs. In triangulation different bases of information are compared in an effort to identify gaps between the bases. Thus, assuming that the bases of information are relevantly related, the gaps that are identified should be good locations for discovery and robust analysis.
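The triangulation heuristic described above can be sketched as a gap-finding operation over two bases of information: entries known to one base but absent from the other are flagged as candidate locations for discovery. This is a toy illustration only; the element names and the set-difference formulation are assumptions, not the representation used in the BACON-family programs.

```python
# Toy sketch of triangulation as a gap-finding heuristic: compare two
# relevantly related "bases of information" and flag entries present in
# one but absent from the other as candidate gaps for discovery.
# (The element lists below are illustrative, not historical data.)

def find_gaps(base_a, base_b):
    """Return entries known to exactly one of the two bases."""
    only_a = set(base_a) - set(base_b)
    only_b = set(base_b) - set(base_a)
    return only_a | only_b

spectroscopic = {"hydrogen", "helium", "sodium"}
chemical = {"hydrogen", "sodium", "chlorine"}

gaps = find_gaps(spectroscopic, chemical)
print(sorted(gaps))  # ['chlorine', 'helium']
```

The symmetric difference here stands in for the much richer comparison a real discovery program would perform, but it captures the core idea: a gap between bases marks a good place to look.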
A New Student Performance Analysing System Using Knowledge Discovery in Higher Educational Databases
ERIC Educational Resources Information Center
Guruler, Huseyin; Istanbullu, Ayhan; Karahasan, Mehmet
2010-01-01
Knowledge discovery is a wide-ranging process, including data mining, which is used to find meaningful and useful patterns in large amounts of data. In order to explore the factors having an impact on the success of university students, knowledge discovery software, called MUSKUP, has been developed and tested on student data. In this system a…
Close up view of the Orbiter Discovery in the Orbiter ...
Close up view of the Orbiter Discovery in the Orbiter Processing Facility at Kennedy Space Center. The view is a detail of the aft, starboard landing gear and a general view of the Thermal Protection System tiles around the landing-gear housing. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
ERIC Educational Resources Information Center
Zhu, Lixin
2011-01-01
For the purpose of teaching collegians the fundamentals of biological research, literature explaining the discovery of the gastric proton pump was presented in a 50-min lecture. The presentation included detailed information pertaining to the discovery process. This study was chosen because it demonstrates the importance of having a broad range of…
Why Quantify Uncertainty in Ecosystem Studies: Obligation versus Discovery Tool?
NASA Astrophysics Data System (ADS)
Harmon, M. E.
2016-12-01
There are multiple motivations for quantifying uncertainty in ecosystem studies. One is as an obligation; the other is as a tool useful in moving ecosystem science toward discovery. While reporting uncertainty should become a routine expectation, a more convincing motivation involves discovery. By clarifying what is known and to what degree it is known, uncertainty analyses can point the way toward improvements in measurements, sampling designs, and models. While some of these improvements (e.g., better sampling designs) may lead to incremental gains, those involving models (particularly model selection) may require large gains in knowledge. To be fully harnessed as a discovery tool, attitudes toward uncertainty may have to change: rather than viewing uncertainty as a negative assessment of what was done, it should be viewed as positive, helpful assessment of what remains to be done.
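As a minimal sketch of what "quantifying uncertainty" can look like in practice, the snippet below computes a percentile-bootstrap confidence interval for a mean. The biomass values are made-up illustrative numbers, not data from any ecosystem study.

```python
# Minimal sketch of quantifying uncertainty in an ecosystem measurement
# via bootstrap resampling. The plot-level biomass values are
# hypothetical, used only to demonstrate the technique.
import random

random.seed(42)
biomass = [12.1, 9.8, 14.3, 11.0, 13.5, 10.2, 12.8, 9.5]  # made-up values

def bootstrap_ci(data, n_boot=10000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(biomass)
print(f"mean = {sum(biomass) / len(biomass):.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

A wide interval like this one immediately signals where more sampling effort would pay off, which is exactly the "discovery tool" role the abstract argues for.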
An Adaptive Jitter Mechanism for Reactive Route Discovery in Sensor Networks
Cordero, Juan Antonio; Yi, Jiazi; Clausen, Thomas
2014-01-01
This paper analyses the impact of jitter when applied to route discovery in reactive (on-demand) routing protocols. In multi-hop non-synchronized wireless networks, jitter—a small, random variation in the timing of message emission—is commonly employed, as a means to avoid collisions of simultaneous transmissions by adjacent routers over the same channel. In a reactive routing protocol for sensor and ad hoc networks, jitter is recommended during the route discovery process, specifically, during the network-wide flooding of route request messages, in order to avoid collisions. Commonly, a simple uniform jitter is recommended. Alas, this is not without drawbacks: when applying uniform jitter to the route discovery process, an effect called delay inversion is observed. This paper, first, studies and quantifies this delay inversion effect. Second, this paper proposes an adaptive jitter mechanism, designed to alleviate the delay inversion effect and thereby to reduce the route discovery overhead and (ultimately) allow the routing protocol to find more optimal paths, as compared to uniform jitter. This paper presents both analytical and simulation studies, showing that the proposed adaptive jitter can effectively decrease the cost of route discovery and increase the path quality. PMID:25111238
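The contrast between uniform and adaptive jitter can be sketched as follows. This is an illustrative simplification, not the paper's exact mechanism: the adaptive variant here biases the forwarding delay by the cost of the path a route request arrived on, so requests over cheaper paths tend to be re-emitted earlier, countering the delay-inversion effect. The `MAXJITTER` bound is an assumed value.

```python
# Hedged sketch of jitter for route-request forwarding. Uniform jitter
# draws a delay from [0, MAXJITTER); delay inversion occurs when a
# request over a worse path happens to be forwarded before one over a
# better path. The adaptive variant (illustrative, not the paper's
# exact formula) makes the delay grow with link cost, while keeping a
# random component to avoid synchronized collisions.
import random

MAXJITTER = 0.010  # seconds; an assumed small bound

def uniform_jitter():
    """Plain uniform jitter in [0, MAXJITTER)."""
    return random.uniform(0, MAXJITTER)

def adaptive_jitter(link_cost, max_cost):
    """Delay biased upward for requests arriving over costlier paths."""
    base = (link_cost / max_cost) * MAXJITTER
    return base + random.uniform(0, MAXJITTER - base)

random.seed(1)
print(f"uniform jitter:            {uniform_jitter() * 1000:.2f} ms")
print(f"adaptive (cheap link):     {adaptive_jitter(1, 10) * 1000:.2f} ms")
print(f"adaptive (expensive link): {adaptive_jitter(9, 10) * 1000:.2f} ms")
```

With this bias, a request arriving over a cost-9 link always waits at least 9 ms, while one over a cost-1 link may go out after 1 ms, so better paths win the forwarding race more often.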
Zebrafish models in neuropsychopharmacology and CNS drug discovery.
Khan, Kanza M; Collier, Adam D; Meshalkina, Darya A; Kysil, Elana V; Khatsko, Sergey L; Kolesnikova, Tatyana; Morzherin, Yury Yu; Warnick, Jason E; Kalueff, Allan V; Echevarria, David J
2017-07-01
Despite the high prevalence of neuropsychiatric disorders, their aetiology and molecular mechanisms remain poorly understood. The zebrafish (Danio rerio) is increasingly utilized as a powerful animal model in neuropharmacology research and in vivo drug screening. Collectively, this makes zebrafish a useful tool for drug discovery and the identification of disordered molecular pathways. Here, we discuss zebrafish models of selected human neuropsychiatric disorders and drug-induced phenotypes. As well as covering a broad range of brain disorders (from anxiety and psychoses to neurodegeneration), we also summarize recent developments in zebrafish genetics and small molecule screening, which markedly enhance the disease modelling and the discovery of novel drug targets. © 2017 The British Pharmacological Society.
General View looking forward along the centerline of the Orbiter ...
General View looking forward along the centerline of the Orbiter Discovery looking into the payload bay with a payload in the process of being secured into place. This photograph was taken in the Orbiter Processing Facility at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Discovering Mendeleev's Model.
ERIC Educational Resources Information Center
Sterling, Donna
1996-01-01
Presents an activity that introduces the historical developments in science that led to the discovery of the periodic table and lets students experience scientific discovery firsthand. Enables students to learn about patterns among the elements and experience how scientists analyze data to discover patterns and build models. (JRH)
mHealth Visual Discovery Dashboard.
Fang, Dezhi; Hohman, Fred; Polack, Peter; Sarker, Hillol; Kahng, Minsuk; Sharmin, Moushumi; al'Absi, Mustafa; Chau, Duen Horng
2017-09-01
We present Discovery Dashboard, a visual analytics system for exploring large volumes of time series data from mobile medical field studies. Discovery Dashboard offers interactive exploration tools and a data mining motif discovery algorithm to help researchers formulate hypotheses, discover trends and patterns, and ultimately gain a deeper understanding of their data. Discovery Dashboard emphasizes user freedom and flexibility during the data exploration process and enables researchers to do things that were previously challenging or impossible, in the web browser and in real time. We demonstrate our system visualizing data from a mobile sensor study conducted at the University of Minnesota that included 52 participants who were trying to quit smoking.
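The "motif discovery" step that the dashboard exposes can be illustrated with a naive baseline: find the pair of non-overlapping subsequences of a fixed length with the smallest Euclidean distance. Discovery Dashboard's actual algorithm is not described in the abstract; this sketch only demonstrates what a motif is, and the time series below is synthetic.

```python
# Naive sketch of time-series motif discovery: find the pair of
# non-overlapping subsequences of length m with the smallest Euclidean
# distance. Real systems use far faster algorithms; this brute-force
# version is O(n^2) and only illustrates the idea.
import math

def find_motif_pair(series, m):
    """Return (distance, i, j) for the closest non-overlapping pair."""
    best = (math.inf, None, None)
    for i in range(len(series) - m + 1):
        for j in range(i + m, len(series) - m + 1):  # j >= i + m: no overlap
            d = math.dist(series[i:i + m], series[j:j + m])
            if d < best[0]:
                best = (d, i, j)
    return best

ts = [0, 1, 2, 1, 0, 5, 0, 1, 2, 1, 0]  # synthetic signal with a repeated shape
dist, i, j = find_motif_pair(ts, 5)
print(i, j, round(dist, 3))  # 0 6 0.0
```

Here the bump `[0, 1, 2, 1, 0]` occurs at positions 0 and 6, so the motif pair is found with distance zero.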
Bouwknecht, J Adriaan
2015-04-15
The review describes a personal journey through 25 years of animal research with a focus on the contribution of rodent models for anxiety and depression to the development of new medicines in a drug discovery environment. Several classic acute models for mood disorders are briefly described, as well as chronic stress and disease-induction models. The paper highlights a variety of factors that influence the quality and consistency of behavioral data in a laboratory setting. The importance of meta-analysis techniques for study validation (tolerance interval) and assay sensitivity (Monte Carlo modeling) is demonstrated by examples that use historic data. It is essential for the successful discovery of new potential drugs to maintain a high level of control in animal research and to bridge knowledge across in silico modeling and in vitro and in vivo assays. Today, drug discovery is a highly dynamic environment in search of new types of treatments and new animal models, which should be guided by enhanced two-way translation between bench and bed. Although productivity has been disappointing in the search for new and better medicines in psychiatry over the past decades, there has been and will always be an important role for in vivo models between preclinical discovery and clinical development. The right balance between good science and proper judgment versus a decent level of innovation, assay development and two-way translation will open the doors to a very bright future. Copyright © 2014 Elsevier B.V. All rights reserved.
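The Monte Carlo assessment of assay sensitivity mentioned above can be sketched as follows: simulate many virtual two-group studies at a fixed effect size and historic variability, and count how often the effect is detected. The detection rule here (a z-statistic with known standard deviation against a fixed critical value) is a deliberate simplification of a real statistical test, and all numbers are assumptions.

```python
# Hedged sketch of Monte Carlo modeling of assay sensitivity: repeat
# virtual control-vs-treated studies and estimate the detection rate
# (power). The known-sd z-test used here is a simplification.
import random
import statistics

def simulated_power(effect, sd, n_per_group, n_sim=2000, crit=1.96):
    """Fraction of simulated studies in which the effect is detected."""
    random.seed(0)  # deterministic for reproducibility
    hits = 0
    for _ in range(n_sim):
        ctrl = [random.gauss(0.0, sd) for _ in range(n_per_group)]
        trt = [random.gauss(effect, sd) for _ in range(n_per_group)]
        se = sd * (2 / n_per_group) ** 0.5  # standard error, sd assumed known
        z = (statistics.mean(trt) - statistics.mean(ctrl)) / se
        hits += z > crit
    return hits / n_sim

# Power rises with group size for a fixed effect (d = 1 SD):
for n in (4, 8, 16):
    print(n, simulated_power(effect=1.0, sd=1.0, n_per_group=n))
```

Run against historic variance estimates, this kind of simulation tells a lab how many animals per group a model needs before a given effect size is reliably detectable.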
High-Content Monitoring of Drug Effects in a 3D Spheroid Model
Mittler, Frédérique; Obeïd, Patricia; Rulina, Anastasia V.; Haguet, Vincent; Gidrol, Xavier; Balakirev, Maxim Y.
2017-01-01
A recent decline in the discovery of novel medications challenges the widespread use of 2D monolayer cell assays in the drug discovery process. As a result, the need for more appropriate cellular models of human physiology and disease has renewed the interest in spheroid 3D culture as a pertinent model for drug screening. However, despite technological progress that has significantly simplified spheroid production and analysis, the seeming complexity of the 3D approach has delayed its adoption in many laboratories. The present report demonstrates that the use of a spheroid model may be straightforward and can provide information that is not directly available with a standard 2D approach. We describe a cost-efficient method that allows for the production of an array of uniform spheroids, their staining with vital dyes, real-time monitoring of drug effects, and an ATP-endpoint assay, all in the same 96-well U-bottom plate. To demonstrate the method performance, we analyzed the effect of the preclinical anticancer drug MLN4924 on spheroids formed by VCaP and LNCaP prostate cancer cells. The drug has different outcomes in these cell lines, varying from cell cycle arrest and protective dormancy to senescence and apoptosis. We demonstrate that by using high-content analysis of spheroid arrays, the effect of the drug can be described as a series of EC50 values that clearly dissect the cytostatic and cytotoxic drug actions. The method was further evaluated using four standard cancer chemotherapeutics with different mechanisms of action, and the effect of each drug is described as a unique multi-EC50 diagram. Once fully validated in a wider range of conditions, this method could be particularly valuable for phenotype-based drug discovery. PMID:29322028
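The multi-EC50 description above rests on fitting dose-response curves. As a minimal, self-contained sketch, the snippet below extracts an EC50 from synthetic viability data by least-squares grid search over a Hill equation with fixed slope and asymptotes; the study's actual curve-fitting procedure is not specified in the abstract, and the doses and responses here are invented for illustration.

```python
# Illustrative sketch of extracting an EC50 from dose-response data by
# grid search over a Hill (four-parameter logistic) model with the
# slope and asymptotes held fixed. Data values are synthetic.

def hill(dose, bottom, top, ec50, h):
    """Decreasing Hill curve: top at low dose, bottom at high dose."""
    return bottom + (top - bottom) / (1 + (dose / ec50) ** h)

doses = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]   # hypothetical µM doses
resp = [0.98, 0.95, 0.82, 0.55, 0.25, 0.08, 0.03]  # viability fraction

def fit_ec50(doses, resp, bottom=0.0, top=1.0, h=1.0):
    """Grid-search the EC50 minimizing squared error."""
    candidates = [0.01 * 1.05 ** k for k in range(200)]  # ~0.01 to ~170
    def sse(ec50):
        return sum((hill(d, bottom, top, ec50, h) - r) ** 2
                   for d, r in zip(doses, resp))
    return min(candidates, key=sse)

print(f"EC50 = {fit_ec50(doses, resp):.3f}")
```

Fitting separate curves like this to several phenotypic readouts (viability, cell-cycle markers, ATP content) is what yields the multi-EC50 diagram that separates cytostatic from cytotoxic drug actions.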