Multi-level damage identification with response reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Chao-Dong; Xu, You-Lin
2017-10-01
Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving this inverse problem for complex civil structures is very challenging because the number of potential damage parameters is often very large. Aside from the enormous computational effort needed for iterative updating, the ill-conditioning and lack of global identifiability of the inverse problem can prevent model-updating-based damage identification from being realized for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures, and each substructure is further condensed into a macro element using the component mode synthesis (CMS) technique. Damage identification is performed at two levels: the first, at the macro element level, locates the potentially damaged region; the second, over the suspicious substructures, further locates the damage and quantifies its severity. At each level, the damage search space over which model updating is performed is considerably narrowed, which both reduces the computational cost and improves damage identifiability. In addition, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for accurate damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction considerably improves the accuracy of damage localization and quantification.
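The Kalman filter-based response reconstruction mentioned in this abstract can be illustrated with a minimal scalar sketch (not the paper's multi-degree-of-freedom formulation): noisy measurements of a structural response are fused, step by step, into a smoothed state estimate. All parameter values here are illustrative assumptions.

```python
def kalman_filter(zs, a=1.0, h=1.0, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Scalar Kalman filter reconstructing a state x from noisy measurements zs.
    Model: x_k = a*x_{k-1} + w (process noise variance q),
           z_k = h*x_k + v     (measurement noise variance r)."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # predict step
        x = a * x
        p = a * p * a + q
        # update step
        k = p * h / (h * p * h + r)      # Kalman gain
        x = x + k * (z - h * x)
        p = (1 - k * h) * p
        estimates.append(x)
    return estimates

# Reconstruct a constant response of 1.0 from measurements corrupted
# by alternating +/-0.1 noise.
zs = [1.1 if i % 2 == 0 else 0.9 for i in range(200)]
est = kalman_filter(zs)
```

After enough steps the estimate settles close to the true value of 1.0, illustrating why the reconstructed response supports more exact damage quantification than the raw measurements.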
Chen, Lih-Shyang; Hsu, Ta-Wen; Chang, Shu-Han; Lin, Chih-Wen; Chen, Yu-Ruei; Hsieh, Chin-Chiang; Han, Shu-Chen; Chang, Ku-Yaw; Hou, Chun-Ju
2017-01-01
Objective: In traditional surface rendering (SR) computed tomographic endoscopy, only the shape of an endoluminal lesion is depicted, without gray-level information, unless the volume rendering technique is used. However, volume rendering is relatively slow and complex in terms of computation time and parameter setting. We use computed tomographic colonography (CTC) images as examples and report a new visualization technique, three-dimensional gray-level mapping (GM), to better identify and differentiate endoluminal lesions. Methods: Thirty-three endoluminal cases from 30 patients were evaluated in this clinical study. These cases were segmented using a gray-level threshold. The marching cubes algorithm was used to detect isosurfaces in volumetric data sets, and GM was applied using the surface gray level of the CTC data. Radiologists conducted the clinical evaluation of the SR and GM images, and the Wilcoxon signed-rank test was used for data analysis. Results: The clinical evaluation confirms that GM is significantly superior to SR in terms of gray-level pattern and spatial shape presentation of endoluminal cases (p < 0.01) and significantly improves the confidence of identification and clinical classification of endoluminal lesions (p < 0.01). The specificity and diagnostic accuracy of GM are significantly better than those of SR in the diagnostic performance evaluation (p < 0.01). Conclusion: GM can reduce confusion in three-dimensional CTC and correlates CTC well with sectional images by location as well as gray-level value. Hence, GM improves identification and differentiation of endoluminal lesions and facilitates the diagnostic process. Advances in knowledge: GM significantly improves the traditional SR method by providing reliable gray-level information for surface points and is helpful in identifying and differentiating endoluminal lesions according to their shape and density. PMID:27925483
ERIC Educational Resources Information Center
Crowe, Jacquelyn
This study investigated the computer and word processing operator skills necessary for employment in today's high-technology office. The study comprised seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…
Towards large-scale FAME-based bacterial species identification using machine learning techniques.
Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul
2009-05-01
In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas, retaining only those profiles resulting from standard growth conditions. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models were built for genus and species identification. Three techniques were considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification was achieved at the genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models gave good species identification results for the three genera: for Bacillus, Paenibacillus and Pseudomonas, random forests yielded sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forest models outperform those of the other machine learning techniques, and our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at the species level, speed and ease of taxonomic synchronization are major advantages of this computational species identification strategy.
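The sensitivity values reported above are per-class recall averaged over species. A minimal sketch of that metric (the labels here are invented examples, not data from the study):

```python
from collections import defaultdict

def macro_sensitivity(y_true, y_pred):
    """Macro-averaged sensitivity (recall): for each class, TP / (TP + FN),
    then the unweighted mean over classes."""
    tp = defaultdict(int)   # correctly identified members of each class
    fn = defaultdict(int)   # members of each class assigned elsewhere
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fn[t] += 1
    classes = set(y_true)
    return sum(tp[c] / (tp[c] + fn[c]) for c in classes) / len(classes)

# Hypothetical species labels: class "A" fully recovered, "B" half recovered.
score = macro_sensitivity(["A", "A", "B", "B"], ["A", "A", "B", "A"])
```

With recall 1.0 for "A" and 0.5 for "B", the macro average is 0.75; the paper's 0.847/0.901/0.708 figures are per-genus values of this kind of statistic.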
Level-set techniques for facies identification in reservoir modeling
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.; McLaughlin, Dennis
2011-03-01
In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical, ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. To address the shape optimization problem, we present a novel application of the iterative level-set framework developed by Burger for inverse obstacle problems (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82). The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is used to define a velocity in the level-set equation; a proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires solving the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme; we solve this system efficiently by means of the representer method. Synthetic experiments show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
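The core level-set mechanism described here, evolving an implicit function whose zero level set is the facies boundary under a velocity field, can be sketched in one dimension (a toy illustration, not the paper's PDE-constrained scheme; the constant-velocity setup is an assumption):

```python
def evolve_level_set(phi, v, dt, steps):
    """Advance phi_t + v * |phi_x| = 0 on a 1-D grid with unit spacing,
    using simple finite differences. The zero level set of phi marks the
    interface (here, a facies boundary) moving with normal speed v."""
    phi = list(phi)
    n = len(phi)
    for _ in range(steps):
        grad = [abs(phi[min(i + 1, n - 1)] - phi[max(i - 1, 0)])
                / (2.0 if 0 < i < n - 1 else 1.0)
                for i in range(n)]
        phi = [phi[i] - dt * v * grad[i] for i in range(n)]
    return phi

def zero_crossing(phi):
    """First index where phi changes sign, i.e. the interface location."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0 <= phi[i + 1]:
            return i
    return None

# Signed-distance initialization: interface near index 5; speed 1.0.
phi0 = [i - 5.0 for i in range(20)]
phi1 = evolve_level_set(phi0, v=1.0, dt=0.5, steps=4)
```

After 4 steps of size 0.5 at unit speed, the interface has moved two grid cells; in the paper, the velocity is instead derived from the shape derivative of the data misfit, so each step decreases the cost functional.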
Nesvizhskii, Alexey I.
2010-01-01
This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide-to-spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. The review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
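One widely used global error rate procedure of the kind surveyed here is target-decoy FDR estimation: matches are ranked by score, and at each score threshold the false discovery rate is estimated from the number of decoy hits accepted. A minimal sketch (the scores below are invented):

```python
def decoy_fdr(matches):
    """Target-decoy FDR estimate. `matches` is a list of (score, is_decoy)
    pairs; returns (score, estimated_fdr) at each threshold, where
    fdr = decoys accepted / targets accepted when scanning from the top."""
    ranked = sorted(matches, key=lambda m: -m[0])
    out, targets, decoys = [], 0, 0
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        out.append((score, decoys / max(targets, 1)))
    return out

# Hypothetical peptide-spectrum match scores; True marks a decoy hit.
curve = decoy_fdr([(10, False), (9, False), (8, True), (7, False), (6, True)])
```

Accepting only the top two matches gives an estimated FDR of 0; including the first decoy at score 8 pushes the estimate to 0.5, showing how the threshold trades identifications against error rate.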
Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang
2017-07-01
Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 proposes candidate identification results; users can then make a manual selection by comparing the unidentified image with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies, and it brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
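The content-based image retrieval step described here reduces, at its core, to ranking reference images by distance in feature space and returning the nearest candidates for manual verification. A toy sketch (the 2-D vectors stand in for Gabor feature vectors; species names and values are hypothetical):

```python
import math

def top_k_matches(query, library, k=3):
    """Rank reference images by Euclidean distance between feature vectors
    and return the k closest (label, distance) pairs as candidates for
    manual verification."""
    dists = [(label, math.dist(query, feats)) for label, feats in library.items()]
    return sorted(dists, key=lambda item: item[1])[:k]

# Hypothetical feature library: one reference vector per species.
library = {"species_A": [0.0, 0.0], "species_B": [1.0, 0.0], "species_C": [5.0, 5.0]}
candidates = top_k_matches([0.9, 0.1], library, k=2)
```

The query lands nearest species_B, with species_A as the runner-up; in AFIS1.0 the user would then compare the unidentified image against images of these candidate species.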
Identification of quasi-steady compressor characteristics from transient data
NASA Technical Reports Server (NTRS)
Nunes, K. B.; Rock, S. M.
1984-01-01
The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed. First was a lumped compressor rig model; second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.
Sehlen, Susanne; Ott, Martin; Marten-Mittag, Birgitt; Haimerl, Wolfgang; Dinkel, Andreas; Duehmke, Eckhart; Klein, Christian; Schaefer, Christof; Herschbach, Peter
2012-07-01
This study investigated feasibility and acceptance of computer-based assessment for the identification of psychosocial distress in routine radiotherapy care. 155 cancer patients were assessed using QSC-R10, PO-Bado-SF and Mach-9. The congruence between computerized tablet PC and conventional paper assessment was analysed in 50 patients. The agreement between the 2 modes was high (ICC 0.869-0.980). Acceptance of computer-based assessment was very high (>95%). Sex, age, education, distress and Karnofsky performance status (KPS) did not influence acceptance. Computerized assessment was rated more difficult by older patients (p = 0.039) and patients with low KPS (p = 0.020). 75.5% of the respondents supported referral for psycho-social intervention for distressed patients. The prevalence of distress was 27.1% (QSC-R10). Computer-based assessment allows easy identification of distressed patients. Level of staff involvement is low, and the results are quickly available for care providers. © Georg Thieme Verlag KG Stuttgart · New York.
The Use of Computer-Assisted Identification of ARIMA Time-Series.
ERIC Educational Resources Information Center
Brown, Roger L.
This study was conducted to determine the effects of using various levels of tutorial statistical software for the tentative identification of nonseasonal ARIMA models, a statistical technique proposed by Box and Jenkins for the interpretation of time-series data. The Box-Jenkins approach is an iterative process encompassing several stages of…
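The tentative identification stage of the Box-Jenkins approach mentioned above typically begins by inspecting the sample autocorrelation function (ACF) of the series. A minimal sketch of that computation (the alternating test series is an invented example):

```python
def acf(x, max_lag):
    """Sample autocorrelation function r_0..r_max_lag. In Box-Jenkins
    identification, a sharp ACF cut-off after lag q suggests an MA(q)
    component, while slow decay suggests an AR term or differencing."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n   # sample variance
    return [
        sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / (n * c0)
        for lag in range(max_lag + 1)
    ]

# A strictly alternating series has strong negative lag-1 autocorrelation.
r = acf([1.0, -1.0] * 50, max_lag=3)
```

Here r[0] is 1 by construction and r[1] is close to -1, the signature a student using the tutorial software would be taught to read off the correlogram.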
AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data
NASA Astrophysics Data System (ADS)
Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin
2018-01-01
In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
NASA Technical Reports Server (NTRS)
Eberlein, A. J.; Lahm, T. G.
1976-01-01
The degree to which flight-critical failures in a strapdown laser gyro tetrad sensor assembly can be isolated in short-haul aircraft after a failure occurrence has been detected by the skewed-sensor failure-detection voting logic is investigated, along with the degree to which a failure in the tetrad computer can be detected and isolated at the computer level, assuming a dual-redundant computer configuration. The tetrad system was mechanized with two two-axis inertial navigation channels (INCs), each containing two gyro/accelerometer axes, a computer, control circuitry, and input/output circuitry. Gyro/accelerometer data are crossfed between the two INCs to enable each computer to perform the navigation task independently. Computer calculations are synchronized between the computers so that calculated quantities are identical and may be compared. Fail-safe performance (identification of the first failure) is accomplished nearly 100 percent of the time, while fail-operational performance (identification and isolation of the first failure) is achieved 93 to 96 percent of the time.
Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder
2017-09-04
Identification of taxa at a specific level is time-consuming and reliant upon expert ecologists; hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focused on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting identifying features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared the different methods used at each step of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data, and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques used in building such systems for biodiversity studies.
Crop identification and area estimation over large geographic areas using LANDSAT MSS data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator)
1977-01-01
The author has identified the following significant results. LANDSAT MSS data was adequate to accurately identify wheat in Kansas; corn and soybean estimates in Indiana were less accurate. Computer-aided analysis techniques were effectively used to extract crop identification information from LANDSAT data. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels. Training statistics were successfully extended from one county to other counties having similar crops and soils if the training areas sampled the total variation of the area to be classified.
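The county-level area estimation described above amounts to scaling the classified-pixel proportion up to the known area of the sampled region. A minimal sketch (the pixel labels and county area are invented numbers, not LANDSAT results):

```python
def crop_area_estimate(pixel_labels, region_area_km2, crop="wheat"):
    """Estimate crop area from classified pixels: the fraction of pixels
    assigned to the crop, scaled by the total area of the sampled region."""
    n_crop = sum(1 for label in pixel_labels if label == crop)
    return n_crop / len(pixel_labels) * region_area_km2

# Hypothetical county: 30% of classified pixels are wheat, 1000 km2 total.
labels = ["wheat"] * 30 + ["other"] * 70
area = crop_area_estimate(labels, 1000.0)
```

Systematic classification of entire counties makes this proportion estimate precise, and district and state totals follow by summing county estimates.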
Preparing Students for Computer Aided Drafting (CAD). A Conceptual Approach.
ERIC Educational Resources Information Center
Putnam, A. R.; Duelm, Brian
This presentation outlines guidelines for developing and implementing an introductory course in computer-aided drafting (CAD) that is geared toward secondary-level students. The first section of the paper, which deals with content identification and selection, includes lists of mechanical drawing and CAD competencies and a list of rationales for…
Wheeze sound analysis using computer-based techniques: a systematic review.
Ghulam Nabi, Fizza; Sundaraj, Kenneth; Chee Kiang, Lam; Palaniappan, Rajkumar; Sundaraj, Sebastian
2017-10-31
Wheezes are high-pitched, continuous respiratory sounds produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction, and disease or pathology classification. While this is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer, and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that (1) computerized wheeze analysis can be used to identify disease severity level or pathology, (2) further research is required to achieve acceptable rates of identification of the degree of airway obstruction during normal breathing, and (3) analysis using combinations of features and subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathologies that stem from airway obstruction.
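A basic spectral-analysis building block for wheeze detection is locating the dominant frequency of a signal segment and comparing it with a pitch threshold. The sketch below uses a naive DFT and a hypothetical 400 Hz threshold (the threshold, sampling rate, and test tones are illustrative assumptions, not values from the reviewed studies):

```python
import math

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest-magnitude DFT bin, computed by a naive
    DFT over positive-frequency bins only."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

def is_wheeze_like(signal, fs, threshold_hz=400.0):
    """Flag a segment whose dominant frequency exceeds a pitch threshold
    (400 Hz here is an assumed, illustrative cut-off)."""
    return dominant_frequency(signal, fs) >= threshold_hz

fs = 2000  # assumed sampling rate, Hz
tone_500 = [math.sin(2 * math.pi * 500 * t / fs) for t in range(100)]
tone_100 = [math.sin(2 * math.pi * 100 * t / fs) for t in range(100)]
```

A 500 Hz tone is flagged while a 100 Hz tone is not; real systems refine this idea with duration criteria, time-frequency features, and classifiers over respiratory-cycle subgroups.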
A comparative approach to closed-loop computation.
Roth, E; Sponberg, S; Cowan, N J
2014-04-01
Neural computation is inescapably closed-loop: the nervous system processes sensory signals to shape motor output, and motor output consequently shapes sensory input. Technological advances have enabled neuroscientists to close, open, and alter feedback loops in a wide range of experimental preparations. The experimental capability of manipulating the topology-that is, how information can flow between subsystems-provides new opportunities to understand the mechanisms and computations underlying behavior. These experiments encompass a spectrum of approaches from fully open-loop, restrained preparations to the fully closed-loop character of free behavior. Control theory and system identification provide a clear computational framework for relating these experimental approaches. We describe recent progress and new directions for translating experiments at one level in this spectrum to predictions at another level. Operating across this spectrum can reveal new understanding of how low-level neural mechanisms relate to high-level function during closed-loop behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.
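The system identification framework invoked here can be illustrated with its simplest instance: fitting a first-order input-output model to data by least squares. This ARX-style sketch is generic, not a method from the paper, and the parameter values are invented:

```python
def fit_first_order(u, y):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1] from input u and
    output y, via the 2x2 normal equations for theta = (a, b)."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(1, len(y)):
        p, q = y[k - 1], u[k - 1]
        s11 += p * p; s12 += p * q; s22 += q * q
        r1 += p * y[k]; r2 += q * y[k]
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

# Simulate a known system (a=0.8, b=0.5) driven by an alternating input,
# then recover its parameters from the input/output record.
u = [1.0 if k % 2 == 0 else -1.0 for k in range(50)]
y = [0.0]
for k in range(1, 50):
    y.append(0.8 * y[k - 1] + 0.5 * u[k - 1])
a_hat, b_hat = fit_first_order(u, y)
```

In closed-loop experiments the same idea applies, but the experimenter must account for the feedback path when interpreting the fitted parameters, which is precisely the topology question the review emphasizes.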
Ravindran, Prabu; Costa, Adriana; Soares, Richard; Wiedenhoeft, Alex C
2018-01-01
The current state of the art for field wood identification to combat illegal logging relies on experienced practitioners using hand lenses, specialized identification keys, atlases of woods, and field manuals. Accumulating this expertise is time-consuming, and access to training is relatively rare compared to the international demand for field wood identification. A reliable, consistent and cost-effective field screening method is necessary for effective global-scale enforcement of international treaties such as the Convention on International Trade in Endangered Species (CITES) or national laws (e.g. the US Lacey Act) governing timber trade and imports. We present highly effective computer vision classification models, based on deep convolutional neural networks trained via transfer learning, to identify the woods of 10 neotropical species in the family Meliaceae, including the CITES-listed Swietenia macrophylla, Swietenia mahagoni, Cedrela fissilis, and Cedrela odorata. We build and evaluate models to classify the 10 woods at the species and genus levels, with image-level model accuracy ranging from 87.4 to 97.5% and the strongest performance by the genus-level model. Misclassified images fall into classes consistent with traditional wood anatomical results, and our species-level accuracy greatly exceeds the resolution of traditional wood identification. The end-to-end trained image classifiers discriminate the woods based on digital images of the transverse surface of solid wood blocks, surfaces and images that can be prepared and captured in the field. Hence this work represents a strong proof of concept for using computer vision and convolutional neural networks to develop practical models for field screening of timber and wood products to combat illegal logging.
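One reason a genus-level model can outperform a species-level one is that species-level outputs can be pooled into coarser genus scores, and coarser classes are easier to separate. A toy sketch of that rollup (the probabilities are invented, not the paper's model outputs):

```python
def genus_probabilities(species_probs, genus_of):
    """Aggregate species-level classifier probabilities to the genus level
    by summing the probabilities of all species within each genus."""
    out = {}
    for species, p in species_probs.items():
        genus = genus_of[species]
        out[genus] = out.get(genus, 0.0) + p
    return out

# Hypothetical softmax output over three of the study's species.
species_probs = {"S. macrophylla": 0.4, "S. mahagoni": 0.3, "C. odorata": 0.3}
genus_of = {"S. macrophylla": "Swietenia", "S. mahagoni": "Swietenia",
            "C. odorata": "Cedrela"}
genus_probs = genus_probabilities(species_probs, genus_of)
top_genus = max(genus_probs, key=genus_probs.get)
```

No single species reaches 0.5 here, yet the pooled Swietenia score of 0.7 gives a confident genus call, mirroring how confusions between congeneric species need not hurt genus-level accuracy.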
Systems Epidemiology: What’s in a Name?
Dammann, O.; Gray, P.; Gressens, P.; Wolkenhauer, O.; Leviton, A.
2014-01-01
Systems biology is an interdisciplinary effort to integrate molecular, cellular, tissue, organ, and organism levels of function into computational models that facilitate the identification of general principles. Systems medicine adds a disease focus. Systems epidemiology adds yet another level consisting of antecedents that might contribute to the disease process in populations. In etiologic and prevention research, systems-type thinking about multiple levels of causation will allow epidemiologists to identify contributors to disease at multiple levels as well as their interactions. In public health, systems epidemiology will contribute to the improvement of syndromic surveillance methods. We encourage the creation of computational simulation models that integrate information about disease etiology, pathogenetic data, and the expertise of investigators from different disciplines. PMID:25598870
Mental Mechanisms for Topics Identification
2014-01-01
Topics identification (TI) is the process that consists in determining the main themes present in natural language documents. The current TI modeling paradigm aims at acquiring semantic information from statistic properties of large text datasets. We investigate the mental mechanisms responsible for the identification of topics in a single document given existing knowledge. Our main hypothesis is that topics are the result of accumulated neural activation of loosely organized information stored in long-term memory (LTM). We experimentally tested our hypothesis with a computational model that simulates LTM activation. The model assumes activation decay as an unavoidable phenomenon originating from the bioelectric nature of neural systems. Since decay should negatively affect the quality of topics, the model predicts the presence of short-term memory (STM) to keep the focus of attention on a few words, with the expected outcome of restoring quality to a baseline level. Our experiments measured topics quality of over 300 documents with various decay rates and STM capacity. Our results showed that accumulated activation of loosely organized information was an effective mental computational commodity to identify topics. It was furthermore confirmed that rapid decay is detrimental to topics quality but that limited capacity STM restores quality to a baseline level, even exceeding it slightly. PMID:24744775
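The hypothesized mechanism, accumulated LTM activation with multiplicative decay, plus a small STM that keeps recent words in focus, can be caricatured in a few lines. This is a toy re-expression of the stated hypothesis, not the authors' model; the association table, decay rate, and STM capacity are all illustrative assumptions:

```python
def identify_topics(words, associations, decay=0.9, stm_capacity=4, top_n=2):
    """Toy topic identification: each word spreads activation to associated
    LTM concepts, all activation decays multiplicatively at every step, and
    an STM of the last few words re-activates their concepts to offset decay.
    Returns the top_n most activated concepts as the document's topics."""
    activation = {}
    stm = []
    for word in words:
        for concept in activation:          # bioelectric-style decay
            activation[concept] *= decay
        stm = (stm + [word])[-stm_capacity:]  # limited-capacity focus
        for w in stm:
            for concept in associations.get(w, []):
                activation[concept] = activation.get(concept, 0.0) + 1.0
    return sorted(activation, key=activation.get, reverse=True)[:top_n]

# Hypothetical LTM associations and a four-word "document".
associations = {"dog": ["animal"], "cat": ["animal"], "car": ["vehicle"]}
topics = identify_topics(["dog", "cat", "dog", "car"], associations, top_n=1)
```

Repeated activation of "animal" dominates despite decay, while a single late "car" cannot displace it, the qualitative behavior the paper attributes to STM restoring topic quality.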
Code of Federal Regulations, 2014 CFR
2014-10-01
... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...
Code of Federal Regulations, 2011 CFR
2011-10-01
... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...
Code of Federal Regulations, 2012 CFR
2012-10-01
... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...
Code of Federal Regulations, 2010 CFR
2010-10-01
... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...
Code of Federal Regulations, 2013 CFR
2013-10-01
... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...
Basic concepts and development of an all-purpose computer interface for ROC/FROC observer study.
Shiraishi, Junji; Fukuoka, Daisuke; Hara, Takeshi; Abe, Hiroyuki
2013-01-01
In this study, we first investigated various aspects of the requirements for a computer interface employed in receiver operating characteristic (ROC) and free-response ROC (FROC) observer studies, which involve digital images and ratings given by observers (radiologists). We then developed an all-purpose computer interface for these observer performance studies that takes these aspects into account. Observer studies can be classified into three paradigms: one rating per case without identification of a signal location, one rating per case with identification of a signal location, and multiple ratings per case with identification of signal locations. For these paradigms, display modes on the computer interface can support single or multiple views of a static image, continuous viewing of cascaded images (e.g., CT, MRI), and dynamic viewing of movies (e.g., DSA, ultrasound). Various functions in these display modes, including windowing (contrast/level), magnification, and annotation, need to be selected by the experimenter according to the purpose of the research. In addition, the rules of judgment for distinguishing between true positives and false positives are an important factor in estimating diagnostic accuracy in an observer study. Taking into account all aspects required for various observer studies, we developed a computer interface that runs on the Windows operating system. The interface requires experimenters to have sufficient knowledge of ROC/FROC observer studies, but can be used for any observer-study purpose. It will be distributed publicly in the near future.
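The ratings collected through such an interface are typically summarized by the area under the ROC curve, which for ordinal ratings equals the Wilcoxon-Mann-Whitney statistic. A minimal sketch (the rating values are invented):

```python
def rating_auc(signal_ratings, noise_ratings):
    """Nonparametric AUC: the probability that a randomly chosen signal case
    receives a higher confidence rating than a randomly chosen noise case,
    counting ties as 1/2. Equals the area under the empirical ROC curve."""
    wins = 0.0
    for s in signal_ratings:
        for n in noise_ratings:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_ratings) * len(noise_ratings))

# Hypothetical 5-point confidence ratings from one reader.
auc = rating_auc([5, 4, 4], [2, 4, 1])
```

Perfectly separated ratings give an AUC of 1.0; the overlap at rating 4 in this example pulls the estimate down to 8/9.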
NASA Technical Reports Server (NTRS)
Gatski, T. B.
1979-01-01
The sound due to the large-scale (wavelike) structure in an infinite free turbulent shear flow is examined. Specifically, a computational study of a plane shear layer is presented, which accounts, by way of triple decomposition of the flow field variables, for three distinct component scales of motion (mean, wave, turbulent), and from which the sound - due to the large-scale wavelike structure - in the acoustic field can be isolated by a simple phase average. The computational approach has allowed for the identification of a specific noise production mechanism, viz the wave-induced stress, and has indicated the effect of coherent structure amplitude and growth and decay characteristics on noise levels produced in the acoustic far field.
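The triple decomposition with phase averaging used above can be sketched on a discrete signal: the mean is the global average, the wave component is the phase average minus the mean, and the residual is the "turbulent" part. This is a generic illustration with an invented periodic signal, not the paper's flow-field computation:

```python
def triple_decompose(signal, period):
    """Split signal[t] into mean + wave(t mod period) + turbulence(t).
    The wave component is the phase average over the known period, with
    the mean removed; the residual plays the role of turbulence."""
    n = len(signal)
    mean = sum(signal) / n
    sums = [0.0] * period
    counts = [0] * period
    for t, v in enumerate(signal):
        sums[t % period] += v
        counts[t % period] += 1
    wave = [sums[i] / counts[i] - mean for i in range(period)]
    turb = [signal[t] - mean - wave[t % period] for t in range(n)]
    return mean, wave, turb

# A clean period-4 oscillation about a base level of 10 decomposes exactly.
signal = [10.0 + [1.0, 0.0, -1.0, 0.0][t % 4] for t in range(20)]
mean, wave, turb = triple_decompose(signal, period=4)
```

For this noise-free example the residual vanishes; with real data the phase average isolates the large-scale wavelike structure whose sound field the study examines.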
Code of Federal Regulations, 2010 CFR
2010-10-01
... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...
Cloud computing approaches to accelerate drug discovery value chain.
Garg, Vibhav; Arora, Suchir; Gupta, Chitra
2011-12-01
Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data on the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure, while managing the varying degrees of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SaaS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as cloud computing, is now transforming drug discovery research. The integration of cloud computing with parallel computing is also expanding its footprint in the life sciences community. Speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be best placed to manage drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Zakharova, Irina G.; Zagursky, Dmitry Yu.
2017-01-01
Using an experiment with thin paper layers and computer simulation, we demonstrate the principal limitations of standard time-domain spectroscopy (TDS) based on a broadband THz pulse for the detection and identification of a substance placed inside a disordered structure. We demonstrate the spectrum broadening of both transmitted and reflected pulses due to the cascade mechanism of high energy level excitation, considering a three-energy-level medium as an example. The pulse spectrum in the range of high frequencies remains undisturbed in the presence of a disordered structure. To avoid false detection of absorption frequencies, we apply the spectral dynamics analysis method (SDA method) together with certain integral correlation criteria (ICC). PMID:29186849
Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.
Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie
2017-01-01
Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly medium consumption, which can be used not only for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR-based yeast identification were comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential for automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs. We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.
A service-oriented data access control model
NASA Astrophysics Data System (ADS)
Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali
2017-01-01
The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. For complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models, this paper proposes a service-oriented access control model built on the mandatory access control model. Regarding system services as subjects and database data as objects, the model defines access levels and access identifications for subjects and objects, and ensures that system services access databases securely.
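The access decision described above can be sketched as a simple dominance check: a service may touch data only if its access level dominates the data's level and it holds every required access identification. The subject/object names, levels, and label sets below are invented for illustration, not taken from the paper's model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subject:
    """A system service acting as the access-control subject."""
    name: str
    level: int            # clearance level of the service
    labels: frozenset     # access identifications the service holds

@dataclass(frozen=True)
class DataObject:
    """A database table or record acting as the access-control object."""
    name: str
    level: int            # sensitivity level of the data
    labels: frozenset     # access identifications the data requires

def may_access(subject: Subject, obj: DataObject) -> bool:
    """Mandatory-access-control style check: the service's level must
    dominate the object's level, and the service must hold every
    access identification the object requires."""
    return subject.level >= obj.level and obj.labels <= subject.labels

# Invented example services and data
billing = Subject("billing-service", level=2, labels=frozenset({"finance"}))
ledger = DataObject("ledger", level=2, labels=frozenset({"finance"}))
payroll = DataObject("payroll", level=3, labels=frozenset({"finance", "hr"}))

print(may_access(billing, ledger))   # True
print(may_access(billing, payroll))  # False
```

The frozen dataclasses keep subject and object definitions immutable, so a service cannot alter its own clearance at run time.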
Software architecture of the III/FBI segment of the FBI's integrated automated identification system
NASA Astrophysics Data System (ADS)
Booker, Brian T.
1997-02-01
This paper will describe the software architecture of the Interstate Identification Index (III/FBI) Segment of the FBI's Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is currently under development, with deployment to begin in 1998. III/FBI will provide the repository of criminal history and photographs for criminal subjects, as well as identification data for military and civilian federal employees. Services provided by III/FBI include maintenance of the criminal and civil data, subject search of the criminal and civil data, and response generation services for IAFIS. III/FBI software will be comprised of both COTS and an estimated 250,000 lines of developed C code. This paper will describe the following: (1) the high-level requirements of the III/FBI software; (2) the decomposition of the III/FBI software into Computer Software Configuration Items (CSCIs); (3) the top-level design of the III/FBI CSCIs; and (4) the relationships among the developed CSCIs and the COTS products that will comprise the III/FBI software.
Merlyn J. Paulson
1979-01-01
This paper outlines a project level process (V.I.S.) which utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...
Computer-aided drug discovery.
Bajorath, Jürgen
2015-01-01
Computational approaches are an integral part of interdisciplinary drug discovery research. Understanding the science behind computational tools, their opportunities, and limitations is essential to make a true impact on drug discovery at different levels. If applied in a scientifically meaningful way, computational methods improve the ability to identify and evaluate potential drug molecules, but there remain weaknesses in the methods that preclude naïve applications. Herein, current trends in computer-aided drug discovery are reviewed, and selected computational areas are discussed. Approaches are highlighted that aid in the identification and optimization of new drug candidates. Emphasis is put on the presentation and discussion of computational concepts and methods, rather than case studies or application examples. As such, this contribution aims to provide an overview of the current methodological spectrum of computational drug discovery for a broad audience.
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.
Vanhaebost, Jessica; Ducrot, Kewin; de Froidmont, Sébastien; Scarpelli, Maria Pia; Egger, Coraline; Baumann, Pia; Schmit, Gregory; Grabherr, Silke; Palmiere, Cristian
2017-02-01
The aim of this study was to assess whether the identification of pathological myocardial enhancement at multiphase postmortem computed tomography angiography was correlated with increased levels of troponin T and I in postmortem serum from femoral blood as well as morphological findings of myocardial ischemia. We further aimed to investigate whether autopsy cases characterized by increased troponin T and I concentrations as well as morphological findings of myocardial ischemia were also characterized by pathological myocardial enhancement at multiphase postmortem computed tomography angiography. Two different approaches were used. In one, 40 forensic autopsy cases that had pathological enhancement of the myocardium (mean Hounsfield units ≥95) observed at postmortem angiography were retrospectively selected. In the second approach, 40 forensic autopsy cases that had a cause of death attributed to acute myocardial ischemia were retrospectively selected. The preliminary results seem to indicate that the identification of a pathological enhancement of the myocardium at postmortem angiography is associated with the presence of increased levels of cardiac troponins in postmortem serum and morphological findings of ischemia. Analogously, a pathological enhancement of the myocardium at postmortem angiography can be retrospectively found in the great majority of autopsy cases characterized by increased cardiac troponin levels in postmortem serum and morphological findings of myocardial ischemia. Multiphase postmortem computed tomography angiography is a useful tool in the postmortem setting for investigating ischemically damaged myocardium.
Area estimation of crops by digital analysis of Landsat data
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Hixson, M. M.; Davis, B. J.
1978-01-01
The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identification data obtained from the Landsat identifications; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels.
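The county-level area estimation step reduces to scaling classified-pixel counts by the ground area each pixel represents. A minimal sketch, with invented pixel counts and a nominal Landsat MSS pixel area assumed to be 0.45 ha (roughly 57 m x 79 m):

```python
from collections import Counter

# Hypothetical per-pixel classification output for one county.
classified = ["wheat"] * 5000 + ["corn"] * 3000 + ["other"] * 2000

PIXEL_AREA_HA = 0.45  # assumed nominal Landsat MSS pixel area in hectares

def crop_areas(pixels, pixel_area_ha):
    """Estimate crop areas by counting classified pixels and scaling
    by the ground area a single pixel represents."""
    counts = Counter(pixels)
    return {crop: n * pixel_area_ha for crop, n in counts.items()}

areas = crop_areas(classified, PIXEL_AREA_HA)
print(areas["wheat"])  # 2250.0 (hectares)
```

Because every pixel in the county is classified, this systematic full-coverage count is what gives the precise county, district, and state estimates the abstract describes.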
A Teaching Exercise for the Identification of Bacteria Using An Interactive Computer Program.
ERIC Educational Resources Information Center
Bryant, Trevor N.; Smith, John E.
1979-01-01
Describes an interactive Fortran computer program which provides an exercise in the identification of bacteria. Provides a way of enhancing a student's approach to systematic bacteriology and numerical identification procedures. (Author/MA)
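The original program is interactive Fortran; as a rough illustration of the numerical identification idea it teaches, here is a Python sketch that scores taxa against a character table. The table of taxa and test results is invented for illustration and is not real diagnostic data.

```python
# Toy character table: taxon -> expected results for three tests.
# Taxa and test outcomes are illustrative only.
TABLE = {
    "Escherichia coli":       {"gram": "-", "oxidase": "-", "lactose": "+"},
    "Pseudomonas aeruginosa": {"gram": "-", "oxidase": "+", "lactose": "-"},
    "Staphylococcus aureus":  {"gram": "+", "oxidase": "-", "lactose": "+"},
}

def identify(results):
    """Score each taxon by the fraction of test results it matches,
    in the spirit of numerical identification, and return the best."""
    scores = {
        taxon: sum(results[t] == v for t, v in expected.items()) / len(expected)
        for taxon, expected in TABLE.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

taxon, score = identify({"gram": "-", "oxidase": "-", "lactose": "+"})
print(taxon, score)  # Escherichia coli 1.0
```

A real numerical identification scheme would weight tests by their reliability; the uniform fraction here is the simplest possible scoring rule.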
Peng, Fei; Li, Jiao-ting; Long, Min
2015-03-01
To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is extracted for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method performs well compared with some existing methods based only on statistical or other features. The method has great potential for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
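The paper classifies with LIBSVM over 31-dimensional features; as a hedged stand-in, the sketch below uses a nearest-centroid rule over invented 3-dimensional feature vectors just to show the train/classify shape of such a scheme, not the actual SVM or the actual features.

```python
# Nearest-centroid stand-in for the paper's LIBSVM classifier.
# Feature values (3-D here, 31-D in the paper) are invented.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(natural, cg):
    """One centroid per class."""
    return {"natural": centroid(natural), "cg": centroid(cg)}

def classify(model, x):
    """Assign x to the class with the nearest centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

natural = [[0.9, 0.1, 0.5], [0.8, 0.2, 0.6]]   # invented natural-image features
cg      = [[0.2, 0.9, 0.1], [0.1, 0.8, 0.2]]   # invented CG features
model = train(natural, cg)
print(classify(model, [0.85, 0.15, 0.55]))  # natural
```

A margin-maximizing SVM would draw a better boundary on real, overlapping feature distributions; the centroid rule only illustrates the pipeline.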
ERIC Educational Resources Information Center
Stagg, Bethan C.; Donkin, Maria E.
2017-01-01
We investigated usability of mobile computers and field guide books with adult botanical novices, for the identification of wildflowers and deciduous trees in winter. Identification accuracy was significantly higher for wildflowers using a mobile computer app than field guide books but significantly lower for deciduous trees. User preference…
Outcomes Assessment of Computer-Assisted Behavioral Objectives for Accounting Graduates.
ERIC Educational Resources Information Center
Moore, John W.; Mitchem, Cheryl E.
1997-01-01
Presents behavioral objectives for accounting students and an outcomes assessment plan with five steps: (1) identification and definition of student competencies; (2) selection of valid instruments; (3) integration of assessment and instruction; (4) determination of levels of assessment; and (5) attribution of improvements to the program. (SK)
Tanabe, Akifumi S; Toju, Hirokazu
2013-01-01
Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: in one, query sequences were available in the corresponding reference sequence databases; in the other, they were not. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequence in a reference database (i.e., the BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus- or species-level identification in all the barcode loci examined. Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus- or species-level identification in biodiversity research. PMID:24204702
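The 1-NN assignment rule described above can be sketched in a few lines: find the most similar reference sequence and copy its taxon to the query. The reference sequences, taxa, and Hamming-distance similarity below are illustrative stand-ins for a real database and BLAST scoring.

```python
# Hypothetical reference database: sequence -> taxon.
REFERENCE = {
    "ACGTACGTAC": "Genus alpha",
    "AGGTTCGTTA": "Genus beta",
    "TTGCACGGAT": "Genus gamma",
}

def hamming(a, b):
    """Mismatch count between equal-length sequences (a crude stand-in
    for the BLAST similarity used in practice)."""
    return sum(x != y for x, y in zip(a, b))

def one_nn(query):
    """1-NN assignment: copy the taxon of the most similar reference."""
    best = min(REFERENCE, key=lambda ref: hamming(query, ref))
    return REFERENCE[best]

print(one_nn("ACGTACGTAA"))  # Genus alpha
```

The failure mode the benchmark exposes is visible even here: if the query's true taxon has no sequence in `REFERENCE`, the rule still confidently returns the nearest wrong taxon, which is why QCauto's more conservative behavior matters for incomplete databases.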
Sensor network based vehicle classification and license plate identification system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frigo, Janette Rose; Brennan, Sean M; Rosten, Edward J
Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data- and compute-intensive sensors such as video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternate, more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units and sensor data acquisition systems.
Advanced Health Management of a Brushless Direct Current Motor/Controller
NASA Technical Reports Server (NTRS)
Pickett, R. D.
2003-01-01
This effort demonstrates that health management can be taken to the component level for electromechanical systems. The same techniques can be applied to take any health management system to the component level, based on the practicality of the implementation for that particular system. This effort allows various logic schemes to be implemented for the identification and management of failures. By taking health management to the component level, integrated vehicle health management systems can be enhanced by protecting box-level avionics from being shut down in order to isolate a failed computer.
Pongor, Lőrinc S; Vera, Roberto; Ligeti, Balázs
2014-01-01
Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance speed against sensitivity, and as a result species- or strain-level identification is often inaccurate and low-abundance pathogens can sometimes be missed. We have developed Taxoner, an open source taxon assignment pipeline that includes a fast aligner (e.g. Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than, BLAST, but requires two orders of magnitude less running time, meaning that it can be run on desktop or laptop computers. Taxoner is slower than approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain-level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner. PMID:25077800
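The abstract does not spell out Taxoner's assignment rule; a common approach for reads that align equally well to several references is a lowest-common-ancestor (LCA) assignment, sketched here with invented lineages.

```python
# Hypothetical root-to-species lineages for three reference genomes.
LINEAGE = {
    "E. coli K-12": ["Bacteria", "Proteobacteria", "Escherichia", "E. coli"],
    "E. coli O157": ["Bacteria", "Proteobacteria", "Escherichia", "E. coli"],
    "S. enterica":  ["Bacteria", "Proteobacteria", "Salmonella", "S. enterica"],
}

def lowest_common_ancestor(hits):
    """Assign a read hitting several references to the deepest taxon
    shared by all of their lineages."""
    lineages = [LINEAGE[h] for h in hits]
    lca = None
    for ranks in zip(*lineages):       # walk root -> leaf in lockstep
        if len(set(ranks)) == 1:
            lca = ranks[0]             # still unanimous at this depth
        else:
            break
    return lca

print(lowest_common_ancestor(["E. coli K-12", "E. coli O157"]))  # E. coli
print(lowest_common_ancestor(["E. coli K-12", "S. enterica"]))   # Proteobacteria
```

The trade-off the abstract describes falls out of the reference set: a comprehensive database lets ambiguous reads resolve to deeper (strain-level) nodes, while a small marker database forces more reads up toward coarse ancestors.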
Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf
2018-06-18
In forensic odontology the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate the identification of unknown persons by comparing antemortem and postmortem PRs using computer vision. The study includes 43 467 PRs from 24 545 patients (46 % female/54 % male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) across the whole database. Of 40 randomly selected persons, 34 (85 %) could be reliably identified by corresponding PR matching points between an existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 matching points for a successful identification between two different PRs of the same person, and a maximum of 12 corresponding matching points between non-identical persons in the database; 12 matching points are therefore the threshold for reliable assignment. An automatic PR system using computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by its fast and reliable identification of persons by PR, and it is suitable even if dental characteristics were removed or added in the past. The system appears robust for large amounts of data. Key points: computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification; the method can find identical matching partners among huge datasets (big data) in a short computing time; the identification method is suitable even if dental characteristics were removed or added. Heinrich A, Güttler F, Wendt S et al. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744.
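The reported 12-point ceiling for non-identical persons suggests a simple decision rule: accept the best-matching database entry only if its number of SURF matching points exceeds that ceiling. A sketch with invented match counts (the feature extraction and matching themselves would be done in Matlab or OpenCV):

```python
# The study reports at most 12 matching points between PRs of
# different persons, so strictly more than 12 indicates identity.
NON_MATCH_MAX = 12

def same_person(n_matches: int) -> bool:
    """True when the match count clears the non-identical-person ceiling."""
    return n_matches > NON_MATCH_MAX

def best_candidate(query_matches: dict):
    """Pick the database entry with the most matching points, if it
    clears the threshold. Entry names and counts are invented."""
    name = max(query_matches, key=query_matches.get)
    return name if same_person(query_matches[name]) else None

print(best_candidate({"patient_A": 259, "patient_B": 7}))  # patient_A
print(best_candidate({"patient_C": 9, "patient_D": 5}))    # None
```

Returning `None` when no entry clears 12 points is what keeps the system from forcing a false identification when the person is simply not in the database.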
Software For Computer-Security Audits
NASA Technical Reports Server (NTRS)
Arndt, Kate; Lonsford, Emily
1994-01-01
Information relevant to potential breaches of security gathered efficiently. Automated Auditing Tools for VAX/VMS program includes following automated software tools performing noted tasks: Privileged ID Identification, program identifies users and their privileges to circumvent existing computer security measures; Critical File Protection, critical files not properly protected identified; Inactive ID Identification, identifications of users no longer in use found; Password Lifetime Review, maximum lifetimes of passwords of all identifications determined; and Password Length Review, minimum allowed length of passwords of all identifications determined. Written in DEC VAX DCL language.
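As a rough modern illustration (the original tools are written in DEC VAX DCL, not Python), two of the five checks, Inactive ID Identification and Password Lifetime Review, might look like this over invented user records:

```python
from datetime import date, timedelta

# Hypothetical user records: (username, last login date, password age in days).
USERS = [
    ("alice", date(2024, 5, 1), 30),
    ("backup_op", date(2021, 2, 10), 900),
    ("carol", date(2024, 4, 20), 200),
]

INACTIVITY_LIMIT = timedelta(days=365)  # assumed policy values,
MAX_PASSWORD_AGE = 180                  # not from the original tools

def audit(users, today):
    """Flag inactive identifications and stale passwords, in the spirit
    of the Inactive ID and Password Lifetime Review checks."""
    findings = []
    for name, last_login, pw_age in users:
        if today - last_login > INACTIVITY_LIMIT:
            findings.append((name, "inactive id"))
        if pw_age > MAX_PASSWORD_AGE:
            findings.append((name, "stale password"))
    return findings

print(audit(USERS, date(2024, 6, 1)))
```

On a real system the records would come from the OS's user authorization database rather than a hard-coded list.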
NASA Astrophysics Data System (ADS)
Bu, Haifeng; Wang, Dansheng; Zhou, Pin; Zhu, Hongping
2018-04-01
An improved wavelet-Galerkin (IWG) method based on the Daubechies wavelet is proposed for reconstructing the dynamic responses of shear structures. The proposed method flexibly manages wavelet resolution level according to excitation, thereby avoiding the weakness of the wavelet-Galerkin multiresolution analysis (WGMA) method in terms of resolution and the requirement of external excitation. IWG is implemented by this work in certain case studies, involving single- and n-degree-of-freedom frame structures subjected to a determined discrete excitation. Results demonstrate that IWG performs better than WGMA in terms of accuracy and computation efficiency. Furthermore, a new method for parameter identification based on IWG and an optimization algorithm are also developed for shear frame structures, and a simultaneous identification of structural parameters and excitation is implemented. Numerical results demonstrate that the proposed identification method is effective for shear frame structures.
Huynh, Thien J; Flaherty, Matthew L; Gladstone, David J; Broderick, Joseph P; Demchuk, Andrew M; Dowlatshahi, Dar; Meretoja, Atte; Davis, Stephen M; Mitchell, Peter J; Tomlinson, George A; Chenkin, Jordan; Chia, Tze L; Symons, Sean P; Aviv, Richard I
2014-01-01
Rapid, accurate, and reliable identification of the computed tomography angiography spot sign is required to identify patients with intracerebral hemorrhage for trials of acute hemostatic therapy. We sought to assess the accuracy and interobserver agreement for spot sign identification. A total of 131 neurology, emergency medicine, and neuroradiology staff and fellows underwent imaging certification for spot sign identification before enrolling patients in 3 trials targeting spot-positive intracerebral hemorrhage for hemostatic intervention (STOP-IT, SPOTLIGHT, STOP-AUST). Ten intracerebral hemorrhage cases (spot-positive/negative ratio, 1:1) were presented for evaluation of spot sign presence, number, and mimics. True spot positivity was determined by consensus of 2 experienced neuroradiologists. Diagnostic performance, agreement, and differences by training level were analyzed. Mean accuracy, sensitivity, and specificity for spot sign identification were 87%, 78%, and 96%, respectively. Overall sensitivity was lower than specificity (P<0.001) because of true spot signs incorrectly perceived as spot mimics. Interobserver agreement for spot sign presence was moderate (k=0.60). When true spots were correctly identified, 81% correctly identified the presence of single or multiple spots. Median time needed to evaluate the presence of a spot sign was 1.9 minutes (interquartile range, 1.2-3.1 minutes). Diagnostic performance, interobserver agreement, and time needed for spot sign evaluation were similar among staff physicians and fellows. Accuracy for spot identification is high with opportunity for improvement in spot interpretation sensitivity and interobserver agreement particularly through greater reliance on computed tomography angiography source data and awareness of limitations of multiplanar images. Further prospective study is needed.
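The reported figures (87 % accuracy, 78 % sensitivity, 96 % specificity) follow from a 2x2 confusion matrix against the expert consensus. The sketch below computes them, plus Cohen's kappa, from illustrative counts chosen to reproduce those percentages; the study's actual per-reader counts are not given here.

```python
def diagnostics(tp, fn, fp, tn):
    """Accuracy, sensitivity, specificity and Cohen's kappa from a
    2x2 confusion matrix (reader calls vs. consensus reference)."""
    n = tp + fn + fp + tn
    acc = (tp + tn) / n
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # Expected chance agreement for Cohen's kappa
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_yes + p_no
    kappa = (acc - pe) / (1 - pe)
    return acc, sens, spec, kappa

# Illustrative counts chosen to match the reported percentages.
acc, sens, spec, kappa = diagnostics(tp=39, fn=11, fp=2, tn=48)
print(round(acc, 2), round(sens, 2), round(spec, 2))  # 0.87 0.78 0.96
```

The asymmetry the authors note is visible in the matrix: false negatives (true spots read as mimics) dominate false positives, which is exactly what pulls sensitivity below specificity.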
Optimal Demand Execution Strategy for the Defense Logistics Agency
2014-12-01
PLT: Production Lead-Time; PTO: Paid Time Off; FSC: Federal Stock/Supply Class; NIIN: National Item Identification Number; S&OP: Sales and Operations … The new software sets a recommended inventory level that balances the risk of stock-out with holding-cost expenses and buys to the optimal level instead … when a PR should be generated. A Time Phased Inventory Plan (TPIP) is computed daily that accounts for the lead-time and current stock on the shelf.
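The "balance the risk of stock-out with holding cost" language suggests a newsvendor-style critical-ratio rule. The DLA software's actual formula is not given in the source, so the sketch below is only an assumption-laden illustration: it sets the level at the critical-ratio quantile of a normally distributed lead-time demand, with invented cost and demand parameters.

```python
from statistics import NormalDist

def recommended_level(mean_lt_demand, sd_lt_demand, stockout_cost, holding_cost):
    """Newsvendor-style inventory level: stock up to the quantile of
    lead-time demand given by the critical ratio of stock-out cost to
    total cost. (An assumption, not the DLA software's actual rule.)"""
    critical_ratio = stockout_cost / (stockout_cost + holding_cost)
    return NormalDist(mean_lt_demand, sd_lt_demand).inv_cdf(critical_ratio)

# Invented parameters: lead-time demand ~ N(100, 20), stock-out 9x as
# costly as holding, giving a 0.9 critical ratio.
level = recommended_level(mean_lt_demand=100, sd_lt_demand=20,
                          stockout_cost=9, holding_cost=1)
print(round(level, 1))  # 125.6
```

Raising the stock-out cost pushes the critical ratio toward 1 and the level toward the demand distribution's upper tail, which is the balancing behavior the abstract describes.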
Rule-driven defect detection in CT images of hardwood logs
Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt
2000-01-01
This paper deals with automated detection and identification of internal defects in hardwood logs using computed tomography (CT) images. We have developed a system that employs artificial neural networks to perform tentative classification of logs on a pixel-by-pixel basis. This approach achieves a high level of classification accuracy for several hardwood species (...
ERIC Educational Resources Information Center
Wefer, Stephen H.; Anderson, O. Roger
2008-01-01
Bioinformatics, merging biological data with computer science, is increasingly incorporated into school curricula at all levels. This case study of 10 secondary school students highlights student individual differences (especially the way they processed information and integrated procedural and analytical thought) and summarizes a variety of…
NASA Astrophysics Data System (ADS)
Shul'ga, N. F.; Syshchenko, V. V.; Tarnovsky, A. I.; Solovyev, I. I.; Isupov, A. Yu.
2018-01-01
The motion of fast electrons through a crystal during axial channeling can be either regular or chaotic. Dynamical chaos in quantum systems manifests itself both in the statistical properties of energy spectra and in the morphology of the wave functions of individual stationary states. In this report, we investigate the axial channeling of high and low energy electrons and positrons near the [100] direction of a silicon crystal. This case is particularly interesting because the chaotic motion domain occupies only a small part of the phase space for channeling electrons, whereas the motion of channeling positrons is substantially chaotic for almost all initial conditions. The energy levels of transverse motion, as well as the wave functions of the stationary states, have been computed numerically. Group theory methods were used for the classification of the computed eigenfunctions and the identification of the non-degenerate and doubly degenerate energy levels. The channeling radiation spectrum for the low energy electrons has also been computed.
Shteynberg, David; Deutsch, Eric W.; Lam, Henry; Eng, Jimmy K.; Sun, Zhi; Tasman, Natalie; Mendoza, Luis; Moritz, Robert L.; Aebersold, Ruedi; Nesvizhskii, Alexey I.
2011-01-01
The combination of tandem mass spectrometry and sequence database searching is the method of choice for the identification of peptides and the mapping of proteomes. Over the last several years, the volume of data generated in proteomic studies has increased dramatically, which challenges the computational approaches previously developed for these data. Furthermore, a multitude of search engines have been developed that identify different, overlapping subsets of the sample peptides from a particular set of tandem mass spectrometry spectra. We present iProphet, the new addition to the widely used open-source suite of proteomic data analysis tools Trans-Proteomics Pipeline. Applied in tandem with PeptideProphet, it provides more accurate representation of the multilevel nature of shotgun proteomic data. iProphet combines the evidence from multiple identifications of the same peptide sequences across different spectra, experiments, precursor ion charge states, and modified states. It also allows accurate and effective integration of the results from multiple database search engines applied to the same data. The use of iProphet in the Trans-Proteomics Pipeline increases the number of correctly identified peptides at a constant false discovery rate as compared with both PeptideProphet and another state-of-the-art tool Percolator. As the main outcome, iProphet permits the calculation of accurate posterior probabilities and false discovery rate estimates at the level of sequence identical peptide identifications, which in turn leads to more accurate probability estimates at the protein level. Fully integrated with the Trans-Proteomics Pipeline, it supports all commonly used MS instruments, search engines, and computer platforms. 
The performance of iProphet is demonstrated on two publicly available data sets: data from a human whole cell lysate proteome profiling experiment representative of typical proteomic data sets, and from a set of Streptococcus pyogenes experiments more representative of organism-specific composite data sets. PMID:21876204
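The core idea of combining independent evidence for the same peptide can be sketched as follows. This is a simplified illustration of probability combination and FDR estimation from posterior probabilities, not iProphet's actual model (which also accounts for charge states, modified states and sibling experiments):

```python
# Illustrative combination of peptide-level posterior probabilities from
# multiple search engines, under the simplifying (hypothetical) assumption
# that each engine's posterior is an independent estimate of correctness.

def combine_probabilities(posteriors):
    """Probability that at least one engine's identification is correct,
    treating the per-engine posteriors as independent."""
    p_all_wrong = 1.0
    for p in posteriors:
        p_all_wrong *= (1.0 - p)
    return 1.0 - p_all_wrong

def fdr_at_threshold(probs, threshold):
    """Estimated false discovery rate for identifications accepted at or
    above a posterior-probability threshold: mean of (1 - p) over accepted."""
    accepted = [p for p in probs if p >= threshold]
    if not accepted:
        return 0.0
    return sum(1.0 - p for p in accepted) / len(accepted)

# The same peptide identified by three engines with moderate confidence:
combined = combine_probabilities([0.7, 0.8, 0.6])   # 1 - 0.3*0.2*0.4 = 0.976
```

Three individually uncertain identifications of the same sequence combine into a much higher confidence, which is the intuition behind combining evidence across search engines.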
Probing Majorana modes in the tunneling spectra of a resonant level.
Korytár, R; Schmitteckert, P
2013-11-27
Unambiguous identification of Majorana physics presents an outstanding problem whose solution could render topological quantum computing feasible. We develop a numerical approach to treat finite-size superconducting chains supporting Majorana modes, which is based on iterative application of a two-site Bogoliubov transformation. We demonstrate the applicability of the method by studying a resonant level attached to the superconductor subject to external perturbations. In the topological phase, we show that the spectrum of a single resonant level allows us to distinguish peaks coming from Majorana physics from the Kondo resonance.
Progress and challenges in bioinformatics approaches for enhancer identification
Kleftogiannis, Dimitrios; Kalnis, Panos
2016-01-01
Enhancers are cis-acting DNA elements that play critical roles in distal regulation of gene expression. Identifying enhancers is an important step for understanding distinct gene expression programs that may reflect normal and pathogenic cellular conditions. Experimental identification of enhancers is constrained by the set of conditions used in the experiment. This requires multiple experiments to identify enhancers, as they can be active under specific cellular conditions but not in different cell types/tissues or cellular states. This has opened prospects for computational prediction methods that can be used for high-throughput identification of putative enhancers to complement experimental approaches. Potential functions and properties of predicted enhancers have been catalogued and summarized in several enhancer-oriented databases. Because the current methods for the computational prediction of enhancers produce significantly different enhancer predictions, it will be beneficial for the research community to have an overview of the strategies and solutions developed in this field. In this review, we focus on the identification and analysis of enhancers by bioinformatics approaches. First, we describe a general framework for computational identification of enhancers, present relevant data types and discuss possible computational solutions. Next, we cover over 30 existing computational enhancer identification methods that were developed since 2000. Our review highlights advantages, limitations and potentials, while suggesting pragmatic guidelines for development of more efficient computational enhancer prediction methods. Finally, we discuss challenges and open problems of this topic, which require further consideration. PMID:26634919
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within them. This paper describes the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of algorithms for dental 2D imagery.
Current algorithmic solutions for peptide-based proteomics data generation and identification.
Hoopmann, Michael R; Moritz, Robert L
2013-02-01
Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rees, Brian G.
These are slides from a presentation. The identiFINDER provides information on radiation levels. It can automatically identify isotopes in its library. It can save spectra for transfer to a computer, and has a 4-8 hour battery life. The following is covered: an overview, operating modes, getting started, finder mode, search, identification mode, dose & rate, warning & alarm, options (ultra LGH), options (identifinder2), and general procedure.
ERIC Educational Resources Information Center
Hammonds, S. J.
1990-01-01
A technique for the numerical identification of bacteria using normalized likelihoods calculated from a probabilistic database is described, and the principles of the technique are explained. The listing of the computer program is included. Specimen results from the program, and examples of how they should be interpreted, are given. (KR)
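The numerical identification technique described above can be sketched as follows: for each taxon, multiply the database probabilities of the observed test results, then normalize across taxa. The probability database below is hypothetical, for illustration only:

```python
# Sketch of numerical bacterial identification by normalized likelihoods.
# DB holds P(positive result | taxon) for each biochemical test; the
# values and taxa here are made up, not taken from the paper.

DB = {
    "E. coli":       {"lactose": 0.95, "indole": 0.98, "citrate": 0.05},
    "K. pneumoniae": {"lactose": 0.98, "indole": 0.05, "citrate": 0.95},
    "P. mirabilis":  {"lactose": 0.02, "indole": 0.02, "citrate": 0.60},
}

def normalized_likelihoods(results):
    """results: dict test -> bool (observed outcome).
    Returns dict taxon -> likelihood, normalized to sum to 1."""
    raw = {}
    for taxon, probs in DB.items():
        likelihood = 1.0
        for test, positive in results.items():
            p = probs[test]
            likelihood *= p if positive else (1.0 - p)
        raw[taxon] = likelihood
    total = sum(raw.values())
    return {taxon: L / total for taxon, L in raw.items()}

scores = normalized_likelihoods({"lactose": True, "indole": True, "citrate": False})
best = max(scores, key=scores.get)   # the taxon with the highest score
```

An isolate that is lactose- and indole-positive but citrate-negative scores overwhelmingly as E. coli in this toy database; in practice a minimum normalized score is usually required before accepting an identification.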
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative computer-aided procedure was developed for identifying boiler transfer functions from frequency response data. The method obtains satisfactory transfer functions for both high and low vapor exit quality data.
mlCAF: Multi-Level Cross-Domain Semantic Context Fusioning for Behavior Identification.
Razzaq, Muhammad Asif; Villalonga, Claudia; Lee, Sungyoung; Akhtar, Usman; Ali, Maqbool; Kim, Eun-Soo; Khattak, Asad Masood; Seung, Hyonwoo; Hur, Taeho; Bang, Jaehun; Kim, Dohyeong; Ali Khan, Wajahat
2017-10-24
The emerging research on automatic identification of users' contexts from cross-domain environments in ubiquitous and pervasive computing systems has proved successful. Monitoring a user's diversified contexts and behaviors can help in controlling lifestyle-associated chronic diseases using context-aware applications. However, the availability of cross-domain heterogeneous contexts presents a challenging opportunity: fusing them to obtain abstract information for further analysis. This work extends our previous work from a single domain (physical activity) to multiple domains (physical activity, nutrition and clinical) for context-awareness. We propose the multi-level Context-aware Framework (mlCAF), which fuses multi-level cross-domain contexts in order to arbitrate richer behavioral contexts. This work explicitly focuses on the key challenges of multi-level context modeling, reasoning and fusion based on the mlCAF open-source ontology. More specifically, it addresses the interpretation of contexts from three different domains and their fusion into richer contextual information. This paper contributes in terms of ontology evolution with additional domains, context definitions, rules and the inclusion of semantic queries. For the framework evaluation, multi-level cross-domain contexts collected from 20 users were used to ascertain abstract contexts, which served as the basis for behavior modeling and lifestyle identification. The experimental results indicate an average context recognition accuracy of around 92.65% for the collected cross-domain contexts.
Accurate Identification of Cancerlectins through Hybrid Machine Learning Technology.
Zhang, Jieru; Ju, Ying; Lu, Huijuan; Xuan, Ping; Zou, Quan
2016-01-01
Cancerlectins are cancer-related proteins that function as lectins. They have been identified through computational identification techniques, but these techniques have sometimes failed to identify proteins because of sequence diversity among the cancerlectins. Advanced machine learning identification methods, such as support vector machine and basic sequence features (n-gram), have also been used to identify cancerlectins. In this study, various protein fingerprint features and advanced classifiers, including ensemble learning techniques, were utilized to identify this group of proteins. We improved the prediction accuracy of the original feature extraction methods and classification algorithms by more than 10% on average. Our work provides a basis for the computational identification of cancerlectins and reveals the power of hybrid machine learning techniques in computational proteomics.
Using state-issued identification cards for obesity tracking.
Morris, Daniel S; Schubert, Stacey S; Ngo, Duyen L; Rubado, Dan J; Main, Eric; Douglas, Jae P
2015-01-01
Obesity prevention has emerged as one of public health's top priorities. Public health agencies need reliable data on population health status to guide prevention efforts. Existing survey data sources provide county-level estimates; obtaining sub-county estimates from survey data can be prohibitively expensive. State-issued identification cards are an alternate data source for community-level obesity estimates. We computed body mass index for 3.2 million adult Oregonians who were issued a driver license or identification card between 2003 and 2010. Statewide estimates of obesity prevalence and average body mass index were compared to the Oregon Behavioral Risk Factor Surveillance System (BRFSS). After geocoding addresses we calculated average adult body mass index for every census tract and block group in the state. Sub-county estimates reveal striking patterns in the population's weight status. Annual obesity prevalence estimates from identification cards averaged 18% lower than the BRFSS for men and 31% lower for women. Body mass index estimates averaged 2% lower than the BRFSS for men and 5% lower for women. Identification card records are a promising data source to augment tracking of obesity. People do tend to misrepresent their weight, but the consistent bias does not obscure patterns and trends. Large numbers of records allow for stable estimates for small geographic areas. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. All rights reserved.
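The computation behind such estimates is straightforward; a minimal sketch, assuming US license records with height in inches and weight in pounds (the field layout here is hypothetical):

```python
# BMI from driver-license fields: BMI = 703 * weight[lb] / height[in]^2,
# which is equivalent to the metric kg/m^2 definition.

def bmi_from_license(height_in, weight_lb):
    return 703.0 * weight_lb / (height_in ** 2)

def obesity_prevalence(records):
    """Fraction of (height_in, weight_lb) records with BMI >= 30,
    the standard adult obesity cutoff."""
    n_obese = sum(1 for h, w in records if bmi_from_license(h, w) >= 30.0)
    return n_obese / len(records)

# Three made-up license records:
records = [(70, 180), (65, 200), (72, 150)]
prev = obesity_prevalence(records)   # only the second record exceeds BMI 30
```

With addresses geocoded, the same per-record BMI values can simply be averaged within each census tract or block group, as done in the study.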
DeRobertis, Christopher V.; Lu, Yantian T.
2010-02-23
A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
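The described check-then-assign flow can be sketched as follows; the registry structure and names below are hypothetical stand-ins for the operating system's configured user registries:

```python
# Sketch of unique-ID assignment across multiple user registries: a
# candidate id is assigned only after verifying that no configured
# registry already uses it.

def find_free_id(registries, start=1000):
    """Return the first candidate id not present in any registry."""
    used = set()
    for registry in registries:
        used.update(registry.values())
    candidate = start
    while candidate in used:
        candidate += 1
    return candidate

def create_account(target_registry, registries, name, start=1000):
    """Create `name` in the target registry with a cluster-wide unique id."""
    uid = find_free_id(registries, start)
    target_registry[name] = uid
    return uid

# Two registries (e.g., a local one and a directory-based one):
local = {"alice": 1000, "bob": 1001}
ldap = {"carol": 1002}
uid = create_account(local, [local, ldap], "dave")   # skips 1000-1002
```

The key point, as in the patent, is that the uniqueness check spans every configured registry, not just the one in which the account is created.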
Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification
Ahmed, Syed Adeel (Xavier University)
2017-08-08
...virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. ...the differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods.
Annotation: a computational solution for streamlining metabolomics analysis
Domingo-Almenara, Xavier; Montenegro-Burke, J. Rafael; Benton, H. Paul; Siuzdak, Gary
2017-01-01
Metabolite identification is still considered an imposing bottleneck in liquid chromatography mass spectrometry (LC/MS) untargeted metabolomics. The identification workflow usually begins with detecting relevant LC/MS peaks via peak-picking algorithms and retrieving putative identities based on accurate mass searching. However, accurate mass search alone provides poor evidence for metabolite identification. For this reason, computational annotation is used to reveal the underlying metabolites' monoisotopic masses, improving putative identification in addition to confirmation with tandem mass spectrometry. This review examines LC/MS data from a computational and analytical perspective, focusing on the occurrence of neutral losses and in-source fragments, to understand the challenges in computational annotation methodologies. Herein, we examine the state-of-the-art strategies for computational annotation including: (i) peak grouping or full scan (MS1) pseudo-spectra extraction, i.e., clustering all mass spectral signals stemming from each metabolite; (ii) annotation using ion adduction and mass distance among ion peaks; (iii) incorporation of biological knowledge such as biotransformations or pathways; (iv) tandem MS data; and (v) metabolite retention time calibration, usually achieved by prediction from molecular descriptors. Advantages and pitfalls of each of these strategies are discussed, as well as expected future trends in computational annotation. PMID:29039932
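Annotation by "mass distance among ion peaks" (strategy ii) can be sketched as follows: two coeluting MS1 peaks whose m/z difference matches a known adduct shift likely stem from the same metabolite. The small shift table and the example masses below are illustrative; real annotation tools use far richer rule sets:

```python
# Illustrative peak-pair annotation by mass distance. Values are common
# positive-mode adduct mass shifts in Da (e.g., Na replacing H).

SHIFTS = {
    "[M+Na]+ vs [M+H]+": 21.981944,
    "[M+K]+ vs [M+H]+": 37.955882,
    "[M+NH4]+ vs [M+H]+": 17.026549,
}

def annotate_pairs(mzs, tol=0.005):
    """Return (mz_low, mz_high, label) for peak pairs whose m/z
    difference matches a known adduct shift within tolerance."""
    hits = []
    peaks = sorted(mzs)
    for i in range(len(peaks)):
        for j in range(i + 1, len(peaks)):
            diff = peaks[j] - peaks[i]
            for label, shift in SHIFTS.items():
                if abs(diff - shift) <= tol:
                    hits.append((peaks[i], peaks[j], label))
    return hits

# Approximate glucose ions [M+H]+ and [M+Na]+ plus an unrelated peak:
hits = annotate_pairs([181.0707, 203.0526, 150.0000])
```

Grouping such pairs (together with isotope spacing and retention-time coelution, omitted here) is what lets annotation tools collapse many ion peaks into one putative metabolite with a single monoisotopic mass.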
A Penalized Robust Method for Identifying Gene-Environment Interactions
Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Xie, Yang; Ma, Shuangge
2015-01-01
In high-throughput studies, an important objective is to identify gene-environment interactions associated with disease outcomes and phenotypes. Many commonly adopted methods assume specific parametric or semiparametric models, which may be subject to model mis-specification. In addition, they usually use significance level as the criterion for selecting important interactions. In this study, we adopt the rank-based estimation, which is much less sensitive to model specification than some of the existing methods and includes several commonly encountered data and models as special cases. Penalization is adopted for the identification of gene-environment interactions. It achieves simultaneous estimation and identification and does not rely on significance level. For computation feasibility, a smoothed rank estimation is further proposed. Simulation shows that under certain scenarios, for example with contaminated or heavy-tailed data, the proposed method can significantly outperform the existing alternatives with more accurate identification. We analyze a lung cancer prognosis study with gene expression measurements under the AFT (accelerated failure time) model. The proposed method identifies interactions different from those using the alternatives. Some of the identified genes have important implications. PMID:24616063
A Supplementary Clear-Sky Snow and Ice Recognition Technique for CERES Level 2 Products
NASA Technical Reports Server (NTRS)
Radkevich, Alexander; Khlopenkov, Konstantin; Rutan, David; Kato, Seiji
2013-01-01
Identification of clear-sky snow and ice is an important step in the production of cryosphere radiation budget products, which are used in the derivation of long-term data series for climate research. In this paper, a new method of clear-sky snow/ice identification for Moderate Resolution Imaging Spectroradiometer (MODIS) is presented. The algorithm's goal is to enhance the identification of snow and ice within the Clouds and the Earth's Radiant Energy System (CERES) data after application of the standard CERES scene identification scheme. The input of the algorithm uses spectral radiances from five MODIS bands and surface skin temperature available in the CERES Single Scanner Footprint (SSF) product. The algorithm produces a cryosphere rating from an aggregated test: a higher rating corresponds to a more certain identification of the clear-sky snow/ice-covered scene. Empirical analysis of regions of interest representing distinctive targets such as snow, ice, ice and water clouds, open waters, and snow-free land selected from a number of MODIS images shows that the cryosphere rating of snow/ice targets falls into 95% confidence intervals lying above the same confidence intervals of all other targets. This enables recognition of clear-sky cryosphere by using a single threshold applied to the rating, which makes this technique different from traditional branching techniques based on multiple thresholds. Limited tests show that the established threshold clearly separates the cryosphere rating values computed for the cryosphere from those computed for noncryosphere scenes, whereas individual tests applied consequently cannot reliably identify the cryosphere for complex scenes.
Evaluation of the Microbial Identification System for identification of clinically isolated yeasts.
Crist, A E; Johnson, L M; Burke, P J
1996-01-01
The Microbial Identification System (MIS; Microbial ID, Inc., Newark, Del.) was evaluated for the identification of 550 clinically isolated yeasts. The organisms evaluated were fresh clinical isolates identified by methods routinely used in our laboratory (API 20C and conventional methods) and included Candida albicans (n = 294), C. glabrata (n = 145), C. tropicalis (n = 58), C. parapsilosis (n = 33), and other yeasts (n = 20). In preparation for fatty acid analysis, yeasts were inoculated onto Sabouraud dextrose agar and incubated at 28 degrees C for 24 h. Yeasts were harvested, saponified, derivatized, and extracted, and fatty acid analysis was performed according to the manufacturer's instructions. Fatty acid profiles were analyzed, and computer identifications were made with the Yeast Clinical Library (database version 3.8). Of the 550 isolates tested, 374 (68.0%) were correctly identified to the species level, with 87 (15.8%) being incorrectly identified and 89 (16.2%) giving no identification. Repeat testing of isolates giving no identification resulted in an additional 18 isolates being correctly identified. This gave the MIS an overall identification rate of 71.3%. The most frequently misidentified yeast was C. glabrata, which was identified as Saccharomyces cerevisiae 32.4% of the time. On the basis of these results, the MIS, with its current database, does not appear suitable for the routine identification of clinically important yeasts. PMID:8880489
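The quoted identification rates follow directly from the counts in the abstract; a minimal check of the arithmetic:

```python
# Rates reported for the MIS evaluation: 374 of 550 isolates correct on
# the first pass, 18 more recovered on repeat testing of the isolates
# that initially gave no identification.

total = 550
correct_first_pass = 374
misidentified = 87
no_identification = 89
recovered_on_repeat = 18

first_pass_rate = 100.0 * correct_first_pass / total                     # 68.0%
overall_rate = 100.0 * (correct_first_pass + recovered_on_repeat) / total  # ~71.3%
```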
Detection and identification of substances using noisy THz signal
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Zakharova, Irina G.; Zagursky, Dmitry Yu.; Varentsova, Svetlana A.
2017-05-01
We discuss an effective method for the detection and identification of substances using a highly noisy THz signal. To model such a noisy signal, we add to the THz signal transmitted through a pure substance a noisy THz signal obtained under real conditions at a long distance (more than 3.5 m) from the receiver in air. The insufficiency of the standard THz-TDS method is demonstrated. The method discussed in the paper is based on time-dependent integral correlation criteria calculated using the spectral dynamics of the medium response. A new type of integral correlation criterion, which is less dependent on the spectral characteristics of the noisy signal under investigation, is used for substance identification. To demonstrate the possibilities of the integral correlation criteria in a real experiment, they are applied to the identification of the explosive HMX in reflection mode. To explain the physical mechanism behind the appearance of false absorption frequencies in the signal, we perform a computer simulation using 1D Maxwell's equations and the density matrix formalism. We also propose a new method for substance identification using THz pulse frequency up-conversion, and discuss the application of the cascade mechanism of excitation of high-energy molecular levels for substance identification.
Overhead longwave infrared hyperspectral material identification using radiometric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelinski, M. E.
Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and the Bayesian Information Criterion for model selection. The resulting product includes an optimal atmospheric profile and a full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
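The model-selection step can be sketched with the Bayesian Information Criterion. To stay self-contained, each candidate "material model" below is a single hypothetical spectrum fitted to the pixel with one nonnegative abundance; the actual chain fits constrained mixtures of several endmembers against a full radiance model:

```python
# Hedged sketch of BIC-based model selection for pixel unmixing.
# Library spectra and the pixel are made-up 4-band vectors.
import math

def fit_abundance(spectrum, pixel):
    """Least-squares abundance a >= 0 minimizing ||pixel - a*spectrum||^2."""
    num = sum(s * p for s, p in zip(spectrum, pixel))
    den = sum(s * s for s in spectrum)
    return max(0.0, num / den)

def bic(pixel, model, k):
    """BIC = n*ln(RSS/n) + k*ln(n), with RSS the residual sum of squares
    and k the number of fitted parameters; lower is better."""
    n = len(pixel)
    rss = sum((p - m) ** 2 for p, m in zip(pixel, model))
    return n * math.log(rss / n) + k * math.log(n)

library = {
    "quartz": [1.0, 0.2, 0.1, 0.8],   # hypothetical spectral shapes
    "gypsum": [0.1, 1.0, 0.9, 0.2],
}
pixel = [0.52, 0.11, 0.06, 0.41]       # roughly 0.5 * quartz plus noise

scores = {}
for name, spectrum in library.items():
    a = fit_abundance(spectrum, pixel)
    model = [a * s for s in spectrum]
    scores[name] = bic(pixel, model, k=1)

best = min(scores, key=scores.get)     # material whose model fits best
```

Because BIC penalizes parameter count via k*ln(n), it lets the chain compare one-material against multi-material mixture models without automatically favoring the more complex fit.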
NASA Astrophysics Data System (ADS)
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test is modified for the modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to produce a sustained oscillatory output around a non-zero setpoint. The obtained limit cycle information is then substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped and critically damped second-order plus dead time, and stable first-order plus dead time, transfer function models. Typical examples from the literature are included to validate the proposed identification scheme through computer simulations. Comparisons between the estimated model and the true system are drawn through the integral absolute error criterion and frequency response plots. Finally, the simulated output responses are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 setup.
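For context, the classical describing-function shortcut extracts frequency information from the relay-induced limit cycle: an ideal relay of amplitude h sustaining an oscillation of peak amplitude a gives the ultimate gain Ku = 4h/(pi*a), with the oscillation period as the ultimate period. The paper's method replaces this approximation with exact limit-cycle expressions; the sketch below (with made-up measurements) shows only the standard baseline:

```python
# Describing-function estimate from relay feedback limit-cycle data.
import math

def ultimate_gain(relay_amplitude, cycle_amplitude):
    """Ku = 4h / (pi * a) for an ideal relay of height h and a sustained
    output oscillation of peak amplitude a."""
    return 4.0 * relay_amplitude / (math.pi * cycle_amplitude)

# Hypothetical test: relay height 1.0, measured oscillation amplitude 0.5
Ku = ultimate_gain(1.0, 0.5)   # = 8 / pi
```

Ku and the ultimate period are exactly the quantities classical Ziegler-Nichols-style tuning consumes, which is why relay feedback remains a popular identification experiment.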
Deshmukh, Rupesh K; Sonah, Humira; Bélanger, Richard R
2016-01-01
Aquaporins (AQPs) are channel-forming integral membrane proteins that facilitate the movement of water and many other small molecules. Compared to animals, plants contain a much higher number of AQPs in their genome. Homology-based identification of AQPs in sequenced species is feasible because of the high level of conservation of protein sequences across plant species. Genome-wide characterization of AQPs has highlighted several important aspects such as distribution, genetic organization, evolution and conserved features governing solute specificity. From a functional point of view, the understanding of AQP transport system has expanded rapidly with the help of transcriptomics and proteomics data. The efficient analysis of enormous amounts of data generated through omic scale studies has been facilitated through computational advancements. Prediction of protein tertiary structures, pore architecture, cavities, phosphorylation sites, heterodimerization, and co-expression networks has become more sophisticated and accurate with increasing computational tools and pipelines. However, the effectiveness of computational approaches is based on the understanding of physiological and biochemical properties, transport kinetics, solute specificity, molecular interactions, sequence variations, phylogeny and evolution of aquaporins. For this purpose, tools like Xenopus oocyte assays, yeast expression systems, artificial proteoliposomes, and lipid membranes have been efficiently exploited to study the many facets that influence solute transport by AQPs. In the present review, we discuss genome-wide identification of AQPs in plants in relation with recent advancements in analytical tools, and their availability and technological challenges as they apply to AQPs. An exhaustive review of omics resources available for AQP research is also provided in order to optimize their efficient utilization. 
Finally, a detailed catalog of computational tools and analytical pipelines is offered as a resource for AQP research.
The influence of the level formants on the perception of synthetic vowel sounds
NASA Astrophysics Data System (ADS)
Kubzdela, Henryk; Owsianny, Mariuz
A computer model of a generator of periodic complex sounds simulating consonants was developed. The system allows independent regulation of the level of each formant and instant generation of the sound. A trapezoid approximates the spectrum curve within the range of each formant. Using this model, each person in a group of six listeners experimentally selected synthesis parameters for six sounds that seemed to him optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six listeners and several additional listeners as best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of the levels of the second and third formants, which were presented to seven listeners for identification. The identification results are presented in tabular form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.
Description and detection of burst events in turbulent flows
NASA Astrophysics Data System (ADS)
Schmid, P. J.; García-Gutierrez, A.; Jiménez, J.
2018-04-01
A mathematical and computational framework is developed for the detection and identification of coherent structures in turbulent wall-bounded shear flows. In a first step, this data-based technique uses an embedding methodology to formulate the fluid motion as a phase-space trajectory, from which state-transition probabilities can be computed. Within this formalism, a second step applies repeated clustering and graph-community techniques to determine a hierarchy of coherent structures ranked by their persistence. This latter information is used to detect highly transitory states that act as precursors to violent and intermittent events in turbulent fluid motion (e.g., bursts). Used as an analysis tool, this technique allows the objective identification of intermittent (but important) events in turbulent fluid motion; it also lays the foundation for advanced control strategies for their manipulation. The techniques are applied to low-dimensional model equations for turbulent transport, such as the self-sustaining process (SSP), at varying levels of complexity.
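The transition-probability step above can be sketched simply: once the embedded trajectory has been clustered into discrete states, the transition matrix is estimated by counting consecutive state pairs. This is a toy illustration with an assumed symbol sequence; the embedding and clustering that would produce such labels are omitted.

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Row-stochastic transition probabilities from a discrete state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0   # leave unvisited states as zero rows
    return counts / row_sums

# Toy trajectory: mostly quiescent (state 0) with occasional "burst" excursions.
seq = [0, 0, 0, 1, 0, 0, 0, 1, 2, 0, 0, 0]
P = transition_matrix(seq, 3)
```

A hierarchy of coherent structures could then be obtained by applying community detection to the graph defined by `P`, in the spirit of the clustering step the abstract describes.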
Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches
Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire
2016-01-01
Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using force degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262
Image analysis of pubic bone for age estimation in a computed tomography sample.
López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella
2015-03-01
Radiology has demonstrated great utility for age estimation, but most studies are based on metrical and morphological methods in order to perform an identification profile. A simple image analysis-based method is presented, aimed at correlating the bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them to age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R² of 0.533 for females and 0.726 for males, with high intra- and inter-observer agreement. The method suggested is considered useful not only for performing an identification profile during virtopsy, but also for application in further studies in order to establish a quantitative correlation with tissue ultrastructure characteristics, without complex and expensive methods beyond image analysis.
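The pipeline described above — derive summary variables from the grey-level histogram, then regress them on age — can be sketched with synthetic data. The feature set (mean, spread, skewness) and the linear model below are illustrative assumptions, not the study's actual GLH variables.

```python
import numpy as np

def glh_features(pixels):
    """Grey-level histogram summary variables: mean, spread, skewness."""
    p = np.asarray(pixels, dtype=float)
    mu, sd = p.mean(), p.std()
    skew = ((p - mu) ** 3).mean() / sd ** 3 if sd > 0 else 0.0
    return np.array([mu, sd, skew])

# Synthetic "sample": GLH features per subject, with age a noisy linear function.
rng = np.random.default_rng(0)
X = rng.uniform(50, 200, size=(30, 3))
age = 20 + 0.3 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(0, 1, 30)

# Multiple linear regression by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, age, rcond=None)
pred = A @ coef
r2 = 1 - ((age - pred) ** 2).sum() / ((age - age.mean()) ** 2).sum()
```

The reported R² values (0.533 for females, 0.726 for males) would correspond to fitting such a model to real CT-derived features rather than this synthetic set.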
Basic concepts and architectural details of the Delphi trigger system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bocci, V.; Booth, P.S.L.; Bozzo, M.
1995-08-01
Delphi (DEtector with Lepton, Photon and Hadron Identification) is one of the four experiments of the LEP (Large Electron Positron) collider at CERN. The detector is laid out to provide nearly 4π coverage for charged particle tracking, electromagnetic and hadronic calorimetry, and extended particle identification. The trigger system consists of four levels. The first two are synchronous with the BCO (Beam Cross Over) and rely on hardwired control units, while the last two are performed asynchronously with respect to the BCO and are driven by the Delphi host computers. The aim of this paper is to give a comprehensive global view of the trigger system architecture, presenting in detail the first two levels, their various hardware components, and the latest modifications introduced in order to improve their performance and make the whole software user interface more user friendly.
Hirayama, Denise; Saron, Clodoaldo
2015-06-01
Polymeric materials constitute a considerable fraction of waste computer equipment, and the polymers acrylonitrile-butadiene-styrene and high-impact polystyrene are the main thermoplastic polymeric components found in waste computer equipment. Identification, separation and characterisation of additives present in acrylonitrile-butadiene-styrene and high-impact polystyrene are fundamental procedures for the mechanical recycling of these polymers. The aim of this study was to evaluate methods for identification of acrylonitrile-butadiene-styrene and high-impact polystyrene from waste computer equipment in Brazil, as well as their potential for mechanical recycling. The imprecise use of symbols for identification of the polymers and the presence of additives containing toxic elements in certain computer devices are some of the difficulties found in recycling acrylonitrile-butadiene-styrene and high-impact polystyrene from waste computer equipment. However, the considerable retention of mechanical properties in the recycled acrylonitrile-butadiene-styrene and high-impact polystyrene when compared with the virgin materials confirms the potential for mechanical recycling of these polymers. © The Author(s) 2015.
Mirror neurons and imitation: a computationally guided review.
Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael
2006-04-01
Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.
NASA Astrophysics Data System (ADS)
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregation of computer vision and radio-frequency identification to determine the current storage area of a product. It describes the hardware design of a positioning system for industrial products on the plant territory based on a radio-frequency grid, the hardware design of a positioning system based on computer vision methods, and the aggregation method that combines computer vision and radio-frequency identification to determine the current storage area. Experimental studies under laboratory and production conditions have been conducted and are described in the article.
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
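The key model assumption above — cortical neurons encoding the difference between the observed signal and an internal estimate — resembles a predictive-coding residual, and can be sketched as a toy computation. The random dictionary, non-negative update rule, and learning rate here are illustrative assumptions, not the paper's circuit model.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(32, 8))                 # dictionary of 8 candidate "objects"
a_true = np.zeros(8)
a_true[[2, 5]] = 1.0                         # two concurrently active sources
signal = D @ a_true                          # observed superposition

# Iteratively refine the internal estimate; "error neurons" carry the residual.
a = np.zeros(8)
eta = 0.01
for _ in range(2000):
    error = signal - D @ a                   # residual: signal minus estimate
    a = np.clip(a + eta * (D.T @ error), 0.0, None)   # non-negative rates

residual_norm = np.linalg.norm(signal - D @ a)
```

At convergence the residual (the hypothesized error signal) is driven toward zero, and the active dictionary elements identify which sources are present in the mixture.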
NASA Astrophysics Data System (ADS)
Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long
2012-01-01
Underwater laser imaging detection is an effective method of detecting short-distance underwater targets and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, underwater automatic target identification has received more and more attention and remains a research difficulty in underwater optical imaging information processing. Today, underwater automatic target identification based on optical imaging is usually realized by digital software processing, whose algorithms and control are very flexible. However, optical imaging yields 2D or even 3D images, and the amount of information to be processed is large, so purely digital electronic hardware needs a long identification time and can hardly meet real-time demands. Parallel computer processing can improve identification speed, but at the cost of greater complexity, size and power consumption. This paper applies optical correlation identification technology to underwater automatic target identification. Optical correlation identification exploits the Fourier-transform property of a Fourier lens, which accomplishes the Fourier transform of image information at nanosecond timescales, and optical spatial interconnection computation, which is parallel, high-speed, high-capacity and high-resolution; combined with the computational and control flexibility of digital circuits, this yields a hybrid optoelectronic identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program.
Single-frame images obtained by underwater range-gated laser imaging are used for identification; by identifying and locating targets at different positions, we improve the speed and localization efficiency of target identification and preliminarily validate the feasibility of the method.
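The correlator's operation — Fourier transform, matched filter, inverse transform, peak detection — can be sketched digitally as follows; in the optical implementation the Fourier lens performs the transforms in hardware. The scene and target below are toy arrays, not the paper's imagery.

```python
import numpy as np

def correlate2d_fft(scene, template):
    """Cross-correlation via FFT: the digital analogue of a 4f optical correlator."""
    H = np.fft.fft2(scene)
    # Matched filter: conjugate of the template spectrum, zero-padded to scene size.
    T = np.fft.fft2(template, s=scene.shape)
    return np.real(np.fft.ifft2(H * np.conj(T)))

scene = np.zeros((64, 64))
target = np.ones((5, 5))               # toy "target" signature
scene[20:25, 33:38] = target           # embed target at row 20, col 33
corr = correlate2d_fft(scene, target)
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

The location of the correlation peak recovers the target position in the scene, which is the basis for the identifying-and-locating step the abstract describes.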
Identification of Computational and Experimental Reduced-Order Models
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.
2003-01-01
The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
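The abstract does not spell out how the impulse responses are "transformed into state-space form"; the Eigensystem Realization Algorithm (ERA) is one standard scheme for doing exactly that, sketched here for a toy scalar system rather than the CFL3Dv6.0 modal responses.

```python
import numpy as np

def era(markov, order):
    """Eigensystem Realization Algorithm sketch: impulse response -> (A, B, C).
    markov[k] is the k-th Markov parameter of a SISO system (markov[0] = D)."""
    h = np.asarray(markov, dtype=float)
    m = (len(h) - 1) // 2
    # Block Hankel matrices built from the impulse-response samples h[1], h[2], ...
    H0 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
    H1 = np.array([[h[i + j + 2] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Sr, Vr = U[:, :order], np.diag(np.sqrt(s[:order])), Vt[:order, :]
    A = np.linalg.inv(Sr) @ Ur.T @ H1 @ Vr.T @ np.linalg.inv(Sr)
    B = (Sr @ Vr)[:, :1]
    C = (Ur @ Sr)[:1, :]
    return A, B, C

# Toy SISO system x_{k+1} = 0.9 x_k + u_k, y_k = x_k:
# Markov parameters are h[0] = D = 0 and h[k] = C A^{k-1} B = 0.9**(k-1).
h = [0.0] + [0.9 ** k for k in range(9)]
A, B, C = era(h, order=1)
```

For this first-order example the realization recovers A = 0.9 and CB = h[1] = 1, i.e., a state-space model reproducing the impulse response exactly.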
Triple redundant computer system/display and keyboard subsystem interface
NASA Technical Reports Server (NTRS)
Gulde, F. J.
1973-01-01
Interfacing of the redundant display and keyboard subsystem with the triple redundant computer system is defined according to space shuttle design. The study is performed in three phases: (1) TRCS configuration and characteristics identification; (2) display and keyboard subsystem configuration and characteristics identification, and (3) interface approach definition.
Identification of control targets in Boolean molecular network models via computational algebra.
Murrugarra, David; Veliz-Cuba, Alan; Aguilar, Boris; Laubenbacher, Reinhard
2016-09-23
Many problems in biomedicine and other areas of the life sciences can be characterized as control problems, with the goal of finding strategies to change a disease or otherwise undesirable state of a biological system into another, more desirable, state through an intervention, such as a drug or other therapeutic treatment. The identification of such strategies is typically based on a mathematical model of the process to be altered through targeted control inputs. This paper focuses on processes at the molecular level that determine the state of an individual cell, involving signaling or gene regulation. The mathematical model type considered is that of Boolean networks. The potential control targets can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. This paper presents a method for the identification of potential intervention targets in Boolean molecular network models using algebraic techniques. The approach exploits an algebraic representation of Boolean networks to encode the control candidates in the network wiring diagram as the solutions of a system of polynomial equations, and then uses computational algebra techniques to find such controllers. The control methods in this paper are validated through the identification of combinatorial interventions in the signaling pathways of previously reported control targets in two well-studied systems, a p53-mdm2 network and a blood T cell lymphocyte granular leukemia survival signaling network. Supplementary data are available online, and our code in Macaulay2 and Matlab is available via http://www.ms.uky.edu/~dmu228/ControlAlg . This paper presents a novel method for the identification of intervention targets in Boolean network models. The results in this paper show that the proposed methods are useful and efficient for moderately large networks.
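The paper encodes control candidates as solutions of polynomial systems over GF(2) (where AND is multiplication and NOT x is 1 + x) and solves them with computational algebra; for intuition, the same question can be posed by brute force on a toy 3-node network. The update rules below are hypothetical, not from the paper.

```python
from itertools import product

# Toy 3-node Boolean network with synchronous updates.
def step(x, pin=None):
    x1, x2, x3 = x
    nxt = (x2 & x3, x1 | x3, x3)
    if pin is not None:                  # intervention: hold one node constant
        i, v = pin
        nxt = tuple(v if j == i else b for j, b in enumerate(nxt))
    return nxt

def always_reaches(target, pin):
    """Does every initial state converge to `target` under the intervention?"""
    for x0 in product((0, 1), repeat=3):
        x = x0
        for _ in range(10):              # 2^3 states: 10 steps is plenty
            x = step(x, pin)
        if x != target:
            return False
    return True

# Search all single-node constant interventions for one forcing the desired
# fixed point (0, 0, 0); the network also has an undesired fixed point (1, 1, 1).
controls = [(i, v) for i in range(3) for v in (0, 1)
            if always_reaches((0, 0, 0), (i, v))]
```

For this toy network the only working single-node intervention is pinning the third node to 0; the algebraic encoding returns the same kind of answer without enumerating states, which is what makes it scale to larger networks.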
NASA Astrophysics Data System (ADS)
Monteil, P.
1981-11-01
The computation of the overall levels and spectral densities of the responses measured on a launcher skin (the fairing, for instance) immersed in a random acoustic environment during takeoff was studied. The analysis of the transmission of these vibrations to the payload required the simulation of these responses by a shaker control system, using a small number of distributed shakers. Results show that this closed-loop computerized digital system allows the acquisition of auto- and cross-spectral densities equal to those of the responses previously computed. However, wider application is sought, e.g., road and runway profiles. The problems of multiple input-output system identification, multiple true random signal generation, and real-time programming are discussed. The system should allow for the control of four shakers.
Computational Approaches to Chemical Hazard Assessment
Luechtefeld, Thomas; Hartung, Thomas
2018-01-01
Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration, Evaluation, Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769
Mander, Luke; Baker, Sarah J.; Belcher, Claire M.; Haselhorst, Derek S.; Rodriguez, Jacklyn; Thorn, Jessica L.; Tiwari, Shivangi; Urrego, Dunia H.; Wesseln, Cassandra J.; Punyasena, Surangi W.
2014-01-01
• Premise of the study: Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification. • Methods: We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images. • Results: Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%. • Discussion: The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias. PMID:25202649
Soyama, Takeshi; Sakuhara, Yusuke; Kudo, Kohsuke; Abo, Daisuke; Wang, Jeff; Ito, Yoichi M; Hasegawa, Yu; Shirato, Hiroki
2016-07-01
This preliminary study compared ultrasonography-computed tomography (US-CT) fusion imaging and conventional ultrasonography (US) for accuracy and time required for target identification using a combination of real phantoms and sets of digitally modified computed tomography (CT) images (digital/real hybrid phantoms). In this randomized prospective study, 27 spheres visible on B-mode US were placed at depths of 3.5, 8.5, and 13.5 cm (nine spheres each). All 27 spheres were digitally erased from the CT images, and a radiopaque sphere was digitally placed at each of the 27 locations to create 27 different sets of CT images. Twenty clinicians were instructed to identify the sphere target using US alone and fusion imaging. The accuracy of target identification of the two methods was compared using McNemar's test. The mean time required for target identification and error distances were compared using paired t tests. At all three depths, target identification was more accurate and the mean time required for target identification was significantly less with US-CT fusion imaging than with US alone, and the mean error distances were also shorter with US-CT fusion imaging. US-CT fusion imaging was superior to US alone in terms of accurate and rapid identification of target lesions.
Human operator identification model and related computer programs
NASA Technical Reports Server (NTRS)
Kessler, K. M.; Mohr, J. N.
1978-01-01
Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open-loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.
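The TF program's core operation — converting a state-variable model to a transfer-function representation — can be sketched for a SISO case using the determinant identity det(sI − A + BC) = det(sI − A)(1 + C(sI − A)⁻¹B). The second-order example system is hypothetical, not taken from the report.

```python
import numpy as np

# Hypothetical 2nd-order state-variable model: x' = A x + B u, y = C x.
A = np.array([[0.0, 1.0], [-4.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Denominator: characteristic polynomial of A.
den = np.poly(A)                       # s^2 + 2 s + 4
# Numerator via the SISO identity: poly(A - BC) - poly(A).
num = np.poly(A - B @ C) - den         # H(s) = 1 / (s^2 + 2 s + 4)
```

Equivalent functionality exists in `scipy.signal.ss2tf`; the determinant identity is used here to keep the sketch dependency-free.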
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Ruth C.; Kurucz, Robert L.; Ayres, Thomas R., E-mail: peterson@ucolick.org
2017-04-01
The Fe i spectrum is critical to many areas of astrophysics, yet many of the high-lying levels remain uncharacterized. To remedy this deficiency, Peterson and Kurucz identified Fe i lines in archival ultraviolet and optical spectra of metal-poor stars, whose warm temperatures favor moderate Fe i excitation. Sixty-five new levels were recovered, with 1500 detectable lines, including several bound levels in the ionization continuum of Fe i. Here, we extend the previous work by identifying 59 additional levels, with 1400 detectable lines, by incorporating new high-resolution UV spectra of warm metal-poor stars recently obtained by the Hubble Space Telescope Imaging Spectrograph. We provide gf values for these transitions, both computed as well as adjusted to fit the stellar spectra. We also expand our spectral calculations to the infrared, confirming three levels by matching high-quality spectra of the Sun and two cool stars in the H-band. The predicted gf values suggest that an additional 3700 Fe i lines should be detectable in existing solar infrared spectra. Extending the empirical line identification work to the infrared would help confirm additional Fe i levels, as would new high-resolution UV spectra of metal-poor turnoff stars below 1900 Å.
Kirchoff, Bruce K.; Delaney, Peter F.; Horton, Meg; Dellinger-Johnston, Rebecca
2014-01-01
Learning to identify organisms is extraordinarily difficult, yet trained field biologists can quickly and easily identify organisms at a glance. They do this without recourse to the use of traditional characters or identification devices. Achieving this type of recognition accuracy is a goal of many courses in plant systematics. Teaching plant identification is difficult because of variability in the plants’ appearance, the difficulty of bringing them into the classroom, and the difficulty of taking students into the field. To solve these problems, we developed and tested a cognitive psychology–based computer program to teach plant identification. The program incorporates presentation of plant images in a homework-based, active-learning format that was developed to stimulate expert-level visual recognition. A controlled experimental test using a within-subject design was performed against traditional study methods in the context of a college course in plant systematics. Use of the program resulted in an 8–25% statistically significant improvement in final exam scores, depending on the type of identification question used (living plants, photographs, written descriptions). The software demonstrates how the use of routines to train perceptual expertise, interleaved examples, spaced repetition, and retrieval practice can be used to train identification of complex and highly variable objects. PMID:25185226
Occupational risk identification using hand-held or laptop computers.
Naumanen, Paula; Savolainen, Heikki; Liesivuori, Jyrki
2008-01-01
This paper describes the Work Environment Profile (WEP) program and its use in risk identification by computer. It is installed into a hand-held computer or a laptop to be used in risk identification during work site visits. A 5-category system is used to describe the identified risks in 7 groups, i.e., accidents, biological and physical hazards, ergonomic and psychosocial load, chemicals, and information technology hazards. Each group contains several qualifying factors. These 5 categories are colour-coded at this stage to aid with visualization. Risk identification produces visual summary images the interpretation of which is facilitated by colours. The WEP program is a tool for risk assessment which is easy to learn and to use both by experts and nonprofessionals. It is especially well adapted to be used both in small and in larger enterprises. Considerable time is saved as no paper notes are needed.
NASA Astrophysics Data System (ADS)
Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan
2016-11-01
In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
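An ABC rejection sampler of the kind described can be sketched on a toy first-order system; the paper's framework handles nonlinear models, term selection, and more refined ABC variants, all of which are simplified away here.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(theta, n=50, dt=0.1):
    """Euler-discretized simulation of x' = -theta * x, x(0) = 1 (closed form)."""
    return (1.0 - dt * theta) ** np.arange(n)

# "Observed" data: forward simulation at the true parameter plus measurement noise.
theta_true = 1.5
y = simulate(theta_true) + rng.normal(0.0, 0.01, 50)

# ABC rejection: draw from the prior, simulate, and keep draws whose simulated
# output lies within eps of the data; accepted draws approximate the posterior.
prior_draws = rng.uniform(0.0, 5.0, 20000)
eps = 0.15
accepted = np.array([t for t in prior_draws
                     if np.linalg.norm(simulate(t) - y) < eps])
theta_hat = accepted.mean()
```

Note that only forward simulations are required — no derivative estimates of the measured signal — which is the practical advantage of the simulation-based approach that the authors highlight, and the spread of `accepted` directly describes parameter uncertainty.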
Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.
Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S
2016-03-01
To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Electro-Optic Identification Research Program
2002-04-01
Electro-optic identification (EOID) sensors provide photographic quality images that can be used to identify mine-like contacts provided by long...tasks such as validating existing electro-optic models, development of performance metrics, and development of computer aided identification and
Faller, Christina E; Raman, E Prabhu; MacKerell, Alexander D; Guvench, Olgun
2015-01-01
Fragment-based drug design (FBDD) involves screening low molecular weight molecules ("fragments") that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind nonoverlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy. The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is "soaked" in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called "FragMaps" can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine "Grid Free Energies (GFEs)," which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities.
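The FragMap-to-GFE conversion described above is a Boltzmann inversion of occupancy probabilities relative to bulk. A minimal sketch under that reading (the temperature, clipping, and function names are illustrative assumptions, not the SILCS code base):

```python
import numpy as np

KT_KCAL_PER_MOL = 0.593  # kT in kcal/mol at ~298 K (assumed temperature)

def grid_free_energy(occupancy, bulk_occupancy):
    """Convert a fragment-occupancy FragMap into a grid free energy map.

    GFE(voxel) = -kT * ln(P_voxel / P_bulk): voxels where a fragment type is
    observed more often than in bulk solution get a favourable (negative)
    free energy.
    """
    occupancy = np.asarray(occupancy, dtype=float)
    ratio = np.clip(occupancy / bulk_occupancy, 1e-12, None)  # avoid log(0)
    return -KT_KCAL_PER_MOL * np.log(ratio)

def ligand_grid_free_energy(atom_gfes):
    """LGFE for a ligand: sum of the per-atom GFE contributions, which is
    what allows whole molecules to be scored at no extra simulation cost."""
    return float(np.sum(atom_gfes))
```

Bulk-level occupancy maps to a GFE of zero, enrichment to negative values, and depletion to positive ones, which is why LGFE sums can rank ligands by estimated affinity.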
Computational approaches to schizophrenia: A perspective on negative symptoms.
Deserno, Lorenz; Heinz, Andreas; Schlagenhauf, Florian
2017-08-01
Schizophrenia is a heterogeneous spectrum disorder often associated with detrimental negative symptoms. In recent years, computational approaches to psychiatry have attracted growing attention. Negative symptoms have shown some overlap with general cognitive impairments and were also linked to impaired motivational processing in brain circuits implementing reward prediction. In this review, we outline how computational approaches may help to provide a better understanding of negative symptoms in terms of the potentially underlying behavioural and biological mechanisms. First, we describe the idea that negative symptoms could arise from a failure to represent reward expectations to enable flexible behavioural adaptation. It has been proposed that these impairments arise from a failure to use prediction errors to update expectations. Important previous studies focused on processing of so-called model-free prediction errors where learning is determined by past rewards only. However, learning and decision-making arise from multiple cognitive mechanisms functioning simultaneously, and dissecting them via well-designed tasks in conjunction with computational modelling is a promising avenue. Second, we move on to a proof-of-concept example on how generative models of functional imaging data from a cognitive task enable the identification of subgroups of patients mapping on different levels of negative symptoms. Combining the latter approach with behavioural studies regarding learning and decision-making may allow the identification of key behavioural and biological parameters distinctive for different dimensions of negative symptoms versus a general cognitive impairment. We conclude with an outlook on how this computational framework could, at some point, enrich future clinical studies. Copyright © 2016. Published by Elsevier B.V.
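The "model-free prediction errors where learning is determined by past rewards only" mentioned above are typically formalised as a Rescorla-Wagner / delta-rule update. A minimal sketch (learning rate and starting value are illustrative parameters, not taken from any study in the review):

```python
def rescorla_wagner(rewards, alpha=0.3, v0=0.0):
    """Model-free value learning driven purely by reward prediction errors.

    On each trial the expectation v is updated by v <- v + alpha * (r - v),
    where (r - v) is the reward prediction error. Returns trial-by-trial
    values and prediction errors; fitting alpha (and similar parameters)
    per subject is one way computational modelling links behaviour to
    symptom dimensions.
    """
    v = v0
    values, errors = [], []
    for r in rewards:
        pe = r - v          # reward prediction error
        v = v + alpha * pe  # update the expectation
        values.append(v)
        errors.append(pe)
    return values, errors
```

With repeated reward the prediction error shrinks as the expectation converges, which is the basic signature such models test against choice and imaging data.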
[Computer simulation by passenger wound analysis of vehicle collision].
Zou, Dong-Hua; Liu, Ning-Guo; Shen, Jie; Zhang, Xiao-Yun; Jin, Xian-Long; Chen, Yi-Jiu
2006-08-15
To reconstruct the course of a vehicle collision, so as to provide a reference for forensic identification and the disposal of traffic accidents. Through analyzing evidence left both on passengers and vehicles, a momentum-impulse technique combined with multi-body dynamics was applied to simulate the motion and injury of passengers as well as the track of the vehicles. The computer simulation model reconstructed the phases of the traffic collision, which coincided with details found by forensic investigation. Computer simulation is helpful and feasible for forensic identification in traffic accidents.
Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny
2017-09-01
Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.
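The recall, precision, and F1 figures above come from comparing automated cone identifications against manual labels. A minimal sketch of such a comparison using greedy nearest-first matching within a pixel tolerance (the matching rule and tolerance are simplifying assumptions; the study's exact protocol is not specified here):

```python
import math

def match_and_score(manual, auto, tol=3.0):
    """Match automated cone centres to manual labels within `tol` pixels,
    greedily and one-to-one, then report (recall, precision, F1)."""
    unmatched = list(range(len(auto)))
    matched = 0
    for mx, my in manual:
        best, best_d = None, tol
        for j in unmatched:
            ax, ay = auto[j]
            d = math.hypot(mx - ax, my - ay)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            unmatched.remove(best)  # each automated point matches once
            matched += 1
    recall = matched / len(manual) if manual else 0.0
    precision = matched / len(auto) if auto else 0.0
    f1 = 2 * precision * recall / (precision + recall) if matched else 0.0
    return recall, precision, f1
```

Unmatched manual labels count against recall (missed cones), unmatched automated points against precision (false detections).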
Van der Loos, H F Machiel; Worthen-Chaudhari, Lise; Schwandt, Douglas; Bevly, David M; Kautz, Steven A
2010-08-01
This paper presents a novel computer-controlled bicycle ergometer, the TiltCycle, for use in human biomechanics studies of locomotion. The TiltCycle has a tilting (reclining) seat and backboard, a split pedal crankshaft to isolate the left and right loads to the feet of the pedaler, and two belt-driven, computer-controlled motors to provide assistance or resistance loads independently to each crank. Sensors measure the kinematics and force production of the legs to calculate work performed, and the system allows for goniometric and electromyography signals to be recorded. The technical description presented includes the mechanical design, low-level software and control algorithms, system identification and validation test results.
An intermediate level of abstraction for computational systems chemistry.
Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F
2017-12-28
Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry and the conditions of that era is scarce. The exploration of large chemical reaction networks is a central aspect in this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).
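An integer hyperflow assigns an integer multiplicity to each reaction (hyperedge); autocatalysis shows up as a flow whose net effect consumes only food species while producing extra copies of a catalyst. A toy sketch of that bookkeeping, with an invented three-reaction cycle (the species names and reactions are illustrative, not from the paper):

```python
def net_change(reactions, flow):
    """Net species change of an integer hyperflow.

    Each reaction is a (consumed, produced) pair of {species: count} dicts;
    `flow` gives its integer multiplicity. The result is the overall
    stoichiometric effect of running the flow once.
    """
    total = {}
    for (lhs, rhs), f in zip(reactions, flow):
        for sp, n in lhs.items():
            total[sp] = total.get(sp, 0) - f * n
        for sp, n in rhs.items():
            total[sp] = total.get(sp, 0) + f * n
    return total

# Toy cycle: food C1 feeds an autocatalyst C2 via intermediates C3, C4.
reactions = [
    ({"C1": 1, "C2": 1}, {"C3": 1}),  # C1 + C2 -> C3
    ({"C1": 1, "C3": 1}, {"C4": 1}),  # C1 + C3 -> C4
    ({"C4": 1}, {"C2": 2}),           # C4 -> 2 C2 (doubles the catalyst)
]
flow = [1, 1, 1]
```

Here the net effect is 2 C1 -> C2: intermediates cancel and one extra copy of C2 appears per turn of the cycle, the pattern an optimization over hyperflows would search for at scale.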
The utility of ERTS-1 data for applications in land use classification. [Texas Gulf Coast
NASA Technical Reports Server (NTRS)
Dornbach, J. E.; Mckain, G. E.
1974-01-01
A comprehensive study has been undertaken to determine the extent to which conventional image interpretation and computer-aided (spectral pattern recognition) analysis techniques using ERTS-1 data could be used to detect, identify (classify), locate, and measure current land use over large geographic areas. It can be concluded that most of the level 1 and 2 categories in the USGS Circular no. 671 can be detected in the Houston-Gulf Coast area using a combination of both techniques for analysis. These capabilities could be exercised over larger geographic areas; however, certain factors such as different vegetative cover, topography, etc. may have to be considered in other geographic regions. The best results in identification (classification), location, and measurement of level 1 and 2 type categories appear to be obtainable through automatic data processing of multispectral scanner computer compatible tapes.
Cloud computing in pharmaceutical R&D: business risks and mitigations.
Geiger, Karl
2010-05-01
Cloud computing provides information processing power and business services, delivering these services over the Internet from centrally hosted locations. Major technology corporations aim to supply these services to every sector of the economy. Deploying business processes 'in the cloud' requires special attention to the regulatory and business risks assumed when running on both hardware and software that are outside the direct control of a company. The identification of risks at the correct service level allows a good mitigation strategy to be selected. The pharmaceutical industry can take advantage of existing risk management strategies that have already been tested in the finance and electronic commerce sectors. In this review, the business risks associated with the use of cloud computing are discussed, and mitigations achieved through knowledge from securing services for electronic commerce and from good IT practice are highlighted.
Adventitious sounds identification and extraction using temporal-spectral dominance-based features.
Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook
2011-11-01
Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system by the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on the analysis of the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant high definition TF representation of RS signals as compared to the conventional linear TF analysis methods, yet preserves the low computational complexity as compared to quadratic TF analysis methods. The phase information discarded in the conventional spectrogram has been adopted for the estimation of IF and group delay, and a temporal-spectral dominance spectrogram has subsequently been constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features has also been proposed to quantify the shapes of the obtained TF contours, and therefore strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method.
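The IF analysis above refines a much simpler starting point: tracking the peak-magnitude frequency of each spectrogram frame. A minimal sketch of that ridge-following baseline (window and hop sizes are illustrative; the paper's phase-derived time-corrected IF and dominance measure go well beyond this):

```python
import numpy as np

def if_ridge(signal, fs, win=256, hop=128):
    """Crude instantaneous-frequency ridge of a signal.

    For each Hann-windowed STFT frame, return the frequency of the
    largest-magnitude bin. Adequate for a single strong tone such as a
    monophonic wheeze; multicomponent sounds need the full TF decomposition.
    """
    window = np.hanning(win)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    ridge = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * window
        spectrum = np.abs(np.fft.rfft(frame))
        ridge.append(freqs[int(np.argmax(spectrum))])
    return np.array(ridge)
```

For a clean tone the ridge sits on the tone's frequency in every frame; noise robustness and component separation are exactly what the dominance-based method adds.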
The nature and use of prediction skills in a biological computer simulation
NASA Astrophysics Data System (ADS)
Lavoie, Derrick R.; Good, Ron
The primary goal of this study was to examine the science process skill of prediction using qualitative research methodology. The think-aloud interview, modeled after Ericsson and Simon (1984), led to the identification of 63 program exploration and prediction behaviors. The performances of seven formal and seven concrete operational high-school biology students were videotaped during a three-phase learning sequence on water pollution. Subjects explored the effects of five independent variables on two dependent variables over time using a computer-simulation program. Predictions were made concerning the effect of the independent variables upon dependent variables through time. Subjects were identified according to initial knowledge of the subject matter and success at solving three selected prediction problems. Successful predictors generally had high initial knowledge of the subject matter and were formal operational. Unsuccessful predictors generally had low initial knowledge and were concrete operational. High initial knowledge seemed to be more important to predictive success than stage of Piagetian cognitive development. Successful prediction behaviors involved systematic manipulation of the independent variables, note taking, identification and use of appropriate independent-dependent variable relationships, high interest and motivation, and, in general, higher-level thinking skills. Behaviors characteristic of unsuccessful predictors were nonsystematic manipulation of independent variables, lack of motivation and persistence, misconceptions, and the identification and use of inappropriate independent-dependent variable relationships.
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Al-Kinani, G.
1983-01-01
The potential United States domestic telecommunications demand for satellite-provided customer premises voice, data and video services through the year 2000 was forecast, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose the following objectives were achieved: development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by customer premises services (CPS) systems, identification of that portion of the satellite market addressable by Ka-band CPS systems, and postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a market distribution model and a network optimization model. Forecasts were developed for 1980, 1990, and 2000; for voice, data and video services; for terrestrial and satellite delivery modes; and for C-, Ku- and Ka-bands.
PROGRAM FOR THE IDENTIFICATION AND REPLACEMENT OF ENDOCRINE DISRUPTING CHEMICALS
A computer software program is being developed to aid in the identification and replacement of endocrine disrupting chemicals (EDC). This program will comprise two distinct areas of research: identification of potential EDC and suggestions for replacing those potential EDC. ...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
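Unifying per-peptide significances into one organism-level significance is the kind of aggregation the pipeline above performs. As a generic stand-in (MiCId's actual unified E-value weighting is more elaborate and is not reproduced here), per-peptide p-values can be combined with Fisher's method, whose chi-square survival function has a closed form for even degrees of freedom:

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values with Fisher's method.

    X = -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees
    of freedom under the null; for even df the survival function is
    exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!, so no stats library is needed.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))
```

With a single p-value the method returns it unchanged; several moderately small p-values combine into a much stronger organism-level significance, which is what lets many weak peptide identifications jointly pin down a microbe.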
Identification of dynamic load for prosthetic structures.
Zhang, Dequan; Han, Xu; Zhang, Zhongpu; Liu, Jie; Jiang, Chao; Yoda, Nobuhiro; Meng, Xianghua; Li, Qing
2017-12-01
Dynamic load exists in numerous biomechanical systems, and its identification is a critical issue for characterizing dynamic behaviors and studying the biomechanical consequences of the systems. This study aims to identify dynamic load in dental prosthetic structures, namely, a 3-unit implant-supported fixed partial denture (I-FPD) and a teeth-supported fixed partial denture. The 3-dimensional finite element models were constructed from a specific patient's computerized tomography images. A forward algorithm and regularization technique were developed for identifying dynamic load. To verify the effectiveness of the identification method proposed, the I-FPD and teeth-supported fixed partial denture structures were investigated to determine the dynamic loads. For validating the results of inverse identification, an experimental force-measuring system was developed by using a 3-dimensional piezoelectric transducer to measure the dynamic load in the I-FPD structure in vivo. The computationally identified loads were presented with different noise levels to determine their influence on the identification accuracy. The errors between the measured load and the identified counterpart were calculated for evaluating the practical applicability of the proposed procedure in biomechanical engineering. This study is expected to play a demonstrative role in identifying dynamic loading in biomedical systems, where a direct in vivo measurement may be rather demanding in some areas of clinical interest. Copyright © 2017 John Wiley & Sons, Ltd.
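The "forward algorithm and regularization technique" pairing above is the classic setup for inverse load identification: a forward operator maps the unknown load history to the measured response, and regularization stabilizes the inversion. A minimal Tikhonov sketch (H here is any assumed linear forward operator; the paper builds it from the finite element model):

```python
import numpy as np

def identify_load(H, y, lam=1e-3):
    """Tikhonov-regularised inverse load identification.

    Given a forward matrix H with y = H f + noise, recover the load f by
    minimising ||H f - y||^2 + lam * ||f||^2, i.e. solving the normal
    equations (H^T H + lam I) f = H^T y. The penalty lam trades bias for
    stability when H is ill-conditioned, as inverse problems typically are.
    """
    H = np.asarray(H, dtype=float)
    A = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ np.asarray(y, dtype=float))
```

Raising `lam` damps noise amplification at the cost of smoothing the identified load, which is why the abstract examines identification accuracy at different noise levels.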
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. Particularly this is the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given with a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adoption of filter parameters to the current scene conditions in an adaptive way. Implementation of the suggested hardware structure is given at the level of field programmable system level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic clue is achieved using a time-sequential video stream, but this shows no difference for real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
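The HSV "polyhedron of threshold values" above is an intersection of half-spaces, each a threshold plane. A minimal software sketch using the simplest such polyhedron, an axis-aligned box (the numeric bounds are illustrative defaults, not the published ones, and the real design evaluates this per pixel in hardware):

```python
def is_skin(h, s, v,
            h_max=50.0, s_min=0.23, s_max=0.68, v_min=0.35):
    """Classify one HSV pixel against a box-shaped skin-color region.

    h in degrees [0, 360), s and v in [0, 1]. Each inequality is one
    half-space of the threshold polyhedron; an adaptive filter manager
    would tune these bounds to the current scene.
    """
    return (h <= h_max) and (s_min <= s <= s_max) and (v >= v_min)
```

Because every pixel is tested independently against fixed comparisons, the filter maps naturally onto a single-pass hardware pipeline, which is the point of the FPSLIC implementation.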
Advanced imaging in acute stroke management-Part I: Computed tomographic.
Saini, Monica; Butcher, Ken
2009-01-01
Neuroimaging is fundamental to stroke diagnosis and management. Non-contrast computed tomography (NCCT) has been the primary imaging modality utilized for this purpose for almost four decades. Although NCCT does permit identification of intracranial hemorrhage and parenchymal ischemic changes, insights into blood vessel patency and cerebral perfusion are limited. Advances in reperfusion strategies have made identification of potentially salvageable brain tissue a more practical concern. Advances in CT technology now permit identification of acute and chronic arterial lesions, as well as cerebral blood flow deficits. This review outlines principles of advanced CT image acquisition and its utility in acute stroke management.
Artese, Anna; Alcaro, Stefano; Moraca, Federica; Reina, Rocco; Ventura, Marzia; Costantino, Gabriele; Beccari, Andrea R; Ortuso, Francesco
2013-05-01
During the first edition of the Computationally Driven Drug Discovery meeting, held in November 2011 at Dompé Pharma (L'Aquila, Italy), a questionnaire regarding the diffusion and the use of computational tools for drug-design purposes in both academia and industry was distributed among all participants. This is a follow-up of a previously reported investigation carried out among a few companies in 2007. The new questionnaire implemented five sections dedicated to: research group identification and classification; 18 different computational techniques; software information; hardware data; and economical business considerations. In this article, together with a detailed history of the different computational methods, a statistical analysis of the survey results that enabled the identification of the prevalent computational techniques adopted in drug-design projects is reported and a profile of the computational medicinal chemist currently working in academia and pharmaceutical companies in Italy is highlighted.
Edwards, Roger L; Edwards, Sandra L; Bryner, James; Cunningham, Kelly; Rogers, Amy; Slattery, Martha L
2008-04-01
We describe a computer-assisted data collection system developed for a multicenter cohort study of American Indian and Alaska Native people. The Study Computer-Assisted Participant Evaluation System, or SCAPES, is built around a central database server that controls a small private network with touch screen workstations. SCAPES encompasses the self-administered questionnaires, the keyboard-based stations for interviewer-administered questionnaires, a system for inputting medical measurements, and administrative tasks such as data exporting, backup and management. Elements of SCAPES hardware/network design, data storage, programming language, software choices, questionnaire programming including the programming of questionnaires administered using audio computer-assisted self-interviewing (ACASI), and participant identification/data security system are presented. Unique features of SCAPES are that data are promptly made available to participants in the form of health feedback; data can be quickly summarized for tribes for health monitoring and planning at the community level; and data are available to study investigators for analyses and scientific evaluation.
1980-03-01
Oceanography Center (FNOC) is currently testing and evaluating a computerized flight plan system, referred to, for short, as OPARS. This system, developed to...replace the Lockheed Jetplan flight plan system, provides users at remote sites with direct access to the FNOC computer via 11 telephone lines. The...validity, but only for format. For example, an entry of ABCE, as the four-letter identification code for the destination airfield, would be accepted
Computational methods and challenges in hydrogen/deuterium exchange mass spectrometry.
Claesen, Jürgen; Burzykowski, Tomasz
2017-09-01
Hydrogen/Deuterium exchange (HDX) has been applied, since the 1930s, as an analytical tool to study the structure and dynamics of (small) biomolecules. The popularity of using HDX to study proteins increased drastically in the last two decades due to the successful combination with mass spectrometry (MS). Together with this growth in popularity, several technological advances have been made, such as improved quenching and fragmentation. As a consequence of these experimental improvements and the increased use of protein-HDXMS, large amounts of complex data are generated, which require appropriate analysis. Computational analysis of HDXMS requires several steps. A typical workflow for proteins consists of identification of (non-)deuterated peptides or fragments of the protein under study (local analysis), or identification of the deuterated protein as a whole (global analysis); determination of the deuteration level; estimation of the protection extent or exchange rates of the labile backbone amide hydrogen atoms; and a statistically sound interpretation of the estimated protection extent or exchange rates. Several algorithms, specifically designed for HDX analysis, have been proposed. They range from procedures that focus on one specific step in the analysis of HDX data to complete HDX workflow analysis tools. In this review, we provide an overview of the computational methods and discuss outstanding challenges. © 2016 Wiley Periodicals, Inc. Mass Spec Rev 36:649-667, 2017. © 2016 Wiley Periodicals, Inc.
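One step in the workflow above, determining the deuteration level, is commonly computed from the centroid-mass shift between the deuterated and undeuterated spectra of a peptide. The sketch below assumes centroided peak lists and ignores back-exchange correction; the function names are illustrative:

```python
def centroid_mass(mz, intensity):
    """Intensity-weighted mean m/z of a centroided peak list."""
    return sum(m * i for m, i in zip(mz, intensity)) / sum(intensity)

def deuteration_level(mz_d, int_d, mz_0, int_0, n_exchangeable):
    """Fractional deuterium uptake: centroid shift divided by the number
    of exchangeable backbone amide hydrogens."""
    uptake = centroid_mass(mz_d, int_d) - centroid_mass(mz_0, int_0)
    return uptake / n_exchangeable
```

For example, a 2 Da centroid shift on a peptide with four exchangeable amide hydrogens corresponds to 50% deuteration.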
Identification of human microRNA targets from isolated argonaute protein complexes.
Beitzinger, Michaela; Peters, Lasse; Zhu, Jia Yun; Kremmer, Elisabeth; Meister, Gunter
2007-06-01
MicroRNAs (miRNAs) constitute a class of small non-coding RNAs that regulate gene expression on the level of translation and/or mRNA stability. Mammalian miRNAs associate with members of the Argonaute (Ago) protein family and bind to partially complementary sequences in the 3' untranslated region (UTR) of specific target mRNAs. Computer algorithms based on factors such as free binding energy or sequence conservation have been used to predict miRNA target mRNAs. Based on such predictions, up to one third of all mammalian mRNAs seem to be under miRNA regulation. However, due to the low degree of complementarity between the miRNA and its target, such computer programs are often imprecise and therefore not very reliable. Here we report the first biochemical approach for identifying miRNA targets from human cells. Using highly specific monoclonal antibodies against members of the Ago protein family, we co-immunoprecipitate Ago-bound mRNAs and identify them by cloning. Interestingly, most of the identified targets are also predicted by different computer programs. Moreover, we randomly analyzed six different target candidates and were able to experimentally validate five as miRNA targets. Our data clearly indicate that miRNA targets can be experimentally identified from Ago complexes and therefore provide a new tool to directly analyze miRNA function.
21 CFR 868.1730 - Oxygen uptake computer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...
21 CFR 868.1730 - Oxygen uptake computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...
21 CFR 868.1730 - Oxygen uptake computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...
21 CFR 868.1730 - Oxygen uptake computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...
21 CFR 868.1730 - Oxygen uptake computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Oxygen uptake computer. 868.1730 Section 868.1730...) MEDICAL DEVICES ANESTHESIOLOGY DEVICES Diagnostic Devices § 868.1730 Oxygen uptake computer. (a) Identification. An oxygen uptake computer is a device intended to compute the amount of oxygen consumed by a...
DOE's nation-wide system for access control can solve problems for the federal government
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callahan, S.; Tomes, D.; Davis, G.
1996-07-01
The U.S. Department of Energy's (DOE's) ongoing efforts to improve its physical and personnel security systems while reducing its costs provide a model for federal government visitor processing. Through the careful use of standardized badges, computer databases, and networks of automated access control systems, the DOE is increasing the security associated with travel throughout the DOE complex and, at the same time, eliminating paperwork, special badging, and visitor delays. The DOE is also improving badge accountability, personnel identification assurance, and access authorization timeliness and accuracy. Like the federal government, the DOE has dozens of geographically dispersed locations run by many different contractors operating a wide range of security systems. The DOE has overcome these obstacles by providing data format standards, a complex-wide virtual network for security, the adoption of a standard high security system, and an open-systems-compatible link for any automated access control system. If the location's level of security requires it, positive visitor identification is accomplished by personal identification number (PIN) and/or by biometrics. At sites with automated access control systems, this positive identification is integrated into the portals.
21 CFR 870.1425 - Programmable diagnostic computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...
21 CFR 870.1425 - Programmable diagnostic computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...
21 CFR 870.1425 - Programmable diagnostic computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...
21 CFR 870.1425 - Programmable diagnostic computer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...
21 CFR 870.1425 - Programmable diagnostic computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Programmable diagnostic computer. 870.1425 Section... (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1425 Programmable diagnostic computer. (a) Identification. A programmable diagnostic computer is a device that can be...
Dysregulation in level of goal and action identification across psychological disorders.
Watkins, Edward
2011-03-01
Goals, events, and actions can be mentally represented within a hierarchical framework that ranges from more abstract to more concrete levels of identification. A more abstract level of identification involves general, superordinate, and decontextualized mental representations that convey the meaning of goals, events, and actions, "why" an action is performed, and its purpose, ends, and consequences. A more concrete level of identification involves specific and subordinate mental representations that include contextual details of goals, events, and actions, and the specific "how" details of an action. This review considers three lines of evidence for considering that dysregulation of level of goal/action identification may be a transdiagnostic process. First, there is evidence that different levels of identification have distinct functional consequences and that in non-clinical samples level of goal/action identification appears to be regulated in a flexible and adaptive way to match the level of goal/action identification to circumstances. Second, there is evidence that level of goal/action identification causally influences symptoms and processes involved in psychological disorders, including emotional response, repetitive thought, impulsivity, problem solving and procrastination. Third, there is evidence that the level of goal/action identification is biased and/or dysregulated in certain psychological disorders, with a bias towards more abstract identification for negative events in depression, GAD, PTSD, and social anxiety. Copyright © 2010 Elsevier Ltd. All rights reserved.
Computed tomographic identification of calcified optic nerve drusen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramirez, H.; Blatt, E.S.; Hibri, N.S.
1983-07-01
Four cases of optic disk drusen were accurately diagnosed with orbital computed tomography (CT). The radiologist should be aware of the characteristic CT finding of discrete calcification within an otherwise normal optic disk. This benign process is easily differentiated from lesions such as calcific neoplastic processes of the posterior globe. CT identification of optic disk drusen is essential in the evaluation of visual field defects, migraine-like headaches, and pseudopapilledema.
Neural Network Design on the SRC-6 Reconfigurable Computer
2006-12-01
fingerprint identification. In this field, automatic identification methods are used to save time, especially for the purpose of fingerprint matching in...grid widths and lengths and therefore was useful in producing an accurate canvas with which to create sample training images. The added benefit of...tools available free of charge and readily accessible on the computer, it was simple to design bitmap data files visually on a canvas and then
NASA Astrophysics Data System (ADS)
Wang, K.; Jönsson, P.; Gaigalas, G.; Radžiūtė, L.; Rynkun, P.; Del Zanna, G.; Chen, C. Y.
2018-04-01
The fully relativistic multiconfiguration Dirac–Hartree–Fock method is used to compute excitation energies and lifetimes for the 143 lowest states of the 3s²3p³, 3s3p⁴, 3s²3p²3d, 3s3p³3d, 3p⁵, and 3s²3p3d² configurations in P-like ions from Cr X to Zn XVI. Multipole (E1, M1, E2, M2) transition rates, line strengths, oscillator strengths, and branching fractions among these states are also given. Valence–valence and core–valence electron correlation effects are systematically accounted for using large basis function expansions. Computed excitation energies are compared with the NIST ASD and CHIANTI compiled values and with previous calculations. The mean absolute difference, after removing obvious outliers, between computed and observed energies for the 41 lowest identified levels in Fe XII is only 0.057%, implying that the computed energies are accurate enough to aid identification of new emission lines from the Sun and other astrophysical sources. The amount of energy and transition data of high accuracy is significantly increased for several P-like ions of astrophysical interest, for which experimental data are still very scarce.
Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang
2015-04-01
Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. 
Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
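The multi-tiered classification idea described above can be sketched with a minimal linear SVM. The Pegasos-style trainer below is written from scratch rather than taken from the paper, and the two-tier setup ("object vs. background", then "cell A vs. cell B") with its 2-D feature values is invented toy data, not the authors' pipeline:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Minimal Pegasos-style linear SVM via hinge-loss SGD; labels in {-1,+1}."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:          # margin violated: hinge step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                   # only shrink (regularize)
                w = (1 - eta * lam) * w
    return w, b

# Tier 1 separates objects of interest from background;
# Tier 2 distinguishes two cell identities among the detected objects.
X_bg = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]])
X_a = np.array([[2.0, 2.0], [2.2, 1.9]])
X_b = np.array([[2.0, -2.0], [1.9, -2.2]])
w1, b1 = train_linear_svm(np.vstack([X_bg, X_a, X_b]),
                          np.array([-1, -1, -1, 1, 1, 1, 1]))
w2, b2 = train_linear_svm(np.vstack([X_a, X_b]), np.array([1, 1, -1, -1]))

def classify(x):
    """Apply the classifiers in tiers, as in the framework described."""
    if x @ w1 + b1 < 0:
        return "background"
    return "cell_A" if x @ w2 + b2 > 0 else "cell_B"
```

The tiering matters because each downstream classifier only ever sees inputs the upstream tier has already filtered, which keeps each individual decision problem simple.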
Unger, Jakob; Merhof, Dorit; Renner, Susanne
2016-11-16
Global Plants, a collaborative between JSTOR and some 300 herbaria, now contains about 2.48 million high-resolution images of plant specimens, a number that continues to grow, and collections that are digitizing their specimens at high resolution are allocating considerable resources to the maintenance of computer hardware (e.g., servers) and to acquiring digital storage space. We here apply machine learning, specifically the training of a Support-Vector-Machine, to classify specimen images into categories, ideally at the species level, using the 26 most common tree species in Germany as a test case. We designed an analysis pipeline and classification system consisting of segmentation, normalization, feature extraction, and classification steps and evaluated the system in two test sets, one with 26 species, the other with 17, in each case using 10 images per species of plants collected between 1820 and 1995, which simulates the empirical situation that most named species are represented in herbaria and databases, such as JSTOR, by few specimens. We achieved 73.21% accuracy of species assignments in the larger test set, and 84.88% in the smaller test set. The results of this first application of a computer vision algorithm trained on images of herbarium specimens show that despite the problem of overlapping leaves, leaf-architectural features can be used to categorize specimens to species with good accuracy. Computer vision is poised to play a significant role in future rapid identification, at least for frequently collected genera or species in the European flora.
NASA Astrophysics Data System (ADS)
Ramos, José A.; Mercère, Guillaume
2016-12-01
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
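For intuition on the 1D building blocks the abstract mentions (block-Hankel structure, SVD-based order determination), here is a minimal Ho-Kalman-style realization from impulse-response data. It is a textbook sketch, far simpler than the 2D CRSD Roesser algorithm of the paper, and the example system is made up:

```python
import numpy as np

# Markov parameters h_k = C A^k B of the scalar system
# x_{k+1} = 0.8 x_k + u_k,  y_k = x_k   (so h_k = 0.8^k)
h = [0.8 ** k for k in range(8)]

# Block-Hankel matrices built from the impulse response
H0 = np.array([[h[i + j] for j in range(3)] for i in range(3)])
H1 = np.array([[h[i + j + 1] for j in range(3)] for i in range(3)])

# The SVD reveals the system order: one dominant singular value here
U, s, Vt = np.linalg.svd(H0)
n = 1
Obs = U[:, :n] * np.sqrt(s[:n])            # observability factor
Con = np.sqrt(s[:n])[:, None] * Vt[:n]     # controllability factor

# Shift-invariance of the Hankel matrix recovers the state matrix A
A = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Con)
```

Subspace methods such as N4SID, PO-MOESP and CCA generalize this factor-and-shift pattern to input-output data with noise, which is where the stabilised Kalman gain and positive-real considerations above come in.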
Blind source computer device identification from recorded VoIP calls for forensic investigation.
Jahanirad, Mehdi; Anuar, Nor Badrul; Wahab, Ainuddin Wahid Abdul
2017-03-01
The VoIP services provide fertile ground for criminal activity, thus identifying the transmitting computer devices from a recorded VoIP call may help the forensic investigator to reveal useful information. It also proves the authenticity of the call recording submitted to the court as evidence. This paper extended the previous study on the use of recorded VoIP calls for blind source computer device identification. Although initial results were promising, a theoretical explanation for them had yet to be established. The study suggested computing entropy of mel-frequency cepstrum coefficients (entropy-MFCC) from near-silent segments as an intrinsic feature set that captures the device response function due to the tolerances in the electronic components of individual computer devices. By applying the supervised learning techniques of naïve Bayesian, linear logistic regression, neural networks and support vector machines to the entropy-MFCC features, state-of-the-art identification accuracy of near 99.9% has been achieved on different sets of computer devices for both call recording and microphone recording scenarios. Furthermore, unsupervised learning techniques, including simple k-means, expectation-maximization and density-based spatial clustering of applications with noise (DBSCAN) provided promising results for the call recording dataset by assigning the majority of instances to their correct clusters. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
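An entropy-of-MFCC feature can be approximated as the Shannon entropy of each cepstral coefficient's distribution across frames. The histogram-based estimator below is one plausible formulation, not necessarily the paper's exact definition, and it assumes the MFCC matrix has already been extracted from the near-silent segments:

```python
import numpy as np

def coefficient_entropy(mfcc, bins=16):
    """Shannon entropy (bits) of each cepstral coefficient across frames.
    `mfcc` has shape (n_frames, n_coefficients); MFCC extraction is assumed."""
    entropies = []
    for column in mfcc.T:
        counts, _ = np.histogram(column, bins=bins)
        p = counts[counts > 0] / counts.sum()   # drop empty bins, normalize
        entropies.append(float(-np.sum(p * np.log2(p))))
    return np.array(entropies)
```

A coefficient that never varies carries zero entropy, while one spread evenly over the histogram approaches log2(bins) bits; the device-specific response shapes where each device falls between these extremes.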
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel, driven by a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce are used in this pipeline as parallel and distributed computing tools on commodity hardware. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
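The core unique-signature step, counting k-mers and subtracting those also present in the nontarget set, can be sketched in a few lines of standard-library Python. HTSFinder itself distributes this computation over Hadoop/MapReduce; the function names and toy sequences here are illustrative:

```python
from collections import Counter

def kmer_counts(sequence, k):
    """Frequency of every k-mer in a sequence (sliding window)."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

def unique_signatures(target_seq, nontarget_seq, k):
    """k-mers present in the target but absent from the nontarget set."""
    return set(kmer_counts(target_seq, k)) - set(kmer_counts(nontarget_seq, k))

# Toy example with k = 3; real runs use whole-genome databases
sigs = unique_signatures("ACGTACGGA", "ACGTTTT", k=3)
```

Counting is embarrassingly parallel per sequence chunk, which is why the MapReduce model fits: mappers emit (k-mer, count) pairs and reducers sum them.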
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive effort for low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly required. Like an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline HTSFinder (high-throughput signature finder) with a corresponding k-mer generator GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in the arbitrarily selected target and nontarget databases. Hadoop and MapReduce as parallel and distributed computing tools with commodity hardware are used in this pipeline. This approach brings the power of high-performance computing into the ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genome. A considerable number of detected unique and common DNA signatures of the target database bring the opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis
1988-01-01
The applicability of H infinity control theory to the problems of large space structures (LSS) control was investigated. A complete evaluation of any technique as a candidate for large space structure control involves analytical evaluation, algorithmic evaluation, evaluation via simulation studies, and experimental evaluation. The results of analytical and algorithmic evaluations are documented. The analytical evaluation involves the determination of the appropriateness of the underlying assumptions inherent in the H infinity theory, the determination of the capability of the H infinity theory to achieve the design goals likely to be imposed on an LSS control design, and the identification of any LSS-specific simplifications or complications of the theory. The results of the analytical evaluation are presented in the form of a tutorial on the subject of H infinity control theory with the LSS control designer in mind. The algorithmic evaluation of H infinity for LSS control pertains to the identification of general, high-level algorithms for effecting the application of H infinity to LSS control problems, the identification of specific, numerically reliable algorithms necessary for a computer implementation of the general algorithms, the recommendation of a flexible software system for implementing the H infinity design steps, and ultimately the actual development of the necessary computer codes. Finally, the state of the art in H infinity applications is summarized with a brief outline of the most promising areas of current research.
Computer-Aided Parallelizer and Optimizer
NASA Technical Reports Server (NTRS)
Jin, Haoqiang
2011-01-01
The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
CHIRAL--A Computer Aided Application of the Cahn-Ingold-Prelog Rules.
ERIC Educational Resources Information Center
Meyer, Edgar F., Jr.
1978-01-01
A computer program is described for identification of chiral centers in molecules. Essential input to the program includes both atomic and bonding information. The program does not require computer graphic input-output. (BB)
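As a much-simplified illustration of the idea (the real Cahn-Ingold-Prelog rules require ranked traversal of each substituent's bonding graph, which is why the program needs both atomic and bonding input): a tetrahedral carbon is a candidate chiral center only when its four substituents are mutually distinguishable. The toy check below uses plain labels as a stand-in for full CIP ranking:

```python
def is_candidate_chiral_center(substituents):
    """Toy test: a tetrahedral carbon can be a chiral center only if its
    four substituent groups all differ. Labels stand in for CIP ranking."""
    return len(substituents) == 4 and len(set(substituents)) == 4
```

For example, the central carbon of lactic acid (H, CH3, OH, COOH) passes, while a carbon bearing two hydrogens fails immediately.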
Application of a fast skyline computation algorithm for serendipitous searching problems
NASA Astrophysics Data System (ADS)
Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary
2018-02-01
Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information of non-skyline entries must be stored since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels in order to accelerate tree construction and traversal. In this study, we presented the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
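A baseline (non-incremental) implementation helps fix the definition that JR-tree accelerates: an entry belongs to the skyline if no other entry dominates it, i.e. is at least as good in every attribute and strictly better in one. A naive O(n²) sketch, with "larger is better" assumed for all attributes:

```python
def dominates(a, b):
    """a dominates b: at least as good in every attribute, strictly
    better in at least one (larger values assumed better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(points):
    """Naive O(n^2) skyline: keep every entry no other entry dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (1, 1) is dominated by (2, 2); the other three are Pareto optimal
pareto = skyline([(3, 1), (1, 3), (2, 2), (1, 1)])
```

The quadratic cost of this baseline on dynamic, high-dimensional data is exactly what motivates indexed approaches such as JR-tree.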
Robust uncertainty evaluation for system identification on distributed wireless platforms
NASA Astrophysics Data System (ADS)
Crinière, Antoine; Döhler, Michael; Le Cam, Vincent; Mevel, Laurent
2016-04-01
Health monitoring of civil structures by system identification procedures from automatic control is now accepted as a valid approach. These methods provide frequencies and modeshapes from the structure over time. For continuous monitoring, the excitation of a structure is usually ambient, thus unknown and assumed to be noise. Hence, all estimates from the vibration measurements are realizations of random variables with inherent uncertainty due to (unknown) process and measurement noise and finite data length. The underlying algorithms usually run in Matlab under the assumption of a large memory pool and considerable computational power. Even under these premises, computational and memory usage is heavy and unrealistic for embedding in on-site sensor platforms such as the PEGASE platform. Moreover, the current push for distributed wireless systems calls for algorithmic adaptation to lower data exchanges and maximize local processing. Finally, the recent breakthrough in system identification allows us to process both frequency information and its related uncertainty together from one and only one data sequence, at the expense of a computational and memory explosion that requires even more careful attention than before. The current approach focuses on presenting a system identification procedure, called multi-setup subspace identification, that processes both frequencies and their related variances from a set of interconnected wireless systems, with all computation running locally within the limited memory pool of each system before being merged on a host supervisor. Careful attention is given to data exchanges and I/O satisfying OGC standards, as well as to minimizing memory footprints and maximizing computational efficiency. These systems are designed for autonomous operation in the field and could later be included in a widely distributed architecture such as the Cloud2SM project.
The usefulness of these strategies is illustrated on data from a progressive damage action on a prestressed concrete bridge. References: [1] E. Carden and P. Fanning. Vibration based condition monitoring: a review. Structural Health Monitoring, 3(4):355-377, 2004. [2] M. Döhler and L. Mevel. Efficient multi-order uncertainty computation for stochastic subspace identification. Mechanical Systems and Signal Processing, 38(2):346-366, 2013. [3] M. Döhler and L. Mevel. Modular subspace-based system identification from multi-setup measurements. IEEE Transactions on Automatic Control, 57(11):2951-2956, 2012. [4] M. Döhler, X.-B. Lam, and L. Mevel. Uncertainty quantification for modal parameters from stochastic subspace identification on multi-setup measurements. Mechanical Systems and Signal Processing, 36(2):562-581, 2013. [5] A. Crinière, J. Dumoulin, L. Mevel, G. Andrade-Barosso, and M. Simonin. The Cloud2SM Project. European Geosciences Union General Assembly (EGU2015), Vienna, Austria, April 2015.
A Feasibility Study of View-independent Gait Identification
2012-03-01
…ice skates. For walking, the footprint records for single pixels form clusters that are well separated in space and time. (Any overlap of contact…) Pattern Recognition 2007, 1-8. Cheng M-H, Ho M-F & Huang C-L (2008), "Gait Analysis for Human Identification Through Manifold Learning and HMM"… Machine Learning and Cybernetics 2005, 4516-4521. Moeslund T B & Granum E (2001), "A Survey of Computer Vision-Based Human Motion Capture", Computer Vision…
Improved Targeting Through Collaborative Decision-Making and Brain Computer Interfaces
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Barrero, David F.; McDonald-Maier, Klaus
2013-01-01
This paper reports a first step toward a brain-computer interface (BCI) for collaborative targeting. Specifically, we explore, from a broad perspective, how the collaboration of a group of people can increase performance on a simple target identification task. To this end, we asked a group of people to identify the location and color of a sequence of targets appearing on the screen and measured the time and accuracy of their responses. The individual results are compared to a collective identification result determined by simple majority voting, with a random choice in case of a tie. The results are promising, as the identification becomes significantly more reliable even with this simple voting and a small number of people (whether odd or even) involved in the decision. In addition, the paper briefly analyzes the role of brain-computer interfaces in collaborative targeting, extending the targeting task by using a BCI instead of a mechanical response.
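The reliability gain from simple majority voting can be illustrated with a small simulation (a hypothetical sketch, not the authors' experimental code; the individual accuracy of 0.8 and the trial count are illustrative assumptions):

```python
import random

def majority_vote_accuracy(p_individual, n_voters, n_trials=20000, seed=42):
    """Estimate group accuracy when n_voters independent observers,
    each correct with probability p_individual, decide by simple
    majority; ties (possible with an even number of voters) are
    broken by a random choice, as in the paper."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        votes = sum(rng.random() < p_individual for _ in range(n_voters))
        if votes * 2 > n_voters:        # clear majority is correct
            correct += 1
        elif votes * 2 == n_voters:     # tie -> random choice
            correct += rng.random() < 0.5
    return correct / n_trials

solo = majority_vote_accuracy(0.8, 1)   # baseline: one observer
group = majority_vote_accuracy(0.8, 5)  # small odd-sized group
```

Even five voters lift a 0.8 individual accuracy above 0.9, and an even-sized group with random tie-breaking still beats the individual.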
The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.
Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J
2017-11-01
Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered to be the gold standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in a shared task challenge by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) in the de-identification regular track to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques, such as disambiguation rules, term ambiguity measurement, and the use of a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93.0%). With these encouraging results, we gained the confidence to improve the tool and use it for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo
2018-06-01
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
How enhanced molecular ions in Cold EI improve compound identification by the NIST library.
Alon, Tal; Amirav, Aviv
2015-12-15
Library-based compound identification with electron ionization (EI) mass spectrometry (MS) is a well-established identification method which provides the names and structures of sample compounds up to the isomer level. The library (such as NIST) search algorithm compares different EI mass spectra in the library's database with the measured EI mass spectrum, assigning each of them a similarity score called 'Match' and an overall identification probability. Cold EI, electron ionization of vibrationally cold molecules in supersonic molecular beams, provides mass spectra with all the standard EI fragment ions combined with enhanced Molecular Ions and high-mass fragments. As a result, Cold EI mass spectra differ from those provided by standard EI and tend to yield lower matching scores. However, in most cases, library identification actually improves with Cold EI, as library identification probabilities for the correct library mass spectra increase, despite the lower matching factors. This research examined the way that enhanced molecular ion abundances affect library identification probability and the way that Cold EI mass spectra, which include enhanced molecular ions and high-mass fragment ions, typically improve library identification results. It involved several computer simulations, which incrementally modified the relative abundances of the various ions and analyzed the resulting mass spectra. The simulation results support previous measurements, showing that while enhanced molecular ion and high-mass fragment ions lower the matching factor of the correct library compound, the matching factors of the incorrect library candidates are lowered even more, resulting in a rise in the identification probability for the correct compound. 
This behavior, previously observed when analyzing Cold EI mass spectra, can be explained by the fact that high-mass ions, and especially the molecular ion, characterize a compound better than low-mass ions and therefore carry more weight in library search identification algorithms. These ions are uniquely abundant in Cold EI, which thereby enables enhanced compound characterization along with improved NIST library based identification. Copyright © 2015 John Wiley & Sons, Ltd.
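The effect can be reproduced with a toy weighted dot-product match (a hedged sketch: the m/z and intensity exponents and all spectra below are illustrative assumptions, not NIST's actual algorithm or the paper's data):

```python
import math

def match_score(spec_a, spec_b, mz_power=1.3, int_power=0.53):
    """Toy NIST-style weighted dot-product between two spectra given as
    {m/z: intensity} dicts; the exponents weight high-m/z peaks more,
    so the molecular ion counts more than low-mass fragments."""
    def weighted(spec):
        return {mz: (mz ** mz_power) * (i ** int_power) for mz, i in spec.items()}
    wa, wb = weighted(spec_a), weighted(spec_b)
    dot = sum(v * wb[mz] for mz, v in wa.items() if mz in wb)
    norm_a = math.sqrt(sum(v * v for v in wa.values()))
    norm_b = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (norm_a * norm_b)

# Library entries: the correct compound and a wrong candidate that shares
# the low-mass fragments but has a different molecular ion.
correct_lib = {51: 40, 77: 100, 105: 60, 154: 20}
wrong_lib   = {51: 45, 77: 100, 105: 55, 168: 15}

standard_ei = {51: 40, 77: 100, 105: 60, 154: 20}   # matches the library entry
cold_ei     = {51: 40, 77: 100, 105: 60, 154: 90}   # enhanced molecular ion

m_correct_std  = match_score(standard_ei, correct_lib)
m_wrong_std    = match_score(standard_ei, wrong_lib)
m_correct_cold = match_score(cold_ei, correct_lib)   # lower match for the correct hit...
m_wrong_cold   = match_score(cold_ei, wrong_lib)     # ...but the wrong hit drops more
```

The separation between the correct and incorrect candidates widens under the enhanced molecular ion, which is what raises the identification probability despite the lower match factor.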
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2011 CFR
2011-04-01
...; (ii) Gaming operation name (or identification number) and station number; (iii) Race track, race number, horse identification or event identification, as applicable; (iv) Type of bet(s), each bet amount... wagering on race events while on duty, including during break periods. (g) Computer reports standards. (1...
Electro-Optic Identification (EOID) Research Program
2002-09-30
The goal of this research is to provide computer-assisted identification of underwater mines in electro-optic imagery. Identification algorithms will…greatly reduce the time and risk to reacquire mine-like objects for positive classification and identification. The objectives are to collect electro-optic data under a wide range of operating and environmental conditions and develop precise algorithms that can provide accurate target recognition on this data for all possible conditions.
21 CFR 870.1110 - Blood pressure computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...
21 CFR 870.1110 - Blood pressure computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...
21 CFR 870.1110 - Blood pressure computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...
21 CFR 870.1110 - Blood pressure computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Blood pressure computer. 870.1110 Section 870.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... computer. (a) Identification. A blood pressure computer is a device that accepts the electrical signal from...
Elastic Multi-scale Mechanisms: Computation and Biological Evolution.
Diaz Ochoa, Juan G
2018-01-01
Explanations based on low-level interacting elements are valuable and powerful since they contribute to identifying the key mechanisms of biological functions. However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information about initial states generate future states that cannot be predicted, implying an increase of complexity and open-ended evolution. Such systems are like Turing machines that overlap with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for a constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predator and predated cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. We finally explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with a small elasticity modulus.
NASA Astrophysics Data System (ADS)
Armaroli, Clara; Duo, Enrico; Ciavola, Paolo
2017-04-01
The Emilia-Romagna coastline is located in northern Italy, facing the Adriatic Sea. The area is especially exposed to flooding hazard because of its low-lying nature, high urbanisation and the large exploitation of beach resources for tourism. The identification of hotspots where marine flooding can cause significant damage is, therefore, a key issue. The methodology implemented to identify hotspots is based on the Coastal Risk Assessment Framework tool that was developed in the RISC-KIT project (www.risckit.eu). The tool combines the hazard component with different exposure indicators and is applied along predefined coastal sectors of roughly 1 km alongshore length. The coastline was divided into 106 sectors in which each component was analysed. The hazard component was evaluated through the computation of maximum water levels, obtained as the sum of wave set-up, storm surge and tide, calculated along representative beach profiles, one per sector, and for two return periods (10 and 100 years). The data for the computation of the maximum water level were extracted from the literature. The landward extension of flood-prone areas in each sector was taken from the flood maps produced by the regional authorities for the EU Floods Directive for the same return periods. The exposure indicators were evaluated taking into account the location and type of different assets in each sector and in flood-prone areas. Specifically, the assets taken into account are: the transport network, the utilities (water, gas and electricity) networks, the land use typologies, the social vulnerability status of the population and the business sector. Each component was then ranked from 1 to 5, on a scale based on its computed value (hazard) or importance and location (exposure indicators). A final coastal index (CI) was computed as the root mean square of the geometrical mean of the exposure indicators multiplied by the hazard indicator.
Land use typologies were valued taking into account a classification produced by the regional authorities for the Floods Directive. The social vulnerability status of the population was derived from data produced by the National Statistics Institute. The regional managers provided the locations of the transport and utilities networks. The business indicator was built by comparing the tourist arrivals in each coastal municipality to the total number of arrivals. The results showed that the coast is very exposed to flooding and that the 100-year return period event leads to the identification of a large number of hotspots (65 of 106), defined as sectors with CI > 2.5. The main drivers for hotspot identification were the hazard indicator and the land use typologies, because important transport/utilities networks are not located in flood-prone areas. The most critical sectors are situated in the central-southern part of the coastline, where the most attractive tourist facilities are located and where the coastal corridor is occupied by continuous urbanisation.
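Under one plausible reading of the CI formula described above (square root of the geometric mean of the exposure ranks multiplied by the hazard rank; the exact aggregation and the hotspot threshold of 2.5 are taken from the abstract, but the rank values below are illustrative assumptions), the per-sector computation looks like this:

```python
import math

def coastal_index(hazard, exposures):
    """Toy Coastal Index for one coastal sector: the hazard and each
    exposure indicator are ranks from 1 (low) to 5 (high); the CI is
    read here as sqrt(geometric_mean(exposures) * hazard)."""
    assert 1 <= hazard <= 5 and all(1 <= e <= 5 for e in exposures)
    geo_mean = math.prod(exposures) ** (1.0 / len(exposures))
    return math.sqrt(geo_mean * hazard)

def is_hotspot(ci, threshold=2.5):
    """A sector is flagged as a hotspot when CI exceeds the threshold."""
    return ci > threshold

# One sector with moderate exposures and high hazard (values assumed).
ci = coastal_index(4, [3, 2, 4, 1, 2])
```

By construction the CI stays on the same 1-to-5 scale as the input ranks, which makes the 2.5 threshold interpretable as "above mid-scale".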
Metabolite identification through multiple kernel learning on fragmentation trees.
Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho
2014-06-15
Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. © The Author 2014. Published by Oxford University Press.
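The kernel-combination step at the heart of multiple kernel learning can be sketched in a few lines (a minimal illustration with uniform weights; the paper learns the weights with MKL methods, and the toy Gram matrices below are assumptions):

```python
def combine_kernels(kernel_matrices, weights=None):
    """Multiple kernel learning at its simplest: a convex combination
    K = sum_i w_i * K_i of base Gram matrices (here, e.g., one kernel
    per fragmentation-tree similarity). Uniform weights are the
    baseline; MKL methods optimize the weights instead."""
    m = len(kernel_matrices)
    if weights is None:
        weights = [1.0 / m] * m                   # uniform-MKL baseline
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    n = len(kernel_matrices[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernel_matrices))
             for j in range(n)] for i in range(n)]

# Two toy 2x2 base kernels (e.g. node-set and path similarities of trees).
K1 = [[1.0, 0.2], [0.2, 1.0]]
K2 = [[1.0, 0.8], [0.8, 1.0]]
K = combine_kernels([K1, K2])
```

A convex combination of positive semi-definite Gram matrices is again a valid kernel, which is what lets the combined similarity be fed directly to a kernel classifier or fingerprint predictor.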
Attitude identification for SCOLE using two infrared cameras
NASA Technical Reports Server (NTRS)
Shenhar, Joram
1991-01-01
An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
Multi-level hot zone identification for pedestrian safety.
Lee, Jaeyoung; Abdel-Aty, Mohamed; Choi, Keechoo; Huang, Helai
2015-03-01
According to the National Highway Traffic Safety Administration (NHTSA), while fatalities from traffic crashes have decreased, the proportion of pedestrian fatalities has steadily increased from 11% to 14% over the past decade. This study aims at identifying hot zones at two zonal levels: the first comprises zones at which pedestrian crashes occur, while the second comprises zones from which crash-involved pedestrians came. A Bayesian Poisson lognormal simultaneous equation spatial error model (BPLSESEM) was estimated and revealed significant factors for the two target variables. Then, PSIs (potentials for safety improvement) were computed using the model. Subsequently, a novel hot zone identification method was suggested to combine hot zones from which vulnerable pedestrians originated with hot zones where many pedestrian crashes occur. For the former zones, targeted safety education and awareness campaigns can be provided as countermeasures, whereas area-wide engineering treatments and enforcement may be effective safety treatments for the latter. Thus, it is expected that practitioners will be able to suggest appropriate safety treatments for pedestrian crashes using the method and results from this study. Copyright © 2015 Elsevier Ltd. All rights reserved.
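The combination idea can be sketched with simple set logic, under the assumption that a zone counts as "hot" when its PSI is positive (the threshold and the toy PSI values are illustrative assumptions, not the study's estimates):

```python
def combined_hot_zones(psi_occurrence, psi_origin, threshold=0.0):
    """Label each hot zone by why it is hot: as a crash-occurrence zone
    (area-wide engineering/enforcement), as an origin zone of
    crash-involved pedestrians (education/awareness campaigns), or both.
    Inputs map zone id -> PSI (potential for safety improvement)."""
    occ = {z for z, p in psi_occurrence.items() if p > threshold}
    org = {z for z, p in psi_origin.items() if p > threshold}
    labels = {}
    for z in occ | org:
        if z in occ and z in org:
            labels[z] = "both"          # both countermeasure types apply
        elif z in occ:
            labels[z] = "occurrence"    # engineering treatments, enforcement
        else:
            labels[z] = "origin"        # targeted education campaigns
    return labels

labels = combined_hot_zones({"A": 1.2, "B": -0.5, "C": 0.3},
                            {"A": 0.1, "B": 0.9, "C": -1.0})
```

Keeping the two hot-zone lists distinct, rather than merging them into a single score, is what lets practitioners match each zone to the appropriate countermeasure.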
ERIC Educational Resources Information Center
Garmon, Linda
1981-01-01
Describes the features of various computer chemistry programs. Utilization of computer graphics, color, digital imaging, and other innovations are discussed in programs including those which aid in the identification of unknowns, predict whether chemical reactions are feasible, and predict the biological activity of xenobiotic compounds. (CS)
2012-09-30
computational tools provide the ability to display, browse, select, filter and summarize spatio-temporal relationships of these individual-based...her research assistant at Esri, Shaun Walbridge, and members of the Marine Mammal Institute ( MMI ), including Tomas Follet and Debbie Steel. This...Genomics Laboratory, MMI , OSU. 4 As part of the geneGIS initiative, these SPLASH photo-identification records and the geneSPLASH DNA profiles
2006-06-01
Hadjiiski, and N. Petrick, "Computerized nipple identification for multiple image analysis in computer-aided diagnosis," Medical Physics 31, 2871...candidates, 3 identification of suspicious objects, 4 feature extraction and analysis, and 5 FP reduc- tion by classification of normal tissue...detection of microcalcifi- cations on digitized mammograms.41 An illustration of a La- placian decomposition tree is shown on the left-hand side of Fig. 4
Computational Acoustic Beamforming for Noise Source Identification for Small Wind Turbines.
Ma, Ping; Lien, Fue-Sang; Yee, Eugene
2017-01-01
This paper develops a computational acoustic beamforming (CAB) methodology for the identification of noise sources of small wind turbines. The methodology is validated using the case of NACA 0012 airfoil trailing edge noise. For this validation case, the predicted acoustic maps were in excellent conformance with the results of measurements obtained from the acoustic beamforming experiment. Following this validation study, the CAB methodology was applied to the identification of noise sources generated by a commercial small wind turbine. The simulated acoustic maps revealed that the blade-tower interaction and the wind turbine nacelle were the two primary mechanisms of sound generation for this small wind turbine at frequencies between 100 and 630 Hz.
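The core delay-and-sum principle behind such acoustic source maps can be sketched for a narrowband linear array (a far-field, single-snapshot toy, not the paper's CAB pipeline; the array geometry, source angle, and frequency are assumptions):

```python
import cmath
import math

def doa_delay_and_sum(mic_x, snapshot, freq, c=343.0):
    """Scan candidate arrival angles, phase-align ("steer") the
    narrowband complex microphone samples for each angle, and return
    the angle whose summed output has maximum power."""
    k = 2 * math.pi * freq / c                     # acoustic wavenumber
    best_angle, best_power = 0, -1.0
    for ang in range(-90, 91):
        s = math.sin(math.radians(ang))
        y = sum(a * cmath.exp(-1j * k * x * s) for x, a in zip(mic_x, snapshot))
        if abs(y) ** 2 > best_power:
            best_angle, best_power = ang, abs(y) ** 2
    return best_angle

# Simulate a 1 kHz plane wave arriving from +30 degrees on an 8-microphone
# line array with 4 cm spacing (below half a wavelength, so no aliasing).
mic_x = [i * 0.04 for i in range(8)]
k_true = 2 * math.pi * 1000.0 / 343.0
snapshot = [cmath.exp(1j * k_true * x * math.sin(math.radians(30))) for x in mic_x]
est = doa_delay_and_sum(mic_x, snapshot, 1000.0)
```

Repeating this scan over a grid of focus points instead of angles, and over frequency bands, is what produces the two-dimensional acoustic maps discussed in the abstract.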
A Framework for People Re-Identification in Multi-Camera Surveillance Systems
ERIC Educational Resources Information Center
Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud
2017-01-01
People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…
Dynamic Identification for Control of Large Space Structures
NASA Technical Reports Server (NTRS)
Ibrahim, S. R.
1985-01-01
This is a compilation of reports by one author on one subject. It consists of the following five journal articles: (1) A Parametric Study of the Ibrahim Time Domain Modal Identification Algorithm; (2) Large Modal Survey Testing Using the Ibrahim Time Domain Identification Technique; (3) Computation of Normal Modes from Identified Complex Modes; (4) Dynamic Modeling of Structures from Measured Complex Modes; and (5) Time Domain Quasi-Linear Identification of Nonlinear Dynamic Systems.
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2013 CFR
2013-04-01
... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2012 CFR
2012-04-01
... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2014 CFR
2014-04-01
... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...
ERIC Educational Resources Information Center
Farrell, Albert D.; And Others
1987-01-01
Evaluated computer interview to standardize collection of target complaints. Adult outpatients (N=103) completed computer interview, unstructured intake interview, Symptoms Checklist-90, and Minnesota Multiphasic Personality Inventory. Results provided support for the computer interview in regard to reliability and validity though there was low…
Versatile analog pulse height computer performs real-time arithmetic operations
NASA Technical Reports Server (NTRS)
Brenner, R.; Strauss, M. G.
1967-01-01
Multipurpose analog pulse height computer performs real-time arithmetic operations on relatively fast pulses. This computer can be used for identification of charged particles, pulse shape discrimination, division of signals from position sensitive detectors, and other on-line data reduction techniques.
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1984-01-01
A prime obstacle to the widespread use of adaptive control is the degradation of performance and possible instability resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state space realizations from the inexact, multivariate transfer functions which result from the identification process. A number of potential adaptive control applications for this approach are illustrated using computer simulations. Results indicated that when speed of adaptation and plant stability are not critical, the proposed schemes converge to enhance system performance.
NASA Astrophysics Data System (ADS)
Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.
2016-10-01
Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in an image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we explore possible reasons for the algorithms' lessened ability to detect and identify features in X-ray radiographs.
Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z
2016-08-01
We designed a computer-based respiratory sound analysis system to identify pediatric normal lung sound. To verify the validity of the computer-based respiratory sound analysis system. First we downloaded the standard lung sounds from the network database (website: http: //www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal loud sound (rhonchi, wheeze and crackles) from three patients of The Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded such lung sounds as"reference lung sounds". The"test lung sounds"were recorded from 29 children form Kindergarten of Xiamen University. we recorded lung sound by portable electronic stethoscope and valid lung sounds were selected by manual identification. We introduced Mel-frequency cepstral coefficient (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 standard lung sounds, recorded 58 test lung sounds. This computer-based respiratory sound analysis system was carried out in 58 lung sound recognition, correct identification of 52 times, error identification 6 times. Accuracy was 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify healthy lung sounds of children (accuracy can reach 89.7%), fully embodies the reliability of the lung sounds analysis system.
Wilson, Karl A; Tan-Wilson, Anna
2013-01-01
Mass spectrometry (MS) has become an important tool in studying biological systems. One application is the identification of proteins and peptides by the matching of peptide and peptide fragment masses to the sequences of proteins in protein sequence databases. Often prior protein separation of complex protein mixtures by 2D-PAGE is needed, requiring more time and expertise than instructors of large laboratory classes can devote. We have developed an experimental module for our Biochemistry Laboratory course that engages students in MS-based protein identification following protein separation by one-dimensional SDS-PAGE, a technique that is usually taught in this type of course. The module is based on soybean seed storage proteins, a relatively simple mixture of proteins present in high levels in the seed, allowing the identification of the main protein bands by MS/MS and in some cases, even by peptide mass fingerprinting. Students can identify their protein bands using software available on the Internet, and are challenged to deduce post-translational modifications that have occurred upon germination. A collection of mass spectral data and tutorials that can be used as a stand-alone computer-based laboratory module were also assembled. Copyright © 2013 International Union of Biochemistry and Molecular Biology, Inc.
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. 
The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695
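The bi-level structure the paper formulates as a MIP can be shown on a deliberately tiny toy: the engineer (outer problem) picks deletions, the cell (inner problem) then maximizes its own growth, and the engineer is scored on product yield at the cell's optimum. The pathways, yields, and brute-force solver below are invented for illustration; real genome-scale models have thousands of reactions, which is precisely why MIP reformulations and the paper's solution techniques are needed.

```python
from itertools import combinations

# (growth yield, product yield) per alternative pathway.
PATHWAYS = {"P1": (1.0, 0.0),
            "P2": (0.7, 0.3),
            "P3": (0.4, 0.8),
            "P4": (0.0, 1.0)}   # no growth: a cell would not survive on it

def inner(knockouts):
    """Cell's problem: among surviving pathways, maximize growth."""
    alive = {p: y for p, y in PATHWAYS.items() if p not in knockouts}
    if not alive:
        return None
    best = max(alive.values())            # growth first, then product
    return best if best[0] > 0 else None  # require a viable (growing) cell

def outer(max_knockouts):
    """Engineer's problem: deletions maximizing product at the cell's optimum."""
    best = (0.0, frozenset())
    for k in range(max_knockouts + 1):
        for ko in combinations(PATHWAYS, k):
            res = inner(set(ko))
            if res is not None:
                best = max(best, (res[1], frozenset(ko)))
    return best

print(outer(2))  # two deletions force the cell onto the product-coupled P3
```

Enumeration like this explodes combinatorially with model size; encoding the inner optimality conditions inside a single MIP is what makes strategies with many simultaneous perturbations tractable.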
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
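A minimal synthetic-validity check in the spirit of the study can be run in a few lines: data are generated from a "true" single-predictor model, and a simpler intercept-only model competes against it. BIC stands in here for the Bayesian model selection the paper performs; all settings are illustrative, not the paper's.

```python
import math, random

random.seed(1)

def fit_bic(x, y, use_slope):
    """Least-squares fit, returning BIC (up to an additive constant)."""
    n = len(y)
    if use_slope:
        mx, my = sum(x) / n, sum(y) / n
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
        a = my - b * mx
        rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        k = 2
    else:
        a = sum(y) / n
        rss = sum((yi - a) ** 2 for yi in y)
        k = 1
    return n * math.log(rss / n) + k * math.log(n)

def winner(n_trials, noise_sd, slope=1.0):
    x = [random.gauss(0, 1) for _ in range(n_trials)]
    y = [slope * xi + random.gauss(0, noise_sd) for xi in x]
    return "simple" if fit_bic(x, y, False) < fit_bic(x, y, True) else "true"

print(winner(10, 10.0))   # few trials, heavy noise: the simple model tends to win
print(winner(500, 0.5))   # ample clean data: the generating model wins
```

This reproduces the study's qualitative point: with too few data points or too much measurement error, model selection favors the least complex model over the data-generating one.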
C-mii: a tool for plant miRNA and target identification.
Numnark, Somrak; Mhuantong, Wuttichai; Ingsriswang, Supawadee; Wichadakul, Duangdao
2012-01-01
MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and an understanding of the input and output of the connected programs to reproduce. To overcome these limitations and programming complexities, we propose C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. 
C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets. With the provided functionalities, it can help accelerate the study of plant miRNAs and targets, especially for small and medium plant molecular labs without bioinformaticians. C-mii is freely available at http://www.biotec.or.th/isl/c-mii for both Windows and Ubuntu Linux platforms.
Nanoscale Bio-engineering Solutions for Space Exploration: The Nanopore Sequencer
NASA Technical Reports Server (NTRS)
Stolc, Viktor; Cozmuta, Ioana
2004-01-01
Characterization of biological systems at the molecular level and extraction of essential information for nano-engineering design to guide the nano-fabrication of solid-state sensors and molecular identification devices is a computational challenge. The alpha hemolysin protein ion channel is used as a model system for structural analysis of nucleic acids like DNA. Applied voltage draws a DNA strand and surrounding ionic solution through the biological nanopore. The subunits in the DNA strand block ion flow by differing amounts. Atomistic scale simulations are employed using NASA supercomputers to study DNA translocation, with the aim to enhance single DNA subunit identification. Compared to protein channels, solid-state nanopores offer a better temporal control of the translocation of DNA and the possibility to easily tune its chemistry to increase the signal resolution. Potential applications for NASA missions, besides real-time genome sequencing include astronaut health, life detection and decoding of various genomes.
Nanoscale Bioengineering Solutions for Space Exploration: The Nanopore Sequencer
NASA Technical Reports Server (NTRS)
Cozmuta, Ioana; Stolc, Viktor
2005-01-01
Characterization of biological systems at the molecular level and extraction of essential information for nano-engineering design to guide the nano-fabrication of solid-state sensors and molecular identification devices is a computational challenge. The alpha hemolysin protein ion channel is used as a model system for structural analysis of nucleic acids like DNA. Applied voltage draws a DNA strand and surrounding ionic solution through the biological nanopore. The subunits in the DNA strand block ion flow by differing amounts. Atomistic scale simulations are employed using NASA supercomputers to study DNA translocation, with the aim to enhance single DNA subunit identification. Compared to protein channels, solid-state nanopores offer a better temporal control of the translocation of DNA and the possibility to easily tune its chemistry to increase the signal resolution. Potential applications for NASA missions, besides real-time genome sequencing include astronaut health, life detection and decoding of various genomes. http://phenomrph.arc.nasa.gov/index.php
Murugaiyan, Jayaseelan; Eravci, Murat; Weise, Christoph; Roesler, Uwe
2017-06-01
Here, we provide the dataset associated with our research article 'Label-free quantitative proteomic analysis of harmless and pathogenic strains of infectious microalgae, Prototheca spp.' (Murugaiyan et al., 2017) [1]. This dataset describes liquid chromatography-mass spectrometry (LC-MS)-based protein identification and quantification of a non-infectious strain, Prototheca zopfii genotype 1, and two strains associated with severe and mild infections, respectively, P. zopfii genotype 2 and Prototheca blaschkeae. Protein identification and label-free quantification were carried out by analysing MS raw data using the MaxQuant-Andromeda software suite. The expression-level differences of the identified proteins among the strains were computed using Perseus software, and the results are presented in [1]. This DiB provides the MaxQuant output file and raw data deposited in the PRIDE repository with the dataset identifier PXD005305.
miRNAFold: a web server for fast miRNA precursor prediction in genomes.
Tav, Christophe; Tempel, Sébastien; Poligny, Laurent; Tahi, Fariza
2016-07-08
Computational methods are required for the prediction of non-coding RNAs (ncRNAs), which are involved in many biological processes, especially at the post-transcriptional level. Among these ncRNAs, miRNAs have been widely studied, and biologists need efficient and fast tools for their identification. In particular, ab initio methods are usually required when predicting novel miRNAs. Here we present a web server dedicated to large-scale identification of miRNA precursors in genomes. It is based on an algorithm called miRNAFold that predicts miRNA hairpin structures quickly and with high sensitivity. miRNAFold is implemented as a web server with an intuitive and user-friendly interface, as well as a standalone version. The web server is freely available at: http://EvryRNA.ibisc.univ-evry.fr/miRNAFold. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
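At the core of any ab initio precursor search is a hairpin test: does the 5' arm of a candidate pair with its 3' arm? Real tools, miRNAFold included, score full secondary structures; the sketch below only counts Watson-Crick matches between the two arms and is purely illustrative.

```python
# Toy hairpin check: fraction of arm positions that form A-U or G-C pairs.
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def arm_pairing_fraction(seq, arm_len):
    """Fraction of positions where the 5' arm pairs with the 3' arm."""
    five = seq[:arm_len]
    three = seq[-arm_len:][::-1]          # read the 3' arm back toward the loop
    hits = sum(COMP[a] == b for a, b in zip(five, three))
    return hits / arm_len

def looks_like_hairpin(seq, arm_len=8, threshold=0.75):
    return arm_pairing_fraction(seq, arm_len) >= threshold

# A perfect stem: the last 8 bases reverse-complement the first 8.
stem = "GGCAUCGA" + "UUUU" + "UCGAUGCC"
print(looks_like_hairpin(stem))  # True
```

A genome-scale scanner slides such a test (plus free-energy and loop-size criteria) along the sequence, which is where the speed of the underlying algorithm matters.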
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: it can accurately determine the pollutant amounts or positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors no larger than 5%. For cases with multiple point sources and multiple unknowns, the computed results contain some error because many combinations of pollution sources are possible; however, with the help of previous experience to narrow the search scope, the relative errors of the identification results remain below 5%, which shows that the established source identification model can be used to direct emergency responses.
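The approach can be sketched end to end for the simplest case: a genetic-style search recovers the release mass from downstream concentration observations, with the analytic solution of the 1-D unsteady advection-dispersion equation as the forward model. The river parameters (u, D, A), the elitist search scheme, and all numbers below are illustrative, not the paper's.

```python
import math, random

random.seed(7)
U, D, A = 0.5, 10.0, 100.0   # velocity (m/s), dispersion (m^2/s), cross-section (m^2)

def concentration(M, x, t):
    """Instantaneous point release of mass M at x=0, t=0 (1-D analytic solution)."""
    return M / (A * math.sqrt(4 * math.pi * D * t)) \
        * math.exp(-(x - U * t) ** 2 / (4 * D * t))

# Synthetic observations at x = 500 m from a true release of 50 kg.
M_TRUE, X_OBS = 50.0, 500.0
TIMES = [600.0, 800.0, 1000.0, 1200.0]
OBS = [concentration(M_TRUE, X_OBS, t) for t in TIMES]

def misfit(M):
    """Objective function: squared mismatch between model and observations."""
    return sum((concentration(M, X_OBS, t) - c) ** 2
               for t, c in zip(TIMES, OBS))

def identify_mass(pop_size=10, generations=60):
    """Elitist genetic-style search: keep the best candidate, mutate around it."""
    pop = [random.uniform(0.0, 200.0) for _ in range(pop_size)]
    best = min(pop, key=misfit)
    sigma = 20.0
    for _ in range(generations):
        pop = [best] + [max(0.0, best + random.gauss(0.0, sigma))
                        for _ in range(pop_size - 1)]
        best = min(pop, key=misfit)
        sigma = max(0.5, sigma * 0.9)   # tighten the search each generation
    return best

M_hat = identify_mass()
print(abs(M_hat - M_TRUE) / M_TRUE)  # relative error, well under 5%
```

Position identification works the same way with (M, x0) as the chromosome; the combinatorial difficulty the abstract notes for multiple sources comes from the many source combinations that fit the same observations.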
Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford
2010-01-01
The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
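One of the graph transformations that makes protein posteriors tractable can be shown directly: proteins matched by exactly the same set of peptides are indistinguishable to the data, so they collapse into a single group node before inference. The peptide and protein names below are made up.

```python
from collections import defaultdict

def collapse_indistinguishable(protein_to_peptides):
    """Group proteins whose peptide evidence is identical."""
    groups = defaultdict(list)
    for protein, peptides in protein_to_peptides.items():
        groups[frozenset(peptides)].append(protein)
    return [sorted(ps) for ps in groups.values()]

matches = {"ALBU_HUMAN": {"pep1", "pep2"},
           "ALBU_BOVIN": {"pep1", "pep2"},   # degenerate: same evidence
           "TRFE_HUMAN": {"pep3"}}
print(sorted(collapse_indistinguishable(matches)))
# [['ALBU_BOVIN', 'ALBU_HUMAN'], ['TRFE_HUMAN']]
```

Collapsing such nodes (together with partitioning the bipartite peptide-protein graph into independent connected components) shrinks the state space over which posteriors must be computed, which is what makes exact computation feasible on large data sets.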
Fault tolerance in computational grids: perspectives, challenges, and issues.
Haider, Sajjad; Nazir, Babar
2016-01-01
Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is an extended classification of the problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related faults, have been identified that need to be handled at various layers of the computational grid. This survey also analyzes and examines fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.
ERIC Educational Resources Information Center
Redhair, Emily
2011-01-01
This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…
ERIC Educational Resources Information Center
Redhair, Emily I.; McCoy, Kathleen M.; Zucker, Stanley H.; Mathur, Sarup R.; Caterino, Linda
2013-01-01
This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…
A Computer Program which Uses an Expert Systems Approach to Identifying Minerals.
ERIC Educational Resources Information Center
Hart, Allan Bruce; And Others
1988-01-01
Described is a mineral identification program which uses a shell system for creating expert systems of a classification nature. Discusses identification of minerals in hand specimens, thin sections, and polished sections of rocks. (Author/CW)
Kourdioukova, Elena V; Verstraete, Koenraad L; Valcke, Martin
2011-06-01
The aim of this research was to explore (1) clinical-years students' perceptions of radiology case-based learning within a computer supported collaborative learning (CSCL) setting, (2) an analysis of the collaborative learning process, and (3) the learning impact of collaborative work on the radiology cases. The first part of this study focuses on a more detailed analysis of a survey study about CSCL-based case-based learning, set up in the context of a broader radiology curriculum innovation. The second part centers on a qualitative and quantitative analysis of 52 online collaborative learning discussions from 5th-year and nearly graduating medical students. The collaborative work was based on 26 cases in musculoskeletal radiology. The analysis of perceptions about collaborative learning on radiology cases reflects a rather neutral attitude that does not differ significantly between students of different grade levels. Less advanced students are more positive about CSCL than final-year students. Outcome evaluation shows a significantly higher level of accuracy in identification of radiology key structures and in radiology diagnosis, as well as in linking radiological signs with available clinical information, in nearly graduated students. No significant differences between grade levels were found in accuracy of medical terminology use. Students appreciate computer supported collaborative learning settings when tackling radiology case-based learning. Scripted computer supported collaborative learning groups proved useful for both 5th- and 7th-year students in developing components of their radiology diagnostic approaches. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Flight-Time Identification of a UH-60A Helicopter and Slung Load
NASA Technical Reports Server (NTRS)
Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani
1998-01-01
This paper describes a flight test demonstration of a system for identifying the stability and handling qualities parameters of a helicopter-slung load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency domain analysis in the frequency range to 2 Hz, and focused on the longitudinal and lateral control axes, since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling qualities parameters, and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on control axis and load-sling combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.
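The stability margins mentioned above are read off a frequency response. As a minimal sketch of that computation, take a textbook loop transfer function L(s) = 1 / (s(s + 1)), find the gain crossover (|L| = 1), and report the phase margin. The plant is a classroom stand-in, not the identified UH-60A model.

```python
import math

def loop_gain_phase(w):
    """Magnitude and phase (degrees) of L(jw) = 1 / (jw (1 + jw))."""
    L = 1.0 / complex(0.0, w) / complex(1.0, w)
    return abs(L), math.degrees(math.atan2(L.imag, L.real))

def phase_margin(w_lo=1e-3, w_hi=1e3, iters=100):
    """Bisect (log-spaced) for |L(jw)| = 1; |L| is monotonically decreasing."""
    for _ in range(iters):
        w = math.sqrt(w_lo * w_hi)
        mag, _ = loop_gain_phase(w)
        if mag > 1.0:
            w_lo = w
        else:
            w_hi = w
    _, phase = loop_gain_phase(w_lo)
    return 180.0 + phase, w_lo      # margin above -180 degrees, crossover freq

pm, wc = phase_margin()
print(round(pm, 1), round(wc, 3))  # classic result: about 51.8 deg at 0.786 rad/s
```

In the flight-time system, the frequency response itself comes from spectral analysis of frequency-sweep inputs rather than from a known transfer function; the margin extraction step is the same.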
Computational Acoustic Beamforming for Noise Source Identification for Small Wind Turbines
Lien, Fue-Sang
2017-01-01
This paper develops a computational acoustic beamforming (CAB) methodology for identification of sources of small wind turbine noise. This methodology is validated using the case of the NACA 0012 airfoil trailing edge noise. For this validation case, the predicted acoustic maps were in excellent conformance with the results of the measurements obtained from the acoustic beamforming experiment. Following this validation study, the CAB methodology was applied to the identification of noise sources generated by a commercial small wind turbine. The simulated acoustic maps revealed that the blade tower interaction and the wind turbine nacelle were the two primary mechanisms for sound generation for this small wind turbine at frequencies between 100 and 630 Hz. PMID:28378012
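The beamforming idea behind the acoustic maps can be shown with a minimal delay-and-sum sketch: a microphone array "listens" to a monochromatic point source, and a scan over candidate positions finds where the steered outputs add coherently. Geometry, frequency, and sound speed below are illustrative, not from the paper.

```python
import cmath, math

C, FREQ = 343.0, 500.0                     # speed of sound (m/s), tone (Hz)
K = 2 * math.pi * FREQ / C                 # wavenumber
MICS = [(x, 0.0) for x in (-0.3, -0.1, 0.1, 0.3)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mic_signals(src):
    """Complex amplitude at each mic for a unit source at `src`."""
    return [cmath.exp(-1j * K * dist(m, src)) for m in MICS]

def steered_power(signals, cand):
    """Delay-and-sum: undo each mic's expected phase, then add coherently."""
    out = sum(s * cmath.exp(1j * K * dist(m, cand))
              for s, m in zip(signals, MICS))
    return abs(out) ** 2

def localize(signals, grid):
    return max(grid, key=lambda p: steered_power(signals, p))

grid = [(x * 0.5, y * 0.5) for x in range(-4, 5) for y in range(1, 9)]
sig = mic_signals((1.0, 2.0))
print(localize(sig, grid))  # recovers the true source position (1.0, 2.0)
```

In the computational variant, the "microphone" signals come from a CFD/aeroacoustic simulation rather than an experiment, but the steering and power-map construction are the same.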
An automatic system to detect and extract texts in medical images for de-identification
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael
2010-03-01
Recently, there is an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for removing text from medical images. Many papers have described algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that text has a strong contrast with the background, a region-variance based algorithm is used to detect text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system was implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.
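The first stage described above can be sketched directly: slide a window over the image and flag high-variance regions as text candidates. The later stages (geometric filtering, level-set extraction) are omitted, and the synthetic "image", window size, and threshold are illustrative.

```python
# Region-variance text candidate detection on a toy grayscale image.
def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def text_candidate_windows(img, win=4, step=4, thresh=500.0):
    h, w = len(img), len(img[0])
    hits = []
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            block = [img[r + i][c + j] for i in range(win) for j in range(win)]
            if variance(block) > thresh:   # text: strong local contrast
                hits.append((r, c))
    return hits

# 8x12 synthetic image: flat background (value 20) with a high-contrast
# glyph-like checkerboard patch in the top-right 4x4 block.
img = [[20] * 12 for _ in range(8)]
for i in range(4):
    for j in range(8, 12):
        img[i][j] = 255 if (i + j) % 2 else 20
print(text_candidate_windows(img))  # only the patch window fires: [(0, 8)]
```

Burned-in annotations have exactly this signature (bright strokes on a locally flat background), which is why a variance map separates them from smoothly varying anatomy.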
Reshaping Computer Literacy Teaching in Higher Education: Identification of Critical Success Factors
ERIC Educational Resources Information Center
Taylor, Estelle; Goede, Roelien; Steyn, Tjaart
2011-01-01
Purpose: Acquiring computer skills is more important today than ever before, especially in a developing country. Teaching of computer skills, however, has to adapt to new technology. This paper aims to model factors influencing the success of the learning of computer literacy by means of an e-learning environment. The research question for this…
ERIC Educational Resources Information Center
Renumol, V. G.; Janakiram, Dharanipragada; Jayaprakash, S.
2010-01-01
Identifying the set of cognitive processes (CPs) a student can go through during computer programming is an interesting research problem. It can provide a better understanding of the human aspects in computer programming process and can also contribute to the computer programming education in general. The study identified the presence of a set of…
Barchuk, A A; Podolsky, M D; Tarakanov, S A; Kotsyuba, I Yu; Gaidukov, V S; Kuznetsov, V I; Merabishvili, V M; Barchuk, A S; Levchenko, E V; Filochkina, A V; Arseniev, A I
2015-01-01
This review article analyzes the literature devoted to the description, interpretation, and classification of focal (nodular) changes in the lungs detected by computed tomography of the chest. Possible criteria are discussed for determining their most likely character: primary and metastatic tumor processes, inflammation, scarring, autoimmune changes, tuberculosis, and others. Identification of the most characteristic, reliable, and statistically significant signs of the various pathological processes in the lungs, including the use of modern computer-aided detection and diagnosis of nodules, will optimize diagnostic measures and allow processing of a large volume of medical data in a short time.
When the ends outweigh the means: mood and level of identification in depression.
Watkins, Edward R; Moberly, Nicholas J; Moulds, Michelle L
2011-11-01
Research in healthy controls has found that mood influences cognitive processing via level of action identification: happy moods are associated with global and abstract processing; sad moods are associated with local and concrete processing. However, this pattern seems inconsistent with the high level of abstract processing observed in depressed patients, leading Watkins (2008, 2010) to hypothesise that the association between mood and level of goal/action identification is impaired in depression. We tested this hypothesis by measuring level of identification on the Behavioural Identification Form after happy and sad mood inductions in never-depressed controls and currently depressed patients. Participants used increasingly concrete action identifications as they became sadder and less happy, but this effect was moderated by depression status. Consistent with Watkins' (2008) hypothesis, increases in sad mood and decreases in happiness were associated with shifts towards the use of more concrete action identifications in never-depressed individuals, but not in depressed patients. These findings suggest that the putatively adaptive association between mood and level of identification is impaired in major depression.
Structure-sequence based analysis for identification of conserved regions in proteins
Zemla, Adam T; Zhou, Carol E; Lam, Marisa W; Smith, Jason R; Pardes, Elizabeth
2013-05-28
Disclosed are computational methods, and associated hardware and software products, for scoring conservation in a protein structure based on a computationally identified family or cluster of protein structures. A method of computationally identifying a family or cluster of protein structures is also disclosed herein.
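The patent abstract is terse, so here is a generic per-position conservation score of the kind such structure-sequence methods build on: low Shannon entropy across an alignment column means the position is conserved within the identified cluster. The toy alignment and scoring convention are invented, not taken from the disclosure.

```python
import math
from collections import Counter

def column_conservation(alignment):
    """Return 1 - normalized entropy for each alignment column."""
    n_rows = len(alignment)
    scores = []
    for c in range(len(alignment[0])):
        counts = Counter(row[c] for row in alignment)
        ent = -sum((k / n_rows) * math.log2(k / n_rows)
                   for k in counts.values())
        max_ent = math.log2(n_rows)   # entropy if every residue differed
        scores.append(1.0 - ent / max_ent)
    return scores

aln = ["HEAGAW",
       "HEAGCW",
       "HEPGAW",
       "HEAGSW"]
scores = column_conservation(aln)
print([round(s, 2) for s in scores])  # fully conserved columns score 1.0
```

Mapping such scores from a structure-based alignment back onto residues of a query structure is what produces a per-residue conservation coloring.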
48 CFR 252.204-7012 - Safeguarding of unclassified controlled technical information.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... Cyber incident means actions taken through the use of computer networks that result in an actual or... printed within an information system. Technical information means technical data or computer software, as..., catalog-item identifications, data sets, studies and analyses and related information, and computer...
ERIC Educational Resources Information Center
Ozen, Arzu; Ergenekon, Yasemin; Ulke-Kurkcuoglu, Burcu
2017-01-01
The current study investigated the relation between simultaneous prompting (SP), computer-assisted instruction (CAI), and the receptive identification of target pictures (presented on laptop computer) for four preschool students with developmental disabilities. The students' acquisition of nontarget information through observational learning also…
Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan
2016-01-01
Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010; and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) before the interventions, and 3 (0.0007%), 52 (0.0045%), and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92%, and 98%, respectively. Conclusions Accurate patient identification is a challenge to patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020
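The headline figures in this abstract can be reproduced directly from the raw counts it reports; a quick arithmetic check in plain Python (no data beyond the abstract's own numbers is assumed):

```python
# Sanity-check the reported error rates and relative reduction.
errors_2005, specimens_2005 = 1023, 2000345
errors_2014, specimens_2014 = 58, 3761238

rate_2005 = errors_2005 / specimens_2005   # ~0.0511 %
rate_2014 = errors_2014 / specimens_2014   # ~0.0015 %
relative_reduction = (rate_2005 - rate_2014) / rate_2005

print(f"{rate_2005:.4%} -> {rate_2014:.4%}, "
      f"relative reduction {relative_reduction:.0%}")
# prints "0.0511% -> 0.0015%, relative reduction 97%"
```

The computed rates and the 97% relative reduction match the figures stated in the abstract.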
Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones
NASA Astrophysics Data System (ADS)
Heinzen, Christina Carolyn
The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post-tests were administered to nine adult listeners, who completed two-to-three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification, and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training.
There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns for the across-category lexical tone contrast. Overall, the results support the use of IDS characteristics in training non-native speech contrasts and provide impetus for further research.
Game Theory Based Trust Model for Cloud Environment
Gokulnath, K.; Uthariaraj, Rhymend
2015-01-01
The aim of this work is to propose a method to establish trust at the bootload level in a cloud computing environment. This work proposes a game-theoretic approach for achieving trust at the bootload level from the perception of both resources and users. Nash equilibrium (NE) enhances the trust evaluation of first-time users and providers. It also restricts the service providers and the users from violating the service level agreement (SLA). Significantly, the problems of cold start and whitewashing are addressed by the proposed method. In addition, appropriate mapping of a cloud user's application to a cloud service provider for segregating trust levels is achieved. Thus, time complexity and space complexity are handled efficiently. Experiments were carried out to compare and contrast the performance of the conventional methods and the proposed method. Several metrics like execution time, accuracy, error identification, and undecidability of the resources were considered. PMID:26380365
Aromatario, Mariarosaria; Cappelletti, Simone; Bottoni, Edoardo; Fiore, Paola Antonella; Ciallella, Costantino
2016-01-01
An interesting case of homicide involving the use of a heavy glass ashtray is described. The victim, an 81-year-old woman, survived for a few days and died in hospital. The external examination of the victim showed extensive blunt and sharp facial injuries and defense injuries on both hands. The autopsy examination showed numerous lacerations on the face, as well as multiple fractures of the facial bones. A computed tomography scan, with 3D reconstruction, performed in hospital before death, was used to identify the weapon used for the crime. In recent years, new diagnostic tools such as computed tomography have been widely used, especially in cases involving sharp and blunt force. Computed tomography has proven to be very valuable in analyzing fractures of the cranial vault for forensic purposes; in particular, antemortem computed tomography with 3D reconstruction is becoming an important tool in the weapon identification process, owing to the possibility of comparing the shape of the object used to commit the crime with the injury and with the objects found during the investigations. No previous reports have described the use of this technique for weapon identification in cases of isolated facial fractures. We report a case in which, despite the correct use of this technique, it was not possible for the forensic pathologist to identify the weapon used to commit the crime. The authors wish to highlight the limitations encountered in the use of computed tomography with 3D reconstruction as a tool for weapon identification when facial fractures have occurred. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
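The full kernelized TARMA scheme above is involved, but the exponentially weighted recursive least squares recursion at its core is compact. A minimal NumPy sketch follows, tracking a single drifting regression coefficient; the forgetting factor `lam` and initialization `delta` are illustrative choices, not values from the paper, and no kernel expansion or moving-average terms are included:

```python
import numpy as np

def ewrls(X, y, lam=0.99, delta=100.0):
    """Exponentially weighted recursive least squares.

    Tracks time-varying coefficients w_t by minimizing the
    exponentially discounted squared prediction error.
    lam   -- forgetting factor in (0, 1]; smaller forgets faster
    delta -- initial inverse-correlation scale
    """
    n, d = X.shape
    w = np.zeros(d)
    P = delta * np.eye(d)               # inverse correlation estimate
    history = np.empty((n, d))
    for t in range(n):
        x = X[t]
        k = P @ x / (lam + x @ P @ x)   # gain vector
        e = y[t] - w @ x                # a priori prediction error
        w = w + k * e                   # coefficient update
        P = (P - np.outer(k, x @ P)) / lam
        history[t] = w
    return history

# Track a coefficient that drifts linearly over time.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 1))
true_w = np.linspace(1.0, 3.0, n)
y = true_w * X[:, 0] + 0.01 * rng.normal(size=n)
est = ewrls(X, y)
```

With `lam=0.99` the effective memory is roughly 100 samples, so the estimate follows the drift with a small lag; this is the trade-off the sliding-window variant in the abstract manages while keeping the per-update cost fixed.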
Sadeghi, Zahra; Testolin, Alberto
2017-08-01
In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
Wei, Q; Hu, Y
2009-01-01
The major hurdle for segmenting lung lobes in computed tomographic (CT) images is to identify fissure regions, which encase lobar fissures. Accurate identification of these regions is difficult due to the variable shape and appearance of the fissures, along with the low contrast and high noise associated with CT images. This paper studies the effectiveness of two texture analysis methods - the gray level co-occurrence matrix (GLCM) and the gray level run length matrix (GLRLM) - in identifying fissure regions from isotropic CT image stacks. To classify GLCM and GLRLM texture features, we applied a feed-forward back-propagation neural network and achieved the best classification accuracy utilizing 16 quantized levels for computing the GLCM and GLRLM texture features and 64 neurons in the input/hidden layers of the neural network. Tested on isotropic CT image stacks of 24 patients with the pathologic lungs, we obtained accuracies of 86% and 87% for identifying fissure regions using the GLCM and GLRLM methods, respectively. These accuracies compare favorably with surgeons/radiologists' accuracy of 80% for identifying fissure regions in clinical settings. This shows promising potential for segmenting lung lobes using the GLCM and GLRLM methods.
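The GLCM step described above is straightforward to sketch in plain NumPy: quantize to 16 gray levels, count co-occurrences at one pixel offset, and derive classic Haralick-style features. The neural-network classifier stage is omitted, and the offset and the two features shown are illustrative choices, not necessarily the ones used in the paper:

```python
import numpy as np

def glcm_features(img, levels=16, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset, plus
    two classic Haralick features (contrast, homogeneity).

    The image is quantized into `levels` gray levels, mirroring the
    16-level quantization reported in the abstract.
    """
    lo, hi = img.min(), img.max()
    # Quantize to integer bins in [0, levels-1].
    q = np.clip(((img - lo) / max(hi - lo, 1e-12) * levels).astype(int),
                0, levels - 1)

    glcm = np.zeros((levels, levels))
    h, w = q.shape
    # Paired views of the image shifted by (dy, dx).
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)  # count co-occurrences
    glcm /= glcm.sum()                              # joint probability

    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return glcm, contrast, homogeneity
```

A uniform patch yields zero contrast and homogeneity 1, while a left-to-right gray ramp yields unit contrast at offset (1, 0): each feature summarizes how often neighboring gray levels differ, which is the texture cue exploited for fissure-region identification.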
Systems toxicology: from basic research to risk assessment.
Sturla, Shana J; Boobis, Alan R; FitzGerald, Rex E; Hoeng, Julia; Kavlock, Robert J; Schirmer, Kristin; Whelan, Maurice; Wilks, Martin F; Peitsch, Manuel C
2014-03-17
Systems Toxicology is the integration of classical toxicology with quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization. Society demands increasingly close scrutiny of the potential health risks associated with exposure to chemicals present in our everyday life, leading to an increasing need for more predictive and accurate risk-assessment approaches. Developing such approaches requires a detailed mechanistic understanding of the ways in which xenobiotic substances perturb biological systems and lead to adverse outcomes. Thus, Systems Toxicology approaches offer modern strategies for gaining such mechanistic knowledge by combining advanced analytical and computational tools. Furthermore, Systems Toxicology is a means for the identification and application of biomarkers for improved safety assessments. In Systems Toxicology, quantitative systems-wide molecular changes in the context of an exposure are measured, and a causal chain of molecular events linking exposures with adverse outcomes (i.e., functional and apical end points) is deciphered. Mathematical models are then built to describe these processes in a quantitative manner. The integrated data analysis leads to the identification of how biological networks are perturbed by the exposure and enables the development of predictive mathematical models of toxicological processes. This perspective integrates current knowledge regarding bioanalytical approaches, computational analysis, and the potential for improved risk assessment. PMID:24446777
On the security of consumer wearable devices in the Internet of Things.
Tahir, Hasan; Tahir, Ruhma; McDonald-Maier, Klaus
2018-01-01
Miniaturization of computer hardware and the demand for network capable devices has resulted in the emergence of a new class of technology called wearable computing. Wearable devices have many purposes like lifestyle support, health monitoring, fitness monitoring, entertainment, industrial uses, and gaming. Wearable devices are hurriedly being marketed in an attempt to capture an emerging market. Owing to this, some devices do not adequately address the need for security. To enable virtualization and connectivity wearable devices sense and transmit data, therefore it is essential that the device, its data and the user are protected. In this paper the use of novel Integrated Circuit Metric (ICMetric) technology for the provision of security in wearable devices has been suggested. ICMetric technology uses the features of a device to generate an identification which is then used for the provision of cryptographic services. This paper explores how a device ICMetric can be generated by using the accelerometer and gyroscope sensor. Since wearable devices often operate in a group setting the work also focuses on generating a group identification which is then used to deliver services like authentication, confidentiality, secure admission and symmetric key generation. Experiment and simulation results prove that the scheme offers high levels of security without compromising on resource demands. PMID:29668756
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
Validation of Aircraft Noise Models at Lower Levels of Exposure
NASA Technical Reports Server (NTRS)
Page, Juliet A.; Plotkin, Kenneth J.; Carey, Jeffrey N.; Bradley, Kevin A.
1996-01-01
Noise levels around airports and airbases in the United States are computed via the FAA's Integrated Noise Model (INM) or the Air Force's NOISEMAP (NMAP) program. These models were originally developed for use in the vicinity of airports, at distances which encompass a day-night average sound level (Ldn) of 65 dB or higher. There is increasing interest in aircraft noise at larger distances from the airport, including en-route noise. To evaluate the applicability of INM and NMAP at larger distances, a measurement program was conducted at a major air carrier airport with monitoring sites located in areas exposed to an Ldn of 55 dB and higher. Automated Radar Terminal System (ARTS) radar tracking data were obtained to provide actual flight parameters and positive identification of aircraft. Flight operations were grouped according to aircraft type, stage length, straight versus curved flight tracks, and arrival versus departure. Sound exposure levels (SEL) were computed at monitoring locations, using the INM, and compared with measured values. While individual overflight SEL data were characterized by a high variance, analysis performed on an energy-averaging basis indicates that INM and similar models can be applied to regions exposed to an Ldn of 55 dB with no loss of reliability.
Identification and Addressing Reduction-Related Misconceptions
ERIC Educational Resources Information Center
Gal-Ezer, Judith; Trakhtenbrot, Mark
2016-01-01
Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract…
48 CFR 252.227-7017 - Identification and assertion of use, release, or disclosure restrictions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... and Computer Software—Small Business Innovation Research (SBIR) Program clause. (2) If a successful offeror will not be required to deliver technical data, the Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation clause, or, if this solicitation contemplates a contract under the...
48 CFR 252.227-7017 - Identification and assertion of use, release, or disclosure restrictions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... and Computer Software—Small Business Innovation Research (SBIR) Program clause. (2) If a successful offeror will not be required to deliver technical data, the Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation clause, or, if this solicitation contemplates a contract under the...
48 CFR 252.227-7017 - Identification and assertion of use, release, or disclosure restrictions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... and Computer Software—Small Business Innovative Research (SBIR) Program clause. (2) If a successful offeror will not be required to deliver technical data, the Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation clause, or, if this solicitation contemplates a contract under the...
48 CFR 252.227-7017 - Identification and assertion of use, release, or disclosure restrictions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... and Computer Software—Small Business Innovation Research (SBIR) Program clause. (2) If a successful offeror will not be required to deliver technical data, the Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation clause, or, if this solicitation contemplates a contract under the...
48 CFR 252.227-7017 - Identification and assertion of use, release, or disclosure restrictions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... and Computer Software—Small Business Innovation Research (SBIR) Program clause. (2) If a successful offeror will not be required to deliver technical data, the Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation clause, or, if this solicitation contemplates a contract under the...
Messmer, Bradley T; Raphael, Benjamin J; Aerni, Sarah J; Widhopf, George F; Rassenti, Laura Z; Gribben, John G; Kay, Neil E; Kipps, Thomas J
2009-03-01
The leukemia cells of unrelated patients with chronic lymphocytic leukemia (CLL) display a restricted repertoire of immunoglobulin (Ig) gene rearrangements with preferential usage of certain Ig gene segments. We developed a computational method to rigorously quantify biases in Ig sequence similarity in large patient databases and to identify groups of patients with unusual levels of sequence similarity. We applied our method to sequences from 1577 CLL patients through the CLL Research Consortium (CRC), and identified 67 similarity groups into which roughly 20% of all patients could be assigned. Immunoglobulin light chain class was highly correlated within all groups and light chain gene usage was similar within sets. Surprisingly, over 40% of the identified groups were composed of somatically mutated genes. This study significantly expands the evidence that antigen selection shapes the Ig repertoire in CLL. PMID:18640719
Price, J A
1998-08-01
An occasional but difficult problem arises in drug discovery during a chromatographic analysis in which high background activity is associated with the presence of most eluting molecular species. This makes the isolation of material of high relative activity difficult. A computational method is shown that clarifies the identification of regions of the chromatogram of interest. The data for bioactivity and absorbance are normalized to percent of maximal response, filtered to raise very small or zero values to a minimal level, and the activity/absorbance ratio is plotted per fraction. The fractions with relatively high activity become evident. This technique is a helpful adjunct to existing graphical methods and provides an objective relationship between the data sets. It is simple to implement with Visual Basic and spreadsheet data, making it widely accessible.
INfORM: Inference of NetwOrk Response Modules.
Marwah, Veer Singh; Kinaret, Pia Anneli Sofia; Serra, Angela; Scala, Giovanni; Lauerma, Antti; Fortino, Vittorio; Greco, Dario
2018-06-15
Detecting and interpreting responsive modules from gene expression data by using network-based approaches is a common but laborious task. It often requires the application of several computational methods implemented in different software packages, forcing biologists to compile complex analytical pipelines. Here we introduce INfORM (Inference of NetwOrk Response Modules), an R shiny application that enables non-expert users to detect, evaluate and select gene modules with high statistical and biological significance. INfORM is a comprehensive tool for the identification of biologically meaningful response modules from consensus gene networks inferred by using multiple algorithms. It is accessible through an intuitive graphical user interface allowing for a level of abstraction from the computational steps. INfORM is freely available for academic use at https://github.com/Greco-Lab/INfORM. Supplementary data are available at Bioinformatics online.
Pilot Study of a Point-of-use Decision Support Tool for Cancer Clinical Trials Eligibility
Breitfeld, Philip P.; Weisburd, Marina; Overhage, J. Marc; Sledge, George; Tierney, William M.
1999-01-01
Many adults with cancer are not enrolled in clinical trials because caregivers do not have the time to match the patient's clinical findings with varying eligibility criteria associated with multiple trials for which the patient might be eligible. The authors developed a point-of-use portable decision support tool (DS-TRIEL) to automate this matching process. The support tool consists of a hand-held computer with a programmable relational database. A two-level hierarchic decision framework was used for the identification of eligible subjects for two open breast cancer clinical trials. The hand-held computer also provides protocol consent forms and schemas to further help the busy oncologist. This decision support tool and the decision framework on which it is based could be used for multiple trials and different cancer sites. PMID:10579605
A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)
NASA Technical Reports Server (NTRS)
Rhew, Ray D.; Parker, Peter A.
2007-01-01
Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank primary contributors to integrated drag over the vehicle's ascent trajectory in an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. SMEs were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes a sequential approach to learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.
How Captain Amerika uses neural networks to fight crime
NASA Technical Reports Server (NTRS)
Rogers, Steven K.; Kabrisky, Matthew; Ruck, Dennis W.; Oxley, Mark E.
1994-01-01
Artificial neural network models can make amazing computations. These models are explained along with their application in problems associated with fighting crime. Specific problems addressed are identification of people using face recognition, speaker identification, and fingerprint and handwriting analysis (biometric authentication).
Summary of research in applied mathematics, numerical analysis, and computer sciences
NASA Technical Reports Server (NTRS)
1986-01-01
The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.
ERIC Educational Resources Information Center
Knoop, Patricia A.
The purpose of this report was to determine the research areas that appear most critical to achieving man-computer symbiosis. An operational definition of man-computer symbiosis was developed by: (1) reviewing and summarizing what others have said about it, and (2) attempting to distinguish it from other types of man-computer relationships. From…
Afshar, Majid; Press, Valerie G; Robison, Rachel G; Kho, Abel N; Bandi, Sindhura; Biswas, Ashvini; Avila, Pedro C; Kumar, Harsha Vardhan Madan; Yu, Byung; Naureckas, Edward T; Nyenhuis, Sharmilee M; Codispoti, Christopher D
2017-10-13
Comprehensive, rapid, and accurate identification of patients with asthma for clinical care and engagement in research efforts is needed. The original development and validation of a computable phenotype for asthma case identification occurred at a single institution in Chicago and demonstrated excellent test characteristics. However, its application in a diverse payer mix, across different health systems and multiple electronic health record vendors, and in both children and adults was not examined. The objective of this study is to externally validate the computable phenotype across diverse Chicago institutions to accurately identify pediatric and adult patients with asthma. A cohort of 900 asthma and control patients was identified from the electronic health record between January 1, 2012 and November 30, 2014. Two physicians at each site independently reviewed the patient chart to annotate cases. The inter-observer reliability between the physician reviewers had a κ-coefficient of 0.95 (95% CI 0.93-0.97). The accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of the computable phenotype were all above 94% in the full cohort. The excellent positive and negative predictive values in this multi-center external validation study establish a useful tool to identify asthma cases in the electronic health record for research and care. This computable phenotype could be used in large-scale comparative-effectiveness trials.
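The test characteristics reported here (sensitivity, specificity, predictive values, accuracy) and the κ-coefficient for inter-observer reliability can all be computed from 2x2 tables; a minimal sketch with made-up counts, not the study's data:

```python
# Sketch: validation metrics for a computable phenotype from a 2x2
# confusion table, plus Cohen's kappa for two-reviewer agreement.
# All counts in the demo calls are hypothetical.

def test_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from counts of
    true positives, false positives, false negatives, true negatives."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, ppv, npv, acc

def cohen_kappa(yy, yn, ny, nn):
    """Inter-observer kappa from a 2x2 agreement table:
    yy = both reviewers say case, nn = both say control, etc."""
    n = yy + yn + ny + nn
    po = (yy + nn) / n                            # observed agreement
    pe = ((yy + yn) / n) * ((yy + ny) / n) \
       + ((ny + nn) / n) * ((yn + nn) / n)        # chance agreement
    return (po - pe) / (1 - pe)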
Structure identification methods for atomistic simulations of crystalline materials
Stukowski, Alexander
2012-05-28
Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
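Of the techniques compared above, centrosymmetry analysis is among the simplest to illustrate: in a perfect centrosymmetric lattice (e.g. FCC or BCC) every neighbor vector has an opposite partner, so the sum of each cancelling pair is zero. The sketch below uses a simplified greedy pairing for illustration, not the article's optimized implementations:

```python
# Sketch: a greedy centrosymmetry parameter for one atom.
# For each neighbor vector, pair it with the remaining vector that best
# cancels it and accumulate |ri + rj|^2; zero means a perfectly
# centrosymmetric environment, large values suggest a defect or surface.
import numpy as np

def centrosymmetry(neighbor_vecs):
    """neighbor_vecs: vectors from the central atom to its neighbors."""
    vecs = [np.asarray(v, dtype=float) for v in neighbor_vecs]
    csp = 0.0
    while len(vecs) > 1:
        v = vecs.pop(0)
        sums = [float(np.sum((v + w) ** 2)) for w in vecs]
        j = int(np.argmin(sums))          # best-cancelling partner
        csp += sums[j]
        vecs.pop(j)
    return csp
```

A perfectly symmetric 2D environment such as `[(1,0), (-1,0), (0,1), (0,-1)]` gives 0, while perturbing one neighbor yields a small positive value; per-atom thresholds on this quantity are one way defective atoms are flagged in practice.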
77 FR 27202 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-09
... includes: Electronic Warfare Systems, Command, Control, Communication, Computers and Intelligence/Communication, Navigational and Identifications (C4I/CNI), Autonomic Logistics Global Support System (ALGS... Systems, Command, Control, Communication, Computers and Intelligence/Communication, Navigational and...
Rutty, Guy N; Barber, Jade; Amoroso, Jasmin; Morgan, Bruno; Graham, Eleanor A M
2013-12-01
Post-mortem computed tomography angiography (PMCTA) involves the injection of contrast agents. This could have both a dilution effect on biological fluid samples and could affect subsequent post-contrast analytical laboratory processes. We undertook a small sample study of 10 targeted and 10 whole body PMCTA cases to consider whether or not these two methods of PMCTA could affect post-PMCTA cadaver blood based DNA identification. We used standard methodology to examine DNA from blood samples obtained before and after the PMCTA procedure. We illustrate that neither of these PMCTA methods had an effect on the alleles called following short tandem repeat based DNA profiling, and therefore the ability to undertake post-PMCTA blood based DNA identification.
de Lusignan, Simon; Teasdale, Sheila
2007-01-01
Landmark reports suggest that sharing health data between clinical computer systems should improve patient safety and the quality of care. Enhancing the use of informatics in primary care is usually a key part of these strategies. To synthesise the learning from the international use of informatics in primary care. The workshop was attended by 21 delegates drawn from all continents. There were presentations from USA, UK and the Netherlands, and informal updates from Australia, Argentina, and Sweden and the Nordic countries. These presentations were discussed in a workshop setting to identify common issues. Key principles were synthesised through a post-workshop analysis and then sorted into themes. Themes emerged about the deployment of informatics which can be applied at health service, practice and individual clinical consultation level: (1) at the health service or provider level, success appeared proportional to the extent of collaboration between a broad range of stakeholders and the identification of leaders; (2) within the practice, much is currently being achieved with legacy computer systems and apparently outdated coding systems, including prescribing safety alerts, clinical audit and promoting computer data recording and quality; (3) in the consultation, the computer is a 'big player' and may make traditional models of the consultation redundant. We should make more efforts to share learning; develop clear internationally acceptable definitions; highlight gaps between pockets of excellence and real-world practice, and most importantly suggest how they might be bridged. Knowledge synthesis from different health systems may provide a greater understanding of how the third actor (the computer) is best used in primary care.
Wilaiprasitporn, Theerawit; Yagi, Tohru
2015-01-01
This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.
Identification of metabolic pathways using pathfinding approaches: a systematic review.
Abd Algfoor, Zeyad; Shahrizal Sunar, Mohd; Abdullah, Afnizanfaizal; Kolivand, Hoshang
2017-03-01
Metabolic pathways have become increasingly available for various microorganisms. Such pathways have spurred the development of a wide array of computational tools, in particular, mathematical pathfinding approaches. This article can facilitate the understanding of computational analysis of metabolic pathways in genomics. Moreover, stoichiometric and pathfinding approaches in metabolic pathway analysis are discussed. Three major types of studies are elaborated: stoichiometric identification models, pathway-based graph analysis and pathfinding approaches in cellular metabolism. Furthermore, evaluation of the outcomes of the pathways with mathematical benchmarking metrics is provided. This review would lead to better comprehension of metabolism behaviors in living cells, in terms of computed pathfinding approaches.
The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...
Postmortem computed tomography (PMCT) and disaster victim identification.
Brough, A L; Morgan, B; Rutty, G N
2015-09-01
Radiography has been used for identification since 1927, and established a role in mass fatality investigations in 1949. More recently, postmortem computed tomography (PMCT) has been used for disaster victim identification (DVI). PMCT offers several advantages compared with fluoroscopy, plain film and dental X-rays, including speed and a reduction in the number of on-site personnel and imaging modalities required, making it potentially more efficient. However, there are limitations that inhibit the international adoption of PMCT into routine practice. One particular problem is that, because forensic radiology is a relatively new sub-speciality, there are no internationally established standards for image acquisition, image interpretation and archiving. This is reflected by the current INTERPOL DVI form, which does not contain a PMCT section. The DVI working group of the International Society of Forensic Radiology and Imaging supports the use of imaging in mass fatality response and has published positional statements in this area. This review will discuss forensic radiology, PMCT, and its role in disaster victim identification.
Derrick, Sharon M; Raxter, Michelle H; Hipp, John A; Goel, Priya; Chan, Elaine F; Love, Jennifer C; Wiersema, Jason M; Akella, N Shastry
2015-01-01
Medical examiners and coroners (ME/C) in the United States hold statutory responsibility to identify deceased individuals who fall under their jurisdiction. The computer-assisted decedent identification (CADI) project was designed to modify software used in diagnosis and treatment of spinal injuries into a mathematically validated tool for ME/C identification of fleshed decedents. CADI software analyzes the shapes of targeted vertebral bodies imaged in an array of standard radiographs and quantifies the likelihood that any two of the radiographs contain matching vertebral bodies. Six validation tests measured the repeatability, reliability, and sensitivity of the method, and the effects of age, sex, and number of radiographs in array composition. CADI returned a 92-100% success rate in identifying the true matching pair of vertebrae within arrays of five to 30 radiographs. Further development of CADI is expected to produce a novel identification method for use in ME/C offices that is reliable, timely, and cost-effective.
Development of unauthorized airborne emission source identification procedure
NASA Astrophysics Data System (ADS)
Shtripling, L. O.; Bazhenov, V. V.; Varakina, N. S.; Kupriyanova, N. P.
2018-01-01
The paper presents a procedure for locating sources of unauthorized airborne emissions. To support sound regulatory decisions on airborne pollutant emissions and to ensure the environmental safety of the population, the procedure provides for the determination of the pollutant mass emission value from the source causing a high pollution level, and for the search for a previously unrecognized contamination source in a specified area. To determine the true value of mass emission from the source, the minimum of a root-mean-square mismatch criterion between the computed and measured pollutant concentrations at the given location is used.
Providing care for America's Army.
Webb, Joseph G; von Gonten, Ann Sue; Luciano, W John
2003-01-01
The Army Dental Corps' three-part mission is to maintain soldiers fit for combat, promote health, and ensure the Dental Corps' ability to deploy and deliver in the field. Consistent with this mission, the corps is developing innovative dental delivery systems and promoting tobacco cessation, sealants, mouth guard use, cancer detection, and identification of child, elder, and other abuse. The corps' training programs include options and benefits at the dental student, postdoctoral residency, and specialty levels. Recent technology innovations include light-weight field equipment, an integrated computer database to manage treatment, rapid ordering and delivery of supplies, and distance education.
Bhatt, Divesh; Zuckerman, Daniel M.
2010-01-01
We performed "weighted ensemble" path-sampling simulations of adenylate kinase, using several semi-atomistic protein models. The models have an all-atom backbone with various levels of residue interactions. The primary result is that fully statistically rigorous path sampling required only a few weeks of single-processor computing time with these models, indicating that the addition of further chemical detail should be readily feasible. Our semi-atomistic path ensembles are consistent with previous biophysical findings: the presence of two distinct pathways, identification of intermediates, and symmetry of forward and reverse pathways. PMID:21660120
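The weighted-ensemble strategy maintains a fixed number of trajectories ("walkers") per region of configuration space by splitting high-weight walkers and merging low-weight ones while conserving total probability weight, which is what makes the path sampling statistically rigorous. A minimal single-bin sketch, with hypothetical states and weights rather than the authors' code:

```python
# Sketch: split/merge resampling within one weighted-ensemble bin.
# Walkers are (state, weight) pairs; total weight is conserved exactly,
# and a merge keeps one of the two merged states with probability
# proportional to its weight, keeping the ensemble unbiased.
import random

def resample_bin(walkers, target):
    """Return a list of exactly `target` walkers with the same total weight."""
    walkers = sorted(walkers, key=lambda w: w[1])
    while len(walkers) > target:                  # merge the two lightest
        (s1, w1), (s2, w2) = walkers[0], walkers[1]
        keep = s1 if random.random() < w1 / (w1 + w2) else s2
        walkers = sorted([(keep, w1 + w2)] + walkers[2:], key=lambda w: w[1])
    while len(walkers) < target:                  # split the heaviest
        s, w = walkers.pop()
        walkers = sorted(walkers + [(s, w / 2.0), (s, w / 2.0)],
                         key=lambda w: w[1])
    return walkers

merged = resample_bin([("a", 0.5), ("b", 0.3), ("c", 0.2)], target=2)
```

In a full simulation this resampling step would run per bin after every dynamics increment; only the bookkeeping is shown here.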
Crop identification of SAR data using digital textural analysis
NASA Technical Reports Server (NTRS)
Nuesch, D. R.
1983-01-01
After preprocessing SEASAT SAR data, which included slant-to-ground range transformation, registration to LANDSAT MSS data and appropriate filtering of the raw SAR data to minimize coherent speckle, textural features were developed based upon the spatial gray level dependence method (SGLDM) to compute entropy and inertia as textural measures. It is indicated that the consideration of texture features is very important in SAR data analysis. The SEASAT SAR data are useful for the improvement of field boundary definitions and for an earlier-season estimate of corn and soybean area location than is supported by LANDSAT alone.
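The SGLDM measures named above come from a gray-level co-occurrence matrix: count how often gray level i co-occurs with gray level j at a fixed pixel offset, normalize, then reduce the matrix to scalars. A sketch with an illustrative offset and quantization, not the paper's exact parameters:

```python
# Sketch: SGLDM (gray-level co-occurrence) texture measures.
# Entropy is high for disordered textures; inertia (a.k.a. contrast)
# is high when distant gray levels frequently co-occur.
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized co-occurrence matrix for one pixel offset (dx, dy).
    `image` must already be quantized to integers in [0, levels)."""
    img = np.asarray(image)
    p = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            p[img[r, c], img[r + dy, c + dx]] += 1
    return p / p.sum()

def entropy(p):
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

def inertia(p):
    i, j = np.indices(p.shape)
    return float(np.sum(((i - j) ** 2) * p))
```

For example, a two-level checkerboard gives entropy 1 bit (only the pairs (0,1) and (1,0) occur, equally often) and inertia 1, while a constant image gives 0 for both.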
Human-Computer Interaction with Medical Decisions Support Systems
NASA Technical Reports Server (NTRS)
Adolf, Jurine A.; Holden, Kritina L.
1994-01-01
Decision Support Systems (DSSs) have been available to medical diagnosticians for some time, yet their acceptance and use have not increased with advances in technology and availability of DSS tools. Medical DSSs will be necessary on future long duration space missions, because access to medical resources and personnel will be limited. Human-Computer Interaction (HCI) experts at NASA's Human Factors and Ergonomics Laboratory (HFEL) have been working toward understanding how humans use DSSs, with the goal of being able to identify and solve the problems associated with these systems. Work to date consists of identification of HCI research areas, development of a decision making model, and completion of two experiments dealing with 'anchoring'. Anchoring is a phenomenon in which the decision maker latches on to a starting point and does not make sufficient adjustments when new data are presented. HFEL personnel have replicated a well-known anchoring experiment and have investigated the effects of user level of knowledge. Future work includes further experimentation on level of knowledge, confidence in the source of information and sequential decision making.
Automated texture-based identification of ovarian cancer in confocal microendoscope images
NASA Astrophysics Data System (ADS)
Srivastava, Saurabh; Rodriguez, Jeffrey J.; Rouse, Andrew R.; Brewer, Molly A.; Gmitro, Arthur F.
2005-03-01
The fluorescence confocal microendoscope provides high-resolution, in-vivo imaging of cellular pathology during optical biopsy. There are indications that the examination of human ovaries with this instrument has diagnostic implications for the early detection of ovarian cancer. The purpose of this study was to develop a computer-aided system to facilitate the identification of ovarian cancer from digital images captured with the confocal microendoscope system. To achieve this goal, we modeled the cellular-level structure present in these images as texture and extracted features based on first-order statistics, spatial gray-level dependence matrices, and spatial-frequency content. Selection of the best features for classification was performed using traditional feature selection techniques including stepwise discriminant analysis, forward sequential search, a non-parametric method, principal component analysis, and a heuristic technique that combines the results of these methods. The best set of features selected was used for classification, and performance of various machine classifiers was compared by analyzing the areas under their receiver operating characteristic curves. The results show that it is possible to automatically identify patients with ovarian cancer based on texture features extracted from confocal microendoscope images and that the machine performance is superior to that of the human observer.
A Novel Fiber Optic Based Surveillance System for Prevention of Pipeline Integrity Threats.
Tejedor, Javier; Macias-Guarasa, Javier; Martins, Hugo F; Piote, Daniel; Pastor-Graells, Juan; Martin-Lopez, Sonia; Corredera, Pedro; Gonzalez-Herraez, Miguel
2017-02-12
This paper presents a novel surveillance system aimed at the detection and classification of threats in the vicinity of a long gas pipeline. The sensing system is based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) technology for signal acquisition and pattern recognition strategies for threat identification. The proposal incorporates contextual information at the feature level and applies a system combination strategy for pattern classification. The contextual information at the feature level is based on the tandem approach (using feature representations produced by discriminatively-trained multi-layer perceptrons) by employing feature vectors that spread different temporal contexts. The system combination strategy is based on a posterior combination of likelihoods computed from different pattern classification processes. The system operates in two different modes: (1) machine + activity identification, which recognizes the activity being carried out by a certain machine, and (2) threat detection, aimed at detecting threats no matter what the real activity being conducted is. In comparison with a previous system based on the same rigorous experimental setup, the results show that the system combination from the contextual feature information improves the results for each individual class in both operational modes, as well as the overall classification accuracy, with statistically-significant improvements.
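A posterior combination of likelihoods from several classifiers is commonly done as a log-linear (weighted product) fusion followed by renormalization. The sketch below uses that generic rule with invented posteriors; it is an illustrative assumption, not the paper's exact formulation:

```python
# Sketch: log-linear fusion of per-classifier class posteriors.
# Multiplying posteriors (optionally weighted) and renormalizing
# rewards classes that all subsystems agree on.
import numpy as np

def combine_posteriors(posterior_lists, weights=None):
    """posterior_lists: (n_classifiers, n_classes) class posteriors.
    Returns the fused, normalized posterior over classes."""
    p = np.asarray(posterior_lists, dtype=float)
    w = np.ones(p.shape[0]) if weights is None else np.asarray(weights, float)
    logp = np.sum(w[:, None] * np.log(p + 1e-12), axis=0)  # weighted log-product
    fused = np.exp(logp - logp.max())                      # stable exponentiation
    return fused / fused.sum()

fused = combine_posteriors([[0.6, 0.4], [0.7, 0.3]])  # two hypothetical classifiers
```

With uniform weights this reduces to the product rule: the two hypothetical classifiers above fuse to approximately (0.78, 0.22) over the two classes.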
Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi
2014-01-01
Background and objective: While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Materials and methods: Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software 'R' by effectively combining secret-sharing-based secure computation with original computation. Results: Testing confirmed that our system could correctly complete computation of the average and unbiased variance of approximately 50 000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. Discussion: If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in an encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using 'R', which works interactively, while secure computation protocols generally require a significant amount of processing time. Conclusions: We propose a secure statistical analysis system using 'R' for medical data that effectively integrates secret-sharing-based secure computation and original computation. PMID:24763677
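The secret-sharing idea underlying such systems can be sketched with simple additive shares: each record is split so that no single party ever sees the value, yet sums (and hence averages) can be reconstructed from locally aggregated shares. This is a toy illustration of the principle, not the authors' protocol:

```python
# Sketch: additive secret sharing over a prime-sized field.
# Each value is split into n shares that individually look random;
# parties can add shares locally, so only aggregates are ever revealed.
import random

MODULUS = 2**61 - 1  # a Mersenne prime, chosen here for illustration

def share(value, n_parties=3, modulus=MODULUS):
    """Split an integer into n additive shares summing to value mod p."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=MODULUS):
    return sum(shares) % modulus

def secure_sum(records, n_parties=3, modulus=MODULUS):
    """Each record is shared; every party adds its shares locally and
    only the final total is reconstructed, never an individual record."""
    totals = [0] * n_parties
    for v in records:
        for i, s in enumerate(share(v, n_parties, modulus)):
            totals[i] = (totals[i] + s) % modulus
    return reconstruct(totals, modulus)
```

Dividing the securely computed sum (and sum of squares) by the record count then yields the average and variance, which is essentially the class of statistics the abstract reports computing in about a second.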
HOM identification by bead pulling in the Brookhaven ERL cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn H.; Calaga, R.; Jain, P.
2012-06-25
Several past measurements of the Brookhaven ERL at superconducting temperature produced a long list of higher order modes (HOMs). The Niobium 5-cell cavity is terminated with HOM ferrite dampers that successfully reduce the Q-factors to tolerable levels. However, a number of undamped resonances with Q ≥ 10^6 were found at 4 K and their mode identification remained as a goal for this paper. The approach taken here consists in taking different S21 measurements on a copper cavity replica of the ERL which can be compared with the actual data and also with Microwave Studio computer simulations. Several different S21 transmission measurements are used, including those taken from the fundamental input coupler to the pick-up probe across the cavity, between probes in a single cell, and between beam-position monitor probes in the beam tubes. Mode identification is supported by bead pulling with a metallic needle or a dielectric sphere that are calibrated in the fundamental mode. This paper presents results for HOMs in the first two dipole bands with the prototypical 958 MHz trapped mode, the lowest beam tube resonances, and high-Q modes in the first quadrupole band and beyond.
Kuehn, Ned F
2006-05-01
Chronic nasal disease is often a challenge to diagnose. Computed tomography greatly enhances the ability to diagnose chronic nasal disease in dogs and cats. Nasal computed tomography provides detailed information regarding the extent of disease, accurate discrimination of neoplastic versus nonneoplastic diseases, and identification of areas of the nose to examine rhinoscopically and suspicious regions to target for biopsy.
Highly Reproducible Label Free Quantitative Proteomic Analysis of RNA Polymerase Complexes*
Mosley, Amber L.; Sardiu, Mihaela E.; Pattenden, Samantha G.; Workman, Jerry L.; Florens, Laurence; Washburn, Michael P.
2011-01-01
The use of quantitative proteomics methods to study protein complexes has the potential to provide in-depth information on the abundance of different protein components as well as their modification state in various cellular conditions. To interrogate protein complex quantitation using shotgun proteomic methods, we have focused on the analysis of protein complexes using label-free multidimensional protein identification technology and studied the reproducibility of biological replicates. For these studies, we focused on three highly related and essential multi-protein enzymes, RNA polymerase I, II, and III from Saccharomyces cerevisiae. We found that label-free quantitation using spectral counting is highly reproducible at the protein and peptide level when analyzing RNA polymerase I, II, and III. In addition, we show that peptide sampling does not follow a random sampling model, and we show the need for advanced computational models to predict peptide detection probabilities. In order to address these issues, we used the APEX protocol to model the expected peptide detectability based on whole cell lysate acquired using the same multidimensional protein identification technology analysis used for the protein complexes. Neither method was able to predict the peptide sampling levels that we observed using replicate multidimensional protein identification technology analyses. In addition to the analysis of the RNA polymerase complexes, our analysis provides quantitative information about several RNAP-associated proteins, including the RNAPII elongation factor complexes DSIF and TFIIF. Our data show that DSIF and TFIIF are the most highly enriched RNAP accessory factors in Rpb3-TAP purifications and demonstrate our ability to measure low-level associated protein abundance across biological replicates. In addition, our quantitative data support a model in which DSIF and TFIIF interact with RNAPII in a dynamic fashion, in agreement with previously published reports.
PMID:21048197
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
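A least squares damping identification of the kind tested can be sketched for a single degree of freedom: with mass and stiffness known and free-vibration data available, the damping coefficient is the least squares solution of the equation of motion m·a + c·v + k·x = 0. The parameter values and the analytically generated data below are hypothetical:

```python
# Sketch: least squares identification of a viscous damping coefficient
# from free-vibration data of a 1-DOF oscillator with known m and k.
# The "measurements" come from the exact underdamped solution with c = 0.4.
import numpy as np

m, k, c_true = 1.0, 4.0, 0.4
sigma = c_true / (2 * m)                  # exponential decay rate
wd = np.sqrt(k / m - sigma**2)            # damped natural frequency

t = np.linspace(0.0, 10.0, 2000)
x = np.exp(-sigma * t) * np.cos(wd * t)                       # displacement
v = np.exp(-sigma * t) * (-sigma * np.cos(wd * t) - wd * np.sin(wd * t))
a = np.exp(-sigma * t) * ((sigma**2 - wd**2) * np.cos(wd * t)
                          + 2 * sigma * wd * np.sin(wd * t))  # acceleration

# m*a + c*v + k*x = 0  =>  solve  v * c = -(m*a + k*x)  in least squares
c_hat, *_ = np.linalg.lstsq(v[:, None], -(m * a + k * x), rcond=None)
```

For a model with N degrees of freedom the regressor column `v[:, None]` becomes a large matrix of velocity histories and the unknown a full damping matrix, which is where the memory and communication costs discussed in the abstract arise.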
Tracking by Identification Using Computer Vision and Radio
Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez
2013-01-01
We present a novel system for the detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds, excellent computer-vision-based localization and strong identity information provided by the radio system, and is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for the evaluation of systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion of both systems significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
Evolutionary Computation for the Identification of Emergent Behavior in Autonomous Systems
NASA Technical Reports Server (NTRS)
Terrile, Richard J.; Guillaume, Alexandre
2009-01-01
Over the past several years the Center for Evolutionary Computation and Automated Design at the Jet Propulsion Laboratory has developed a technique based on Evolutionary Computational Methods (ECM) that allows for the automated optimization of complex computationally modeled systems. An important application of this technique is for the identification of emergent behaviors in autonomous systems. Mobility platforms such as rovers or airborne vehicles are now being designed with autonomous mission controllers that can find trajectories over a solution space that is larger than can reasonably be tested. It is critical to identify control behaviors that are not predicted and can have surprising results (both good and bad). These emergent behaviors need to be identified, characterized and either incorporated into or isolated from the acceptable range of control characteristics. We use cluster analysis of automatically retrieved solutions to identify isolated populations of solutions with divergent behaviors.
Chavez, Juan D.; Bisson, William H.
2011-01-01
The site-specific identification of α-aminoadipic semialdehyde (AAS) and γ-glutamic semialdehyde (GGS) residues in proteins is reported. Semialdehydic protein modifications result from the metal-catalyzed oxidation of Lys or Arg and Pro residues, respectively. Most of the analytical methods for the analysis of protein carbonylation measure changes to the global level of carbonylation and fail to provide details regarding protein identity, site, and chemical nature of the carbonylation. In this work, we used a targeted approach, which combines chemical labeling, enrichment, and tandem mass spectrometric analysis, for the site-specific identification of AAS and GGS sites in proteins. The approach is applied to in vitro oxidized glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and an untreated biological sample, namely cardiac mitochondrial proteins. The analysis of GAPDH resulted in the site-specific identification of two AAS and four GGS residues. Computational evaluation of the identified AAS and GGS sites in GAPDH indicated that these sites are located in flexible regions, show high solvent accessibility values, and are in proximity to possible metal ion binding sites. The targeted proteomic analysis of semialdehydic modifications in cardiac mitochondria yielded nine AAS modification sites which were unambiguously assigned to distinct lysine residues in the following proteins: ATP/ATP translocase isoforms 1 and 2, ubiquinol cytochrome-c reductase core protein 2, and ATP synthase α-subunit. PMID:20957471
Chen, Y-J; Chen, S-K; Huang, H-W; Yao, C-C; Chang, H-F
2004-09-01
To compare cephalometric landmark identification on softcopy and hardcopy of direct digital cephalography acquired by a storage-phosphor (SP) imaging system. Ten digital cephalograms and their conventional counterparts, hardcopies on transparent blue film, were obtained by an SP imaging system and a dye sublimation printer. Twelve orthodontic residents identified 19 cephalometric landmarks on monitor-displayed SP digital images with a computer-aided method and on their hardcopies with the conventional method. The x- and y-coordinates of each landmark, indicating its horizontal and vertical position, were analysed to assess the reliability of landmark identification and to evaluate the concordance of the landmark locations in softcopy and hardcopy of SP digital cephalometric radiography. For each of the 19 landmarks, the location differences as well as their horizontal and vertical components were statistically significant between SP digital cephalometric radiography and its hardcopy. Smaller interobserver errors on SP digital images than on their hardcopies were noted for all landmarks except point Go in the vertical direction. The scatter-plots demonstrate the characteristic distribution of the interobserver error in both horizontal and vertical directions. Generally, the dispersion of interobserver error on SP digital cephalometric radiography is less than that on its hardcopy with the conventional method. SP digital cephalometric radiography can thus yield a level of performance in landmark identification better than or comparable to that of its hardcopy, except for point Go in the vertical direction.
Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio
2017-11-25
Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials, combined with the increasing amount of Next Generation Sequencing data aimed at identifying integration sites, requires highly accurate and efficient computational software able to correctly process "big data" in a reasonable computational time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis, with the following features: (1) sequence analysis for integration site processing that is fully compliant with paired-end reads and includes a sequence quality filter before and after alignment on the target genome; (2) a heuristic algorithm that reduces false-positive integration sites at the nucleotide level, limiting the impact of polymerase chain reaction and trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as the researcher front-end for performing integration site analyses without computational skills; (5) speedup of all steps through parallelization (Hadoop free). We tested VISPA2 performance using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with those of preexisting tools for integration site analysis. On the computational side, VISPA2 showed a >6-fold speedup and improved precision and recall metrics (1 and 0.97, respectively) compared with previously developed computational pipelines.
These performances indicate that VISPA2 is a fast, reliable, and user-friendly tool for integration site analysis, which allows gene therapy integration data to be handled in a cost- and time-effective fashion. Moreover, web access to VISPA2 ( http://openserver.itb.cnr.it/vispa/ ) ensures accessibility and ease of use of a complex analytical tool for researchers. We released the source code of VISPA2 in a public repository ( https://bitbucket.org/andreacalabria/vispa2 ).
High performance data acquisition, identification, and monitoring for active magnetic bearings
NASA Technical Reports Server (NTRS)
Herzog, Raoul; Siegwart, Roland
1994-01-01
Future active magnetic bearing (AMB) systems must feature easier on-site tuning, higher stiffness and damping, better robustness with respect to undesirable vibrations in housing and foundation, and enhanced monitoring and identification abilities. To get closer to these goals, we developed a fast parallel link from the digitally controlled AMB to Matlab, which runs on a host computer for data processing, identification, and controller layout. This enables the magnetic bearing to measure its own frequency responses without any additional measurement equipment. These measurements can then be used for AMB identification.
Research in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
Personal Identification by Keystroke Dynamics in Japanese Free Text Typing
NASA Astrophysics Data System (ADS)
Samura, Toshiharu; Nishimura, Haruhiko
Biometrics is classified into verification and identification. Much research on keystroke dynamics has treated the verification of a fixed short password used for user login. In this research, we focus on identification and investigate several characteristics of keystroke dynamics in Japanese free text typing. We developed Web-based typing software to collect keystroke data over a Local Area Network and performed experiments on a total of 112 subjects, from which three groups by typing level were constructed: beginner's level and above, normal level and above, and middle level and above. Based on identification methods using the weighted Euclidean distance and a neural network applied to feature indexes extracted from Japanese texts, we evaluated identification performance for the three groups. As a result, high accuracy of personal identification was confirmed for both methods, in proportion to the typing level of the group.
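The weighted-Euclidean-distance identification scheme above fits in a few lines. The profile vectors, weights, and feature choice (here imagined as mean key-press durations) are illustrative assumptions, not the paper's actual feature indexes for Japanese text.

```python
# Sketch of identification by weighted Euclidean distance: assign a typing
# sample to the enrolled user whose feature profile is nearest.
# Profiles, weights, and features are illustrative assumptions.
import math

def weighted_euclid(x, y, w):
    return math.sqrt(sum(wi * (xi - yi) ** 2
                         for xi, yi, wi in zip(x, y, w)))

def identify(sample, profiles, w):
    """Return the enrolled user whose profile is nearest to the sample."""
    return min(profiles, key=lambda user: weighted_euclid(sample, profiles[user], w))

# Hypothetical enrolled profiles (e.g. mean key-press durations in seconds)
profiles = {"alice": [0.12, 0.30, 0.25], "bob": [0.20, 0.45, 0.40]}
w = [1.0, 2.0, 1.0]  # per-feature weights
print(identify([0.13, 0.32, 0.26], profiles, w))  # nearest profile wins
```

A neural-network classifier, the paper's second method, would replace `identify` with a trained model over the same feature vectors.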
Kirchoff, Bruce K; Delaney, Peter F; Horton, Meg; Dellinger-Johnston, Rebecca
2014-01-01
Learning to identify organisms is extraordinarily difficult, yet trained field biologists can quickly and easily identify organisms at a glance. They do this without recourse to the use of traditional characters or identification devices. Achieving this type of recognition accuracy is a goal of many courses in plant systematics. Teaching plant identification is difficult because of variability in the plants' appearance, the difficulty of bringing them into the classroom, and the difficulty of taking students into the field. To solve these problems, we developed and tested a cognitive psychology-based computer program to teach plant identification. The program incorporates presentation of plant images in a homework-based, active-learning format that was developed to stimulate expert-level visual recognition. A controlled experimental test using a within-subject design was performed against traditional study methods in the context of a college course in plant systematics. Use of the program resulted in an 8-25% statistically significant improvement in final exam scores, depending on the type of identification question used (living plants, photographs, written descriptions). The software demonstrates how the use of routines to train perceptual expertise, interleaved examples, spaced repetition, and retrieval practice can be used to train identification of complex and highly variable objects. © 2014 B. K. Kirchoff et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Lachaud, Laurence; Fernández-Arévalo, Anna; Normand, Anne-Cécile; Lami, Patrick; Nabet, Cécile; Donnadieu, Jean Luc; Piarroux, Martine; Djenad, Farid; Cassagne, Carole; Ravel, Christophe; Tebar, Silvia; Llovet, Teresa; Blanchet, Denis; Demar, Magalie; Harrat, Zoubir; Aoun, Karim; Bastien, Patrick; Muñoz, Carmen; Gállego, Montserrat; Piarroux, Renaud
2017-10-01
Human leishmaniases are widespread diseases with different clinical forms caused by about 20 species within the Leishmania genus. Leishmania species identification is relevant for therapeutic management and prognosis, especially for cutaneous and mucocutaneous forms. Several methods are available to identify Leishmania species from culture, but they have not been standardized for the majority of the currently described species, with the exception of multilocus enzyme electrophoresis. Moreover, these techniques are expensive, time-consuming, and not available in all laboratories. Within the last decade, mass spectrometry (MS) has been adapted for the identification of microorganisms, including Leishmania. However, no commercial reference mass-spectral database is available. In this study, a reference mass-spectral library (MSL) for Leishmania isolates, accessible through a free Web-based application (mass-spectral identification [MSI]), was constructed and tested. It includes mass-spectral data for 33 different Leishmania species, including species that infect humans, animals, and phlebotomine vectors. Four laboratories on two continents evaluated the performance of MSI using 268 samples, 231 of which were Leishmania strains. All Leishmania strains but one were correctly identified at least to the complex level. A risk of species misidentification within the Leishmania donovani, L. guyanensis, and L. braziliensis complexes was observed, as previously reported for other techniques. The tested application was reliable, with identification results comparable to those obtained with reference methods but with a more favorable cost-efficiency ratio. This free online identification system relies on a scalable database and can be implemented directly on users' computers. Copyright © 2017 American Society for Microbiology.
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
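The time-domain regression step above can be illustrated with ordinary least squares for a one-parameter model, including the standard error of the slope that the extended approach compares. The model y = b0 + b1·x and the data are illustrative, not the paper's aerodynamic model structures.

```python
# Minimal sketch of time-domain parameter estimation by ordinary least
# squares, with the standard error of the slope estimate.  The linear
# one-parameter model and any data fed to it are illustrative.
import math

def fit_line(xs, ys):
    """Fit y = b0 + b1*x; return (b0, b1, standard error of b1)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    # residual-based standard error of b1 (needs n > 2)
    resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    se_b1 = math.sqrt(sum(e * e for e in resid) / (n - 2) / sxx)
    return b0, b1, se_b1
```

Comparing `b1` and `se_b1` between computational and experimental datasets is the spirit of the extension the abstract describes: model adequacy judged not only by time-history overlays but by parameter estimates and their uncertainties.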
Guided mass spectrum labelling in atom probe tomography.
Haley, D; Choi, P; Raabe, D
2015-12-01
Atom probe tomography (APT) is a valuable near-atomic scale imaging technique, which yields mass spectrographic data. Experimental correctness can often pivot on the identification of peaks within a dataset; this is a manual process in which subjectivity and errors can arise. The limitations of manual procedures complicate APT experiments for the operator and, furthermore, are a barrier to technique standardisation. In this work we explore the capabilities of computer-guided ranging to aid the identification and analysis of mass spectra. We propose a fully robust algorithm for enumerating the possible identities of detected peak positions, which assists labelling. Furthermore, a simple ranking scheme is developed to allow evaluation of the likelihood of each possible identity in the enumerated set being the correct assignment. We demonstrate a simple yet complete work-chain that allows for the conversion of mass spectra to fully identified APT spectra, with the goal of minimising identification errors and inter-operator variance within APT experiments. This work-chain is compared to current procedures via experimental trials with different APT operators, to determine the relative effectiveness and precision of the two approaches. It is found that there is little loss of precision (and occasionally gain) when participants are given computer assistance. We find that in either case, inter-operator precision for ranging varies between 0 and 2 "significant figures" (2σ confidence in the first n digits of the reported value) when reporting compositions. Intra-operator precision is weakly tested and found to vary between 1 and 3 significant figures, depending upon species composition levels. Finally, it is suggested that inconsistencies in inter-operator peak labelling may be the largest source of scatter when reporting composition data in APT. Copyright © 2015 Elsevier B.V. All rights reserved.
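The enumerate-then-rank idea can be sketched as follows. The three-entry isotope table, mass tolerance, charge states, and abundance-based ranking are illustrative assumptions, far simpler than a real APT identity search (which also considers molecular ions).

```python
# Sketch of enumerating candidate identities for a detected mass-spectrum
# peak and ranking them.  The isotope table (mass in Da, natural
# abundance) is a tiny illustrative subset.
ISOTOPES = {
    "Al-27": (26.9815, 1.00),
    "Fe-56": (55.9349, 0.9175),
    "Cr-52": (51.9405, 0.8379),
}

def candidates(mass_to_charge, tol=0.05, charges=(1, 2, 3)):
    """All (isotope, charge, abundance) triples whose m/z falls within
    tol of the peak, ranked by natural abundance as a crude
    likelihood proxy."""
    hits = [(name, z, abund)
            for name, (m, abund) in ISOTOPES.items()
            for z in charges
            if abs(m / z - mass_to_charge) <= tol]
    return sorted(hits, key=lambda h: -h[2])
```

Note that a peak near 26.0 Da is matched not by any singly charged species in the table but by Cr-52 at charge state 2+, the kind of ambiguity that makes manual labelling error-prone.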
78 FR 23226 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-18
..., Communication, Computer and Intelligence/Communication, Navigational and Identification (C4I/CNI); Autonomic.../ integration, aircraft ferry and tanker support, support equipment, tools and test equipment, communication... aircraft equipment includes: Electronic Warfare Systems; Command, Control, Communication, Computer and...
Automated drug identification system
NASA Technical Reports Server (NTRS)
Campen, C. F., Jr.
1974-01-01
System speeds up analysis of blood and urine and is capable of identifying 100 commonly abused drugs. System includes computer that controls entire analytical process by ordering various steps in specific sequences. Computer processes data output and has readout of identified drugs.
Fourth NASA Workshop on Computational Control of Flexible Aerospace Systems, part 2
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1991-01-01
A collection of papers presented at the Fourth NASA Workshop on Computational Control of Flexible Aerospace Systems is given. The papers address modeling, systems identification, and control of flexible aircraft, spacecraft and robotic systems.
Introduction to bioinformatics.
Can, Tolga
2014-01-01
Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: collect statistics from biological data; build a computational model; solve a computational modeling problem; test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching for sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data, and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data are usually represented as matrices, and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs, and graph-theoretic approaches are used to solve associated problems such as the construction and analysis of large-scale networks.
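One classical sequence-analysis problem named above, global alignment, can be sketched with the Needleman-Wunsch dynamic-programming recurrence. The scoring scheme (+1 match, -1 mismatch, -2 gap) is an illustrative choice, and only the optimal score is computed, not the alignment itself.

```python
# Compact sketch of global (Needleman-Wunsch) sequence alignment scoring.
# Scores are illustrative: +1 match, -1 mismatch, -2 gap.
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    # first row: aligning a prefix of b against an empty string
    rows = [[j * gap for j in range(len(b) + 1)]]
    for i, ca in enumerate(a, 1):
        row = [i * gap]  # first column: empty prefix of b
        for j, cb in enumerate(b, 1):
            diag = rows[i - 1][j - 1] + (match if ca == cb else mismatch)
            row.append(max(diag,             # substitute/match
                           rows[i - 1][j] + gap,  # gap in b
                           row[j - 1] + gap))     # gap in a
        rows.append(row)
    return rows[-1][-1]
```

The same table, with traceback pointers, recovers the alignment itself; local alignment (Smith-Waterman) differs only in clamping scores at zero.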
Liao, Wei-Ching; Chuang, Min-Chieh; Ho, Ja-An Annie
2013-12-15
The genetically modified (GM) technique, one of the modern biomolecular engineering technologies, has been deemed a profitable strategy in the fight against global starvation. Yet rapid and reliable analytical methods to evaluate the quality and potential risk of the resulting GM products are lacking. We herein present a biomolecular analytical system constructed with distinct biochemical activities to expedite the computational detection of genetically modified organisms (GMOs). The computational mechanism provides an alternative to the complex procedures commonly involved in the screening of GMOs. Given that the bioanalytical system is capable of processing promoter, coding, and species genes, affirmative interpretations succeed in identifying a specified GM event in both electrochemical and optical fashions. The biomolecular computational assay exhibits a detection capability for genetically modified DNA below the sub-nanomolar level and is found to be interference-free under an abundant coexistence of non-GM DNA. This bioanalytical system, furthermore, operates in an array fashion, enabling multiplex screening of variable GM events. Such a biomolecular computational assay and biosensor holds great promise for rapid, cost-effective, and high-fidelity screening of GMOs. Copyright © 2013 Elsevier B.V. All rights reserved.
[Measurement of intracranial hematoma volume by personal computer].
DU, Wanping; Tan, Lihua; Zhai, Ning; Zhou, Shunke; Wang, Rui; Xue, Gongshi; Xiao, An
2011-01-01
To explore a method for intracranial hematoma volume measurement using a personal computer. Forty cases of various intracranial hematomas were measured by computed tomography with quantitative software and by a personal computer with Photoshop CS3 software, respectively. The data from the two methods were analyzed and compared. There was no difference between the data from the computed tomography and the personal computer (P>0.05). A personal computer with Photoshop CS3 software can measure the volume of various intracranial hematomas precisely, rapidly, and simply. It should be recommended in clinical medicolegal identification.
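The arithmetic underlying any pixel-count volume measurement, whichever software marks the hematoma pixels, is a slice summation (Cavalieri estimate). The pixel area and slice thickness below are illustrative values, not the study's scan parameters.

```python
# Sketch of the slice-summation (Cavalieri) volume estimate behind
# pixel-count methods: per-slice hematoma area = pixel count x pixel
# area; volume = sum of areas x slice thickness.
def hematoma_volume(pixel_counts, pixel_area_mm2, slice_thickness_mm):
    """Sum hematoma pixel counts per CT slice into a volume in mL."""
    volume_mm3 = sum(pixel_counts) * pixel_area_mm2 * slice_thickness_mm
    return volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

# e.g. three slices, 0.25 mm^2 pixels, 5 mm slice thickness
print(hematoma_volume([1200, 1800, 1000], 0.25, 5.0))  # -> 5.0 (mL)
```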
Identification of mechanisms responsible for adverse developmental effects is the first step in creating predictive toxicity models. Identification of putative mechanisms was performed by co-analyzing three datasets for the effects of ToxCast phase Ia and II chemicals: 1.In vitro...
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
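The bounding idea can be illustrated with a toy uniform distribution on the unit square: disjoint rectangles proven to lie inside the failure domain give a lower bound on the failure probability, and rectangles proven to lie inside the safe domain give an upper bound. The geometry and distribution below are illustrative assumptions, not the paper's Bernstein-expansion or sum-of-squares machinery for sizing the subsets.

```python
# Sketch of failure-probability bounding from subsets of readily
# computable probability.  Uniform distribution on the unit square and
# axis-aligned rectangles are illustrative; the rectangles within each
# list are assumed disjoint.
def rect_prob(rect):
    """Probability of an axis-aligned rectangle under a uniform
    distribution on [0,1]^2."""
    (x0, x1), (y0, y1) = rect
    return (x1 - x0) * (y1 - y0)

def failure_bounds(failure_rects, safe_rects):
    """Lower bound: mass proven to fail.  Upper bound: one minus the
    mass proven safe."""
    lower = sum(rect_prob(r) for r in failure_rects)
    upper = 1.0 - sum(rect_prob(r) for r in safe_rects)
    return lower, upper
```

Growing either family of subsets tightens its bound, which is why the bounds can be made arbitrarily tight, and changing the probability distribution only requires re-evaluating `rect_prob`, mirroring the desensitization the abstract highlights.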
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Chemical-Help Application for Classification and Identification of Stormwater Constituents
Granato, Gregory E.; Driskell, Timothy R.; Nunes, Catherine
2000-01-01
A computer application called Chemical Help was developed to facilitate review of reports for the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The application provides a tool to quickly find a proper classification for any constituent in the NDAMS review sheets. Chemical Help contents include the name of each water-quality property, constituent, or parameter, the section number within the NDAMS review sheet, the organizational levels within a classification hierarchy, the database number, and where appropriate, the chemical formula, the Chemical Abstract Service number, and a list of synonyms (for the organic chemicals). Therefore, Chemical Help provides information necessary to research available reference data for the water-quality properties and constituents of potential interest in stormwater studies. Chemical Help is implemented in the Microsoft help-system interface. (Computer files for the use and documentation of Chemical Help are included on an accompanying diskette.)
Exploring biological interaction networks with tailored weighted quasi-bicliques
2012-01-01
Background Biological networks provide fundamental insights into the functional characterization of genes and their products, the characterization of DNA-protein interactions, the identification of regulatory mechanisms, and other biological tasks. Due to the experimental and biological complexity, their computational exploitation faces many algorithmic challenges. Results We introduce novel weighted quasi-biclique problems to identify functional modules in biological networks when represented by bipartite graphs. In contrast to previous quasi-biclique problems, we include biological interaction levels by using edge-weighted quasi-bicliques. While we prove that our problems are NP-hard, we also describe IP formulations to compute exact solutions for moderately sized networks. Conclusions We verify the effectiveness of our IP solutions using both simulation and empirical data. The simulation shows high quasi-biclique recall rates, and the empirical data corroborate the abilities of our weighted quasi-bicliques in extracting features and recovering missing interactions from biological networks. PMID:22759421
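The edge-weighted quasi-biclique objective can be illustrated by brute force on a tiny bipartite graph. The paper solves the (NP-hard) problems with IP formulations; the density criterion below, average edge weight over all vertex pairs, is one illustrative variant, and the graph is hypothetical.

```python
# Brute-force sketch of an edge-weighted quasi-biclique search on a tiny
# bipartite graph.  weights maps (u, v) -> interaction weight; a pair of
# vertex subsets (A, B) qualifies when its average edge weight over all
# |A|*|B| pairs reaches the density threshold.
from itertools import combinations

def best_quasi_biclique(U, V, weights, density):
    best, best_w = None, 0.0
    for ru in range(1, len(U) + 1):
        for A in combinations(U, ru):
            for rv in range(1, len(V) + 1):
                for B in combinations(V, rv):
                    w = sum(weights.get((u, v), 0.0) for u in A for v in B)
                    if w / (len(A) * len(B)) >= density and w > best_w:
                        best, best_w = (A, B), w
    return best, best_w

# Hypothetical gene-protein interactions
U, V = ["g1", "g2"], ["p1", "p2"]
weights = {("g1", "p1"): 1.0, ("g1", "p2"): 0.9, ("g2", "p1"): 0.8}
print(best_quasi_biclique(U, V, weights, 0.8))
```

Exhaustive enumeration is exponential in |U| + |V|, which is exactly why the paper resorts to integer programming for moderately sized networks.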
NASA Astrophysics Data System (ADS)
Meneghello, Gianluca; Beyhaghi, Pooriya; Bewley, Thomas
2016-11-01
The identification of an optimized hydrofoil shape depends on an accurate characterization of both its geometry and the incoming, turbulent, free-stream flow. We analyze this dependence using the computationally inexpensive vortex lattice model implemented in AVL, coupled with the recently developed global, derivative-free optimization algorithm implemented in Δ-DOGS. Particular attention is given to the effect of the free-stream turbulence level, as modeled by a change in the viscous drag coefficients, on the optimized values of the parameters describing the three-dimensional shape of the foil. Because the simplicity of AVL, when contrasted with more complex and computationally expensive LES or RANS models, may cast doubt on its usefulness, its validity and limitations are discussed by comparison with water tank measurements, again taking into account the effect of the uncertainty in the free-stream characterization.
Anastasia, Mario; Allevi, Pietro; Colombo, Raffaele; Giannini, Elios
2007-10-01
This paper demonstrates that the crystallization of 3beta-acetoxy-14alpha,15alpha-epoxy-5alpha-cholest-8-en-7-one from methanol affords 3beta-acetoxy-9alpha-methoxy-15alpha-hydroxycholest-8(14)-en-7-one. The structure of this steroid, which shows an apparently anomalous UV absorption maximum, was determined by high-field NMR experiments, with the coupling constant assignments and NOE contacts supported by a conformational study using theoretical calculations at the B3LYP/6-31G* level. The computational study also accounts for the observed UV absorption of the steroid, demonstrating the usefulness of computational chemistry in supporting the identification of unknown compounds.
NASA Technical Reports Server (NTRS)
Morozov, S. K.; Krasitskiy, O. P.
1978-01-01
A computational scheme and a standard program are proposed for solving systems of nonstationary, spatially one-dimensional, nonlinear differential equations using Newton's method. The proposed scheme is universal in its applicability and reduces the work of programming to a minimum. The program is written in FORTRAN and can be used without change on electronic computers of the YeS and BESM-6 types. The standard program described permits the identification of nonstationary (or stationary) solutions to systems of spatially one-dimensional nonlinear (or linear) partial differential equations. The proposed method may be used to solve a series of geophysical problems that take chemical reactions, diffusion, and heat conductivity into account, to evaluate nonstationary thermal fields in two-dimensional structures when one of the geometrical directions can take a small number of discrete levels, and to solve problems in nonstationary gas dynamics.
Send-side matching of data communications messages
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-06-17
Send-side matching of data communications messages in a distributed computing system comprising a plurality of compute nodes, including: issuing by a receiving node to source nodes a receive message that specifies receipt of a single message to be sent from any source node, the receive message including message matching information, a specification of a hardware-level mutual exclusion device, and an identification of a receive buffer; matching by two or more of the source nodes the receive message with pending send messages in the two or more source nodes; operating by one of the source nodes having a matching send message the mutual exclusion device, excluding messages from other source nodes with matching send messages and identifying to the receiving node the source node operating the mutual exclusion device; and sending to the receiving node from the source node operating the mutual exclusion device a matched pending message.
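The matching protocol can be sketched in a single process, with a shared lock standing in for the hardware-level mutual exclusion device. The class shape, tag-based matching, and message format are illustrative assumptions, not the patent's wire format.

```python
# Sketch of send-side matching: a receive posted to all sources is
# claimed by the first source with a matching pending send; a lock
# models the hardware-level mutual exclusion device.
import threading

class ReceiveRequest:
    """A receive for a single message, posted to all source nodes."""
    def __init__(self, tag):
        self.tag = tag                  # the message matching information
        self.mutex = threading.Lock()   # the "mutual exclusion device"
        self.winner = None
        self.payload = None             # stands in for the receive buffer

    def try_send(self, source, pending):
        """Called by each source node with its pending send queue.
        Only the first matching source acquires the lock and delivers;
        the lock is deliberately never released, since exactly one
        message is to be received."""
        for tag, payload in pending:
            if tag == self.tag and self.mutex.acquire(blocking=False):
                self.winner, self.payload = source, payload
                return True   # this source delivers; the rest are excluded
        return False
```

With `req = ReceiveRequest(tag=7)`, the first source whose queue holds a tag-7 send wins the lock and identifies itself as `req.winner`; later matching sources see the lock held and back off, which is the exclusion step the abstract describes.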
Gradual cut detection using low-level vision for digital video
NASA Astrophysics Data System (ADS)
Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae
1996-09-01
Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method to automatically locate cut points in order to separate shots in a video. Automatic cut detection to isolate shots in a video has received considerable attention due to many practical applications: video databases, browsing, authoring systems, retrieval, and movies. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames, but they could not detect special effects such as dissolve, wipe, fade-in, fade-out, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results applied to commercial video are then presented and evaluated.
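A twin-threshold scheme on per-frame grey-level histograms illustrates the difference mechanisms discussed above, including why gradual transitions need accumulation: a single large jump marks an abrupt cut, while a run of moderate jumps accumulating past the high threshold marks a dissolve or fade. The thresholds and the two-bin histograms in the example are illustrative, not the paper's method.

```python
# Sketch of twin-threshold cut detection on normalized grey-level
# histograms.  Thresholds low/high are illustrative assumptions.
def hist_diff(h1, h2):
    """L1 distance between two frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(hists, low=0.1, high=0.4):
    cuts, acc, start = [], 0.0, None
    for i in range(1, len(hists)):
        d = hist_diff(hists[i - 1], hists[i])
        if d >= high:                      # single large jump: abrupt cut
            cuts.append(("abrupt", i))
            acc, start = 0.0, None
        elif d >= low:                     # moderate jump: accumulate
            if start is None:
                start, acc = i, 0.0
            acc += d
            if acc >= high:                # accumulated change: gradual cut
                cuts.append(("gradual", start))
                acc, start = 0.0, None
        else:                              # quiet frame: reset
            acc, start = 0.0, None
    return cuts
```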
NASA Astrophysics Data System (ADS)
Kumarapperuma, Lakshitha; Premaratne, Malin; Jha, Pankaj K.; Stockman, Mark I.; Agrawal, Govind P.
2018-05-01
We demonstrate that it is possible to derive an approximate analytical expression characterizing the spasing (L-L) curve of a coherently enhanced spaser with 3-level gain-medium chromophores. The utility of this solution stems from the fact that it enables optimization over the large parameter space associated with spaser design, a functionality not offered by the methods currently available in the literature. This is vital for the advancement of spaser technology towards the level of device realization. Owing to the compact nature of the analytical expressions, our solution also facilitates the grouping and identification of the key processes responsible for spasing action, whilst providing significant physical insights. Furthermore, we show that our expression generates results within 0.1% error of numerically obtained results for pumping rates above the spasing threshold, thereby drastically reducing the computational cost associated with spaser design.
Dourado, Jules Carlos; Pereira, Júlio Leonardo Barbosa; Albuquerque, Lucas Alverne Freitas de; Carvalho, Gervásio Teles Cardos de; Dias, Patrícia; Dias, Laura; Bicalho, Marcos; Magalhães, Pollyana; Dellaretti, Marcos
2015-08-01
Agreement in the interpretation of cranial computed tomography (CCT) between neurosurgeons and radiologists has rarely been studied. This study aimed to assess the rate of agreement in the interpretation of CCTs between neurosurgeons and a radiologist in an emergency department. 227 CCTs were independently analyzed by two neurosurgeons (NS1 and NS2) and a radiologist (RAD). The level of agreement in interpreting the examinations was studied. The Kappa values obtained between NS1 and NS2 and RAD indicated almost perfect and substantial agreement, respectively. The highest levels of agreement in evaluating abnormalities were observed in the identification of tumors, hydrocephalus, and intracranial hematomas. The worst levels of agreement were observed for leukoaraiosis and reduced brain volume. For diseases in which the emergency room procedure must be determined, agreement in the interpretation of CCTs between the radiologist and the neurosurgeons was satisfactory.
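The agreement statistic behind such comparisons, Cohen's kappa, is straightforward to compute: observed agreement corrected for the agreement expected by chance from each rater's marginal frequencies. The rating lists in the test are illustrative, not the study's data.

```python
# Cohen's kappa for two raters over categorical readings:
# kappa = (p_observed - p_expected) / (1 - p_expected).
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # observed agreement: fraction of cases rated identically
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)
```

On the conventional Landis-Koch scale, values above 0.8 are "almost perfect" and 0.61 to 0.80 "substantial", the bands the abstract refers to.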
The Bilingual Language Interaction Network for Comprehension of Speech*
Marian, Viorica
2013-01-01
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
ERIC Educational Resources Information Center
Seo, You-Jin; Woo, Honguk
2010-01-01
Critical user interface design features of computer-assisted instruction programs in mathematics for students with learning disabilities and corresponding implementation guidelines were identified in this study. Based on the identified features and guidelines, a multimedia computer-assisted instruction program, "Math Explorer", which delivers…
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Utility of Computational Methods to Identify the Apoptosis Machinery in Unicellular Eukaryotes
Durand, Pierre Marcel; Coetzer, Theresa Louise
2008-01-01
Apoptosis is the phenotypic result of an active, regulated process of self-destruction. Following various cellular insults, apoptosis has been demonstrated in numerous unicellular eukaryotes, but very little is known about the genes and proteins that initiate and execute this process in this group of organisms. A bioinformatic approach presents an array of powerful methods to direct investigators in the identification of the apoptosis machinery in protozoans. In this review, we discuss some of the available computational methods and illustrate how they may be applied using the identification of a Plasmodium falciparum metacaspase gene as an example. PMID:19812769
An Intelligent Tutoring System for Antibody Identification
Smith, Philip J.; Miller, Thomas E.; Fraser, Jane M.
1990-01-01
Empirical studies of medical technology students indicate that there is considerable need for additional skill development in performing tasks such as antibody identification. While this need is currently met by on-the-job training after employment, computer-based tutoring systems offer an alternative or supplemental problem-based learning environment that could be more cost effective. We have developed a prototype for such a tutoring system as part of a project to develop educational tools for the field of transfusion medicine. This system provides a microworld in which students can explore and solve cases, receiving assistance and tutoring from the computer as needed.
System identification from closed-loop data with known output feedback dynamics
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Horta, Lucas G.; Longman, Richard W.
1992-01-01
This paper presents a procedure for identifying an open-loop system while it is operating under closed-loop conditions. First, closed-loop excitation data are used to compute the system's open-loop and closed-loop Markov parameters. The Markov parameters, which are the pulse response samples, are then used to compute a state space representation of the open-loop system. Two closed-loop configurations are considered in this paper: the closed-loop system can have either a linear output feedback controller or a dynamic output feedback controller. Numerical examples are provided to illustrate the proposed closed-loop identification method.
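Recovering a state-space model from pulse-response (Markov) parameters is the core of the Eigensystem Realization Algorithm; the sketch below shows the standard Hankel-SVD construction for a SISO system. This is a generic textbook construction under assumed conventions, not the paper's exact procedure:

```python
import numpy as np

def era(markov, order):
    """Realize (A, B, C) from SISO Markov parameters Y_1, Y_2, ...
    markov[k] = C A^k B, i.e. markov[0] is the first pulse-response sample."""
    m = len(markov) // 2
    # Shifted block-Hankel matrices of pulse-response samples.
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :order], Vt[:order, :]
    Sr = np.diag(np.sqrt(s[:order]))     # balanced square-root splitting
    A = np.linalg.pinv(Sr) @ Ur.T @ H1 @ Vr.T @ np.linalg.pinv(Sr)
    B = (Sr @ Vr)[:, :1]                 # first column of Sr * Vr
    C = (Ur @ Sr)[:1, :]                 # first row of Ur * Sr
    return A, B, C
```

For the first-order system x[k+1] = 0.5 x[k] + u[k], y = x, the Markov parameters are 0.5^k, and ERA recovers the pole and the first sample exactly.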
Mass Conservation and Inference of Metabolic Networks from High-Throughput Mass Spectrometry Data
Bandaru, Pradeep; Bansal, Mukesh
2011-01-01
We present a step towards the metabolome-wide computational inference of cellular metabolic reaction networks from metabolic profiling data, such as mass spectrometry. The reconstruction is based on identification of irreducible statistical interactions among metabolite activities using the ARACNE reverse-engineering algorithm, and on constraining possible metabolic transformations to satisfy the conservation of mass. The resulting algorithms are validated on synthetic data from an abridged computational model of Escherichia coli metabolism. Precision rates upwards of 50% are routinely observed for identification of full metabolic reactions, and recalls upwards of 20% are also seen. PMID:21314454
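The mass-conservation constraint can be illustrated by checking that a candidate transformation balances elemental composition on both sides. A toy sketch (the parser handles only flat formulas such as C6H12O6, without parentheses; this is an illustration of the constraint, not the paper's algorithm):

```python
import re
from collections import Counter

def formula_counts(formula):
    """Parse a flat molecular formula like 'C6H12O6' into element counts."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(n or "1")
    return counts

def conserves_mass(reactants, products):
    """Keep a candidate transformation only if elemental composition
    is identical on the reactant and product sides."""
    total = lambda side: sum((formula_counts(f) for f in side), Counter())
    return total(reactants) == total(products)
```

Glucose splitting into two trioses balances; dropping one product breaks the balance and the candidate reaction would be discarded.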
Weaver, K. E.; Chaovalitwongse, W. A.; Novotny, E. J.; Poliakov, A.; Grabowski, T. G.; Ojemann, J. G.
2013-01-01
Successful resection of cortical tissue engendering seizure activity is efficacious for the treatment of refractory, focal epilepsy. The pre-operative localization of the seizure focus is therefore critical to yielding positive, post-operative outcomes. In the small proportion of focal epilepsy patients presenting with normal MRI, identification of the seizure focus is significantly more challenging. We examined the capacity of resting state functional MRI (rsfMRI) to identify the seizure focus in a group of four non-lesion, focal (NLF) epilepsy individuals. We predicted that computing patterns of local functional connectivity in and around the epileptogenic zone, combined with a specific reference to the corresponding region within the contralateral hemisphere, would reliably predict the location of the seizure focus. We first averaged voxel-wise regional homogeneity (ReHo) across regions of interest (ROIs) from a standardized, probabilistic atlas for each NLF subject as well as 16 age- and gender-matched controls. To examine contralateral effects, we computed a ratio of the mean pair-wise correlations of all voxels within a ROI with the corresponding contralateral region (IntraRegional Connectivity, IRC). For each subject, ROIs were ranked (from lowest to highest) on ReHo, IRC, and the mean of the two values. At the group level, we observed a significant decrease in the rank of the ROI harboring the seizure focus for the ReHo rankings as well as for the mean rank. At the individual level, the seizure focus ReHo rank was within the bottom 10% of ranked ROIs for all four NLF epilepsy patients, and for three of the four on the IRC rankings. However, when the two ranks were combined (averaging across the ReHo and IRC ranks and scalar values), the seizure focus ROI was either the lowest or second-lowest ranked ROI for three of the four epilepsy subjects.
This suggests that rsfMRI may serve as an adjunct pre-surgical tool, facilitating the identification of the seizure focus in focal epilepsy. PMID:23641233
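The rank-combination step described above, i.e. ranking each ROI on both metrics and flagging the ROI with the lowest mean rank as the seizure-focus candidate, can be sketched as follows; the ROI names and metric values are hypothetical:

```python
def combined_rank(reho, irc):
    """Rank ROIs on each metric (ascending) and sort by the mean of the
    two ranks; the first entry is the seizure-focus candidate."""
    def ranks(scores):
        order = sorted(scores, key=scores.get)          # lowest value first
        return {roi: r for r, roi in enumerate(order)}  # ROI -> rank index
    r1, r2 = ranks(reho), ranks(irc)
    return sorted(reho, key=lambda roi: (r1[roi] + r2[roi]) / 2)
```

An ROI that is lowest on both ReHo and IRC lands first in the combined ordering even if the raw values are on different scales.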
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
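Simple frame differencing, the first of the motion-detection tests mentioned, can be sketched in a few lines; the threshold value is an arbitrary choice for illustration:

```python
import numpy as np

def detect_motion(prev, curr, threshold=25):
    """Flag pixels whose intensity changed by more than `threshold`
    between two consecutive 8-bit grayscale frames."""
    # Widen to a signed type first so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > threshold
    return mask, int(mask.sum())        # per-pixel mask and changed-pixel count
```

A bright patch appearing in an otherwise static frame produces a mask covering exactly that patch; in practice the threshold must be tuned against noise, lighting changes, and the non-uniform backgrounds the paper describes.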
Innovative architectures for dense multi-microprocessor computers
NASA Technical Reports Server (NTRS)
Donaldson, Thomas; Doty, Karl; Engle, Steven W.; Larson, Robert E.; O'Reilly, John G.
1988-01-01
The results of a Phase I Small Business Innovative Research (SBIR) project performed for the NASA Langley Computational Structural Mechanics Group are described. The project resulted in the identification of a family of chordal-ring interconnection architectures with excellent potential to serve as the basis for new multimicroprocessor (MMP) computers. The paper presents examples of how computational algorithms from structural mechanics can be efficiently implemented on the chordal-ring architecture.
[Isolation and identification methods of enterobacteria group and its technological advancement].
Furuta, Itaru
2007-08-01
In the last half-century, isolation and identification methods for enterobacteria groups have been markedly improved by technological advances. Clinical microbiology testing has shifted over time from tube methods to commercial identification kits and automated identification. Tube methods are the original approach to the identification of enterobacteria groups and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as carbohydrate utilization, indole, methyl red, citrate, and urease tests. Commercial identification kits and automated instruments with computer-based analysis, the current methods, are also discussed; these provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.
Computational analysis of Ebolavirus data: prospects, promises and challenges.
Michaelis, Martin; Rossman, Jeremy S; Wass, Mark N
2016-08-15
The ongoing Ebola virus (also known as Zaire ebolavirus, a member of the genus Ebolavirus) outbreak in West Africa has so far resulted in >28,000 confirmed cases, compared with previous Ebolavirus outbreaks that affected at most a few hundred individuals. Hence, Ebolaviruses pose a much greater threat than we may have expected (or hoped). An improved understanding of the virus biology is essential to develop therapeutic and preventive measures and to be better prepared for future outbreaks caused by members of the genus. Computational investigations can complement wet-laboratory research on biosafety level 4 pathogens such as Ebolaviruses, for which wet experimental capacity is limited by the small number of appropriate containment laboratories. During the current West Africa outbreak, sequence data from many Ebola virus genomes became available, providing a rich resource for computational analysis. Here, we consider the studies that have already reported on the computational analysis of these data. A range of properties have been investigated, including Ebolavirus evolution and pathogenicity, prediction of microRNAs, and identification of Ebolavirus-specific signatures. However, the accuracy of the results remains to be confirmed by wet-laboratory experiments. Therefore, communication and exchange between computational and wet-laboratory researchers is necessary to make maximum use of computational analyses and to iteratively improve these approaches. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Computational Embryology and Predictive Toxicology of Cleft Palate
The capacity to model and simulate key events in developmental toxicity using computational systems biology and biological knowledge brings hazard identification across the vast landscape of untested environmental chemicals a step closer. In this context, we chose cleft palate as a model ...
40 CFR 144.7 - Identification of underground sources of drinking water and exempted aquifers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... lifetime of the GS project, as informed by computational modeling performed pursuant to § 146.84(c)(1), in... exemption is of sufficient size to account for any possible revisions to the computational model during...
40 CFR 144.7 - Identification of underground sources of drinking water and exempted aquifers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... lifetime of the GS project, as informed by computational modeling performed pursuant to § 146.84(c)(1), in... exemption is of sufficient size to account for any possible revisions to the computational model during...
40 CFR 144.7 - Identification of underground sources of drinking water and exempted aquifers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... lifetime of the GS project, as informed by computational modeling performed pursuant to § 146.84(c)(1), in... exemption is of sufficient size to account for any possible revisions to the computational model during...
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing the dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from the experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line-search optimization. For comparison, several benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
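A minimal Tabu Search, sketched here on a one-dimensional integer toy problem rather than the paper's nuclear-norm objective, shows the core mechanic: move to the best non-tabu neighbour each iteration while keeping recently visited points off-limits, so the search cannot immediately cycle back:

```python
def tabu_search(f, x0, steps=(-1, 1), iters=100, tabu_len=5):
    """Minimize f over the integers reachable from x0 via `steps`.
    The tabu list holds the last `tabu_len` visited points."""
    current = best = x0
    tabu = [x0]
    for _ in range(iters):
        neighbours = [current + s for s in steps if current + s not in tabu]
        if not neighbours:
            break                       # all moves forbidden: stop
        current = min(neighbours, key=f)  # best admissible move, even uphill
        tabu.append(current)
        tabu = tabu[-tabu_len:]         # forget moves older than tabu_len
        if f(current) < f(best):
            best = current
    return best
```

Note that the search accepts uphill moves when all downhill neighbours are tabu, which is what lets it escape local minima; the best-so-far point is tracked separately.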
Schellhas, Laura; Ostafin, Brian D; Palfai, Tibor P; de Jong, Peter J
2016-05-01
Cross-sectional and intervention research has shown that mindfulness is inversely associated with difficulties in controlling alcohol use. However, little is known about the mechanisms through which mindfulness is related to increased control over drinking. One potential mechanism consists of the way individuals represent their drinking behaviour. Action identification theory proposes that self-control of behaviour is improved by shifting from high-level representations regarding the meaning of a behaviour to lower-level representations regarding the "how-to" aspects of a behaviour. Because mindfulness involves present-moment awareness, it may help facilitate such shifts. We hypothesized that an inverse relation between mindfulness and dyscontrolled drinking would be partially accounted for by the way individuals mentally represent their drinking behaviour, i.e., reduced levels of high-level action identification and increased levels of low-level action identification. One hundred and twenty-five undergraduate psychology students completed self-report measures of mindful awareness, action identification of alcohol use, and difficulty in controlling alcohol use. Results supported the hypothesis that high-level action identification partially mediates the relation between mindfulness and dyscontrolled drinking, but did not support a mediating role for low-level action identification. These results suggest that mindfulness can improve self-control of alcohol use by changing the way we think about our drinking behaviour. Copyright © 2016. Published by Elsevier Ltd.
Xu, Tao; Wang, Fei; Guo, Qiang; Nie, Xiao-Qian; Huang, Ying-Ping; Chen, Jun
2014-04-01
The transfer characteristics of heavy metals and an evaluation of their potential risk were studied by determining heavy metal concentrations in soils from the water-level-fluctuation zone (altitude: 145-175 m) and the bank (altitude: 175-185 m) along the Xiangxi River, Three Gorges Reservoir area. Factor analysis-multiple linear regression (FA-MLR) was employed for heavy metal source identification and source apportionment. Results demonstrate that, during the exposed season, soil heavy metal concentrations in the water-level-fluctuation zone and the bank varied: in the water-level-fluctuation zone, concentrations decreased in shallow soil but increased in deep soil, whereas on the bank they decreased in both shallow and deep soil over the same period. According to the geoaccumulation index, the pollution extent of the heavy metals followed the order Cd > Pb > Cu > Cr; Cd is the primary pollutant. FA and FA-MLR reveal that in soils from the water-level-fluctuation zone, 75.60% of Pb originates from traffic, 62.03% of Cd is from agriculture, and 64.71% of Cu and 75.36% of Cr are from natural rock. In soils from the bank, 82.26% of Pb originates from traffic, 68.63% of Cd is from agriculture, and 65.72% of Cu and 69.33% of Cr are from natural rock. In conclusion, FA-MLR can successfully identify heavy metal sources and compute their apportionment, while also revealing the transfer characteristics. All this information can serve as a reference for heavy metal pollution control.
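The geoaccumulation index used for the pollution ranking above is Igeo = log2(Cn / (1.5 Bn)), where Cn is the measured concentration and Bn the geochemical background, with the factor 1.5 absorbing natural background variation. A sketch with assumed background values (illustrative only, not this study's regional baselines):

```python
import math

# Hypothetical background concentrations in mg/kg, for illustration only;
# a real assessment would use regional geochemical baseline data.
BACKGROUND = {"Cd": 0.2, "Pb": 26.0, "Cu": 24.0, "Cr": 66.0}

def igeo(metal, concentration):
    """Mueller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(concentration / (1.5 * BACKGROUND[metal]))
```

Igeo <= 0 is conventionally read as unpolluted, 0-1 as unpolluted to moderately polluted, and so on up the Mueller classes; ranking metals by Igeo reproduces orderings like the Cd > Pb > Cu > Cr reported here.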
Monterey Bay study. [analysis of Landsat 1 multispectral band scanner data
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Wade, L. C.
1975-01-01
The multispectral scanner capabilities of LANDSAT 1 were tested over California's Monterey Bay area and portions of the San Joaquin Valley. Using both computer aided and image interpretive processing techniques, the LANDSAT 1 data were analyzed to determine their potential application in terms of land use and agriculture. Utilizing LANDSAT 1 data, analysts were able to provide the identifications and areal extent of the individual land use categories ranging from very general to highly specific levels (e.g., from agricultural lands to specific field crop types and even the different stages of growth). It is shown that the LANDSAT system is useful in the identification of major crop species and the delineation of numerous land use categories on a global basis and that repeated surveillance would permit the monitoring of changes in seasonal growth characteristics of crops as well as the assessment of various cultivation practices with a minimum of onsite observation. The LANDSAT system is demonstrated to be useful in the planning and development of resource programs on earth.
TOKEN: Trustable Keystroke-Based Authentication for Web-Based Applications on Smartphones
NASA Astrophysics Data System (ADS)
Nauman, Mohammad; Ali, Tamleek
Smartphones are increasingly being used to store personal information as well as to access sensitive data from the Internet and the cloud. Establishment of the identity of a user requesting information from smartphones is a prerequisite for secure systems in such scenarios. In the past, keystroke-based user identification has been successfully deployed on production-level mobile devices to mitigate the risks associated with naïve username/password based authentication. However, these approaches have two major limitations: they are not applicable to services where authentication occurs outside the domain of the mobile device - such as web-based services; and they often overly tax the limited computational capabilities of mobile devices. In this paper, we propose a protocol for keystroke dynamics analysis which allows web-based applications to make use of remote attestation and delegated keystroke analysis. The end result is an efficient keystroke-based user identification mechanism that strengthens traditional password protected services while mitigating the risks of user profiling by collaborating malicious web services.
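A simplest-case keystroke-dynamics check builds a per-feature timing profile at enrollment and accepts an attempt only if every feature falls within a few standard deviations of the profile. This z-score scheme is an illustration of the general idea, not the remote-attestation protocol proposed in the paper:

```python
import statistics

def enroll(samples):
    """Build a per-feature (mean, stdev) profile from training timing
    vectors, e.g. key hold times in milliseconds."""
    features = range(len(samples[0]))
    return [(statistics.mean([s[k] for s in samples]),
             statistics.stdev([s[k] for s in samples])) for k in features]

def verify(profile, attempt, max_z=3.0):
    """Accept the attempt only if every timing feature lies within
    max_z standard deviations of the enrolled mean."""
    return all(abs(t - mu) <= max_z * sd
               for (mu, sd), t in zip(profile, attempt))
```

Timings close to the enrolled rhythm pass; a markedly different rhythm, as from another typist, is rejected. Production systems use richer features (digraph latencies, pressure) and stronger classifiers.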
Network Understanding of Herb Medicine via Rapid Identification of Ingredient-Target Interactions
NASA Astrophysics Data System (ADS)
Zhang, Hai-Ping; Pan, Jian-Bo; Zhang, Chi; Ji, Nan; Wang, Hao; Ji, Zhi-Liang
2014-01-01
Today, herb medicines have become a major source for the discovery of novel disease-fighting agents. However, many of them remain largely under-explored pharmacologically owing to the limitations of current experimental approaches. We therefore propose a computational framework in this study for network understanding of herb pharmacology via rapid identification of putative ingredient-target interactions at the human structural proteome level. A marketed anti-cancer herb medicine in China, Yadanzi (Brucea javanica), was chosen for a mechanistic study. A total of 7,119 ingredient-target interactions were identified for thirteen Yadanzi active ingredients. Among them, about 29.5% were estimated to have better binding affinity than their corresponding marketed drug-target interactions. Further bioinformatics analyses suggest that simultaneous manipulation of multiple proteins in the MAPK signaling pathway and the anti-apoptotic phosphorylation process may largely account for Yadanzi's efficacy against non-small cell lung cancers. In summary, our strategy provides an efficient yet economical solution for the systematic understanding of herbs' power.
Zhou, Yuchen; McGillick, Brian E.; Teng, Yu-Han Gary; ...
2016-07-18
Botulinum neurotoxins (BoNT) are among the most poisonous substances known, and of the 7 serotypes (A-G) identified thus far at least 4 can cause death in humans. Here, the goal of this work was identification of inhibitors that specifically target the light chain catalytic site of the highly pathogenic but lesser-studied E serotype (BoNT/E). Large-scale computational screening, employing the program DOCK, was used to perform atomic-level docking of 1.4 million small molecules to prioritize those making favorable interactions with the BoNT/E site. In particular, 'footprint similarity' (FPS) scoring was used to identify compounds that could potentially mimic features on the known substrate tetrapeptide RIME. Among 92 compounds purchased and experimentally tested, compound C562-1101 emerged as the most promising hit with an apparent IC 50 value three-fold more potent than that of the first reported BoNT/E small molecule inhibitor NSC-77053. Additional analysis showed the predicted binding pose of C562-1101 was geometrically and energetically stable over an ensemble of structures generated by molecular dynamic simulations and that many of the intended interactions seen with RIME were maintained. Finally, several analogs were also computationally designed and predicted to have further molecular mimicry, thereby demonstrating the potential utility of footprint-based scoring protocols to help guide hit refinement.
Augmentation of the space station module power management and distribution breadboard
NASA Technical Reports Server (NTRS)
Walls, Bryan; Hall, David K.; Lollar, Louis F.
1991-01-01
The space station module power management and distribution (SSM/PMAD) breadboard models power distribution and management, including scheduling, load prioritization, and a fault detection, identification, and recovery (FDIR) system within a Space Station Freedom habitation or laboratory module. This 120 VDC system is capable of distributing up to 30 kW of power among more than 25 loads. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level consists of fast, simple (from a computing standpoint) switchgear capable of quickly safing the system. At the next level are local load center processors (LLPs), which execute load scheduling, perform redundant switching, and shed loads that use more than their scheduled power. Above the LLPs are three cooperating artificial intelligence (AI) systems which manage load prioritization, load scheduling, load shedding, and fault recovery and management. Recent upgrades to hardware and modifications to software at both the LLP and AI system levels promise a drastic increase in speed, a significant increase in functionality and reliability, and potential for further examination of advanced automation techniques. The background, the SSM/PMAD, its interface to the Lewis Research Center test bed and the large autonomous spacecraft electrical power system, and future plans are discussed.
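Priority-based load shedding of the kind the LLPs perform can be sketched as a greedy selection under a power budget; the load names, priorities, and budget below are invented for illustration and are not from the SSM/PMAD configuration:

```python
def shed_loads(loads, budget_kw):
    """Keep loads highest-priority-first until the power budget is reached;
    everything that does not fit is shed. Each load is (name, priority, kW),
    with higher priority meaning 'keep longer'."""
    kept, total = [], 0.0
    for name, priority, kw in sorted(loads, key=lambda l: l[1], reverse=True):
        if total + kw <= budget_kw:
            kept.append(name)
            total += kw
    return kept
```

With a 30 kW budget, a mid-priority load that overflows the budget is shed while a smaller low-priority load that still fits is retained, which is the characteristic behaviour of greedy shedding.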
NASA Astrophysics Data System (ADS)
Lapierre, David; Alijah, Alexander; Kochanov, Roman; Kokoouline, Viatcheslav; Tyuterev, Vladimir
2016-10-01
Energies and lifetimes (widths) of vibrational states above the lowest dissociation limit of ¹⁶O₃ were determined using a previously developed efficient approach, which combines hyperspherical coordinates and a complex absorbing potential. The calculations are based on a recently computed potential energy surface of ozone determined with spectroscopic accuracy [Tyuterev et al., J. Chem. Phys. 139, 134307 (2013), 10.1063/1.4821638]. The effect of permutational symmetry on rovibrational dynamics and the density of resonance states in O₃ is discussed in detail. Correspondence between quantum numbers appropriate for short- and long-range parts of wave functions of the rovibrational continuum is established. It is shown, by symmetry arguments, that the allowed purely vibrational (J = 0) levels of ¹⁶O₃ and ¹⁸O₃, both made of bosons with zero nuclear spin, cannot dissociate on the ground-state potential energy surface. Energies and wave functions of bound states of the ozone isotopologue ¹⁶O₃ with rotational angular momentum J = 0 and 1 up to the dissociation threshold were also computed. For bound levels, good agreement with experimental energies is found: the rms deviation between observed and calculated vibrational energies is 1 cm⁻¹. Rotational constants were determined and used for a simple identification of vibrational modes of calculated levels.
Crijns, C P; Martens, A; Bergman, H-J; van der Veen, H; Duchateau, L; van Bree, H J J; Gielen, I M V L
2014-01-01
Computed tomography (CT) is increasingly accessible in equine referral hospitals. To document the level of agreement within and between radiography and CT in characterising equine distal limb fractures. Retrospective descriptive study. Images from horses that underwent radiographic and CT evaluation for suspected distal limb fractures were reviewed, including 27 horses and 3 negative controls. Using Cohen's kappa and weighted kappa analysis, the level of agreement among 4 observers for a predefined set of diagnostic characteristics for radiography and CT separately and for the level of agreement between the 2 imaging modalities were documented. Both CT and radiography had very good intramodality agreement in identifying fractures, but intermodality agreement was lower. There was good intermodality and intramodality agreement for anatomical localisation and the identification of fracture displacement. Agreement for articular involvement, fracture comminution and fracture fragment number was towards the lower limit of good agreement. There was poor to fair intermodality agreement regarding fracture orientation, fracture width and coalescing cracks; intramodality agreement was higher for CT than for radiography for these features. Further studies, including comparisons with surgical and/or post mortem findings, are required to determine the sensitivity and specificity of CT and radiography in the diagnosis and characterisation of equine distal limb fractures. © 2013 EVJ Ltd.
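Cohen's kappa, used above to measure intra- and intermodality agreement, corrects raw agreement for the agreement expected by chance. A minimal sketch (plain unweighted kappa for two raters; the observer calls below are hypothetical, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from each rater's marginal frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical fracture-displacement calls ("D" displaced, "N" not) by two observers
obs1 = ["D", "D", "N", "N", "D", "N", "D", "N"]
obs2 = ["D", "D", "N", "D", "D", "N", "N", "N"]
print(cohens_kappa(obs1, obs2))  # → 0.5 (raw agreement 0.75, chance 0.5)
```

The weighted variant used in the study additionally down-weights near-miss disagreements on ordinal scales; the chance-correction logic is the same.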
MALDI-TOF MS versus VITEK 2 ANC card for identification of anaerobic bacteria.
Li, Yang; Gu, Bing; Liu, Genyan; Xia, Wenying; Fan, Kun; Mei, Yaning; Huang, Peijun; Pan, Shiyang
2014-05-01
Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is an accurate, rapid and inexpensive technique that has initiated a revolution in the clinical microbiology laboratory for identification of pathogens. The Vitek 2 anaerobe and Corynebacterium (ANC) identification card is a newly developed method for identification of corynebacteria and anaerobic species. The aim of this study was to evaluate the effectiveness of the ANC card and MALDI-TOF MS techniques for identification of clinical anaerobic isolates. Five reference strains and a total of 50 clinical anaerobic isolates comprising ten genera and 14 species were identified and analyzed, by the ANC card with the Vitek 2 identification system and by Vitek MS with the version 2.0 database, respectively. 16S rRNA gene sequencing was used as the reference method for identification accuracy. The Vitek 2 ANC card and Vitek MS provided comparable results at the species level for the five reference strains. Of the 50 clinical strains, Vitek MS identified 46 (92%) to the species level and 47 (94%) to the genus level, with one (2%) low discrimination, two (4%) no identification and one (2%) misidentification. The Vitek 2 ANC card identified 43 strains (86%) correctly to the species level and 47 (94%) correctly to the genus level, with three (6%) low discrimination, three (6%) no identification and one (2%) misidentification. Both Vitek MS and the Vitek 2 ANC card can be used for accurate routine clinical anaerobe identification. Compared with the Vitek 2 ANC card, Vitek MS is easier, faster and more economical per test. The databases currently available for both systems should be updated and further developed to enhance performance.
[The laboratory of tomorrow. Particular reference to hematology].
Cazal, P
1985-01-01
A serious prediction can only be an extrapolation of recent developments. To be exact, the development has to continue in the same direction, which is only a probability. Probable development of hematological technology: Progress in methods. Development of new labelling methods: radio-elements, antibodies. Monoclonal antibodies. Progress in equipment: Cell counters and their adaptation to routine hemograms is a certainty. From analyzers: a promise that will perhaps become reality. Coagulometers: progress still to be made. Hemagglutination detectors and their application to grouping: good achievements, but the market is too limited. Computerization and automation: What form will the computerizing take? What will the computer do? Who will the computer control? What should the automatic analyzers be? Two current levels. Relationships between the automatic analysers and the computer. rapidity, fidelity and above all, reliability. Memory: large capacity and easy access. Disadvantages: conservatism and technical dependency. How can they be avoided? Development of the environment: Laboratory input: outside supplies, electricity, reagents, consumables. Samples and their identification. Output: distribution of results and communication problems. Centralization or decentralization? What will tomorrow's laboratory be? 3 hypotheses: optimistic, pessimistic, and balanced.
Ferrante, Michele; Blackwell, Kim T.; Migliore, Michele; Ascoli, Giorgio A.
2012-01-01
The identification and characterization of potential pharmacological targets in neurology and psychiatry is a fundamental problem at the intersection between medicinal chemistry and the neurosciences. Exciting new techniques in proteomics and genomics have fostered rapid progress, opening numerous questions as to the functional consequences of ligand binding at the systems level. Psycho- and neuro-active drugs typically work in nerve cells by affecting one or more aspects of electrophysiological activity. Thus, an integrated understanding of neuropharmacological agents requires bridging the gap between their molecular mechanisms and the biophysical determinants of neuronal function. Computational neuroscience and bioinformatics can play a major role in this functional connection. Robust quantitative models exist describing all major active membrane properties under endogenous and exogenous chemical control. These include voltage-dependent ionic channels (sodium, potassium, calcium, etc.), synaptic receptor channels (e.g. glutamatergic, GABAergic, cholinergic), and G protein coupled signaling pathways (protein kinases, phosphatases, and other enzymatic cascades). This brief review of neuromolecular medicine from the computational perspective provides compelling examples of how simulations can elucidate, explain, and predict the effect of chemical agonists, antagonists, and modulators in the nervous system. PMID:18855673
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2000-01-01
The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work both on conventional distributed executions as well as those on a computational grid.
Framework for Computer Assisted Instruction Courseware: A Case Study.
ERIC Educational Resources Information Center
Betlach, Judith A.
1987-01-01
Systematically investigates, defines, and organizes variables related to production of internally designed and implemented computer assisted instruction (CAI) courseware: special needs of users; costs; identification and definition of realistic training needs; CAI definition and design methodology; hardware and software requirements; and general…
DEVELOPMENT OF COMPUTATIONAL TOOLS FOR OPTIMAL IDENTIFICATION OF BIOLOGICAL NETWORKS
Following the theoretical analysis and computer simulations, the next step for the development of SNIP will be a proof-of-principle laboratory application. Specifically, we have obtained a synthetic transcriptional cascade (harbored in Escherichia coli...
Detection of intra-articular osteochondral bodies in the knee using computed arthrotomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sartoris, D.J.; Kursunoglu, S.; Pineda, C.
1985-05-01
A new technique using air arthrography followed by computed tomography enables identification of free osteocartilaginous fragments in the knee joint. Clinical examples with useful diagnostic information are presented, and potential pitfalls in the interpretation of this information are discussed.
44 CFR 65.7 - Floodway revisions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program IDENTIFICATION AND... described below: (i) The floodway analysis must be performed using the hydraulic computer model used to... output data from the original and modified computer models must be submitted. (5) Delineation of the...
Bytes and Bugs: Integrating Computer Programming with Bacteria Identification.
ERIC Educational Resources Information Center
Danciger, Michael
1986-01-01
By using a computer program to identify bacteria, students sharpen their analytical skills and gain familiarity with procedures used in laboratories outside the university. Although it is ideal for identifying a bacterium, the program can be adapted to many other disciplines. (Author)
NASA Astrophysics Data System (ADS)
Zhuang, Wei; Mountrakis, Giorgos
2014-09-01
Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploring full usage of the waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called the Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over Central NY. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested in five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition (GD), the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage is a result of the adopted multi-scale signal processing procedures that concentrate on local portions of the signal, as opposed to the Gaussian decomposition that uses a curve-fitting strategy applied to the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
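The multi-scale second-derivative idea can be illustrated on a toy waveform. The sketch below is hypothetical and far simpler than FICA: it keeps only samples that are local maxima of the smoothed signal at every scale, with a negative central-difference second derivative and sufficient amplitude, and takes the last surviving peak as the ground return.

```python
import math

def smooth(signal, window):
    """Centered moving average; window must be odd."""
    half = window // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def ground_peak(waveform, scales=(1, 3, 5), min_amp=0.1):
    """Intersect peak candidates across smoothing scales; ground = last peak."""
    candidates = None
    for w in scales:
        s = smooth(waveform, w)
        # local maxima with negative second derivative and enough amplitude
        peaks = {i for i in range(1, len(s) - 1)
                 if s[i] > s[i - 1] and s[i] > s[i + 1]
                 and s[i - 1] - 2 * s[i] + s[i + 1] < 0
                 and s[i] > min_amp}
        candidates = peaks if candidates is None else candidates & peaks
    return max(candidates)  # last (lowest) return is the ground

# Toy waveform: canopy return near sample 30, ground return near sample 70.
wf = [math.exp(-((i - 30) / 4.0) ** 2) + 0.8 * math.exp(-((i - 70) / 5.0) ** 2)
      for i in range(100)]
print(ground_peak(wf))  # → 70
```

Requiring a candidate to survive every scale is what suppresses noise-induced false peaks; FICA additionally clusters candidates with k-means rather than simply intersecting sets.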
Singh, Amit; Rhee, Kyung E; Brennan, Jesse J; Kuelbs, Cynthia; El-Kareh, Robert; Fisher, Erin S
2016-03-01
To increase parent/caregiver ability to correctly identify the attending in charge and define terminology of treatment team members (TTMs). We hypothesized that correct TTM identification would increase with use of an electronic communication tool. Secondary aims included assessing subjects' satisfaction with and trust of TTMs and interest in computer activities during hospitalization. Two similar groups of parents/legal guardians/primary caregivers of children admitted to the Pediatric Hospital Medicine teaching service with an unplanned first admission were surveyed before (Phase 1) and after (Phase 2) implementation of a novel electronic medical record (EMR)-based tool with names, photos, and definitions of TTMs. Physicians were also surveyed, but only during Phase 1. Surveys assessed TTM identification, satisfaction, trust, and computer use. More subjects in Phase 2 correctly identified attending physicians by name (71% vs. 28%, P < .001) and correctly defined the terms intern, resident, and attending (P ≤ .03) compared with Phase 1. Almost all subjects (>79%) and TTMs (>87%) reported that subjects' ability to identify TTMs moderately or strongly impacted satisfaction and trust. The majority of subjects expressed interest in using computers to understand TTMs in each phase. Subjects' ability to correctly identify attending physicians and define TTMs was significantly greater for those who used our tool. In our study, subjects reported that TTM identification impacted aspects of the TTM relationship, yet few could correctly identify TTMs before tool use. This pilot study showed early success in engaging subjects with the EMR in the hospital and suggests that families would engage in computer-based activities in this setting. Copyright © 2016 by the American Academy of Pediatrics.
Semiannual report, 1 April - 30 September 1991
NASA Technical Reports Server (NTRS)
1991-01-01
The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.
Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.
Pauling, Josch; Klipp, Edda
2016-12-22
Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner and simultaneously with a lot of structural detail. However, doing so may produce thousands of mass spectra in a single experiment which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics but there are many (combinatorial) challenges when it comes to structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside analytic, biochemistry, biomedical and biology communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.
Authentication of Radio Frequency Identification Devices Using Electronic Characteristics
ERIC Educational Resources Information Center
Chinnappa Gounder Periaswamy, Senthilkumar
2010-01-01
Radio frequency identification (RFID) tags are low-cost devices that are used to uniquely identify the objects to which they are attached. Due to the low cost and size that is driving the technology, a tag has limited computational capabilities and resources. This limitation makes the implementation of conventional security protocols to prevent…
Applied Computational Electromagnetics Society Journal. Volume 7, Number 1, Summer 1992
1992-01-01
previously-solved computational problem in electrical engineering, physics, or related fields of study. The technical activities promoted by this...in solution technique or in data input/output; identification of new applications for electromagnetics modeling codes and techniques; integration of...papers will represent the computational electromagnetics aspects of research in electrical engineering, physics, or related disciplines. However, papers
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization
Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong
2014-01-01
This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness. PMID:24592200
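The canonical ABC search that each HABC subpopulation runs can be sketched as follows. This is a generic textbook artificial bee colony on a sphere function, not the HABC code; colony size, limit, and iteration counts are illustrative.

```python
import random

def abc_minimize(f, dim, bounds, n_sources=10, limit=20, iters=200, seed=1):
    """Canonical artificial bee colony: employed, onlooker and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    vals = [f(x) for x in foods]
    trials = [0] * n_sources
    best = min(vals)                          # memorize best-so-far

    def try_neighbor(i):
        # Perturb one dimension relative to a random partner source.
        k = rng.randrange(n_sources)
        d = rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        v = f(cand)
        if v < vals[i]:                       # greedy selection
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):            # employed bees
            try_neighbor(i)
        fit = [1.0 / (1.0 + v) for v in vals] # onlooker bees: fitness-proportional
        for _ in range(n_sources):
            try_neighbor(rng.choices(range(n_sources), weights=fit)[0])
        for i in range(n_sources):            # scout bees replace exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(foods[i]), 0
        best = min(best, min(vals))
    return best

sphere = lambda x: sum(v * v for v in x)
best = abc_minimize(sphere, dim=5, bounds=(-5, 5))
print(best)  # a small value near the optimum at 0
```

HABC stacks several such colonies hierarchically, each optimizing a block of dimensions, and exchanges information between levels via crossover and mutation.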
High Excitation Rydberg Levels of Fe I from the ATMOS Solar Spectrum at 2.5 and 7 microns
NASA Technical Reports Server (NTRS)
Schoenfeld, W. G.; Chang, E. S.; Geller, M.; Johansson, S.; Nave, G.; Sauval, A. J.; Grevesse, N.
1995-01-01
The quadrupole-polarization theory has been applied to the 3d⁶4s(⁶D)4f and 5g subconfigurations of Fe I by a parametric fit, and the fitted parameters are used to predict levels in the 6g and 6h subconfigurations. Using the predicted values, we have computed the 4f-6g and 5g-6h transition arrays and made identifications in the ATMOS infrared solar spectrum. The newly identified 6g and 6h levels, based on ATMOS wavenumbers, are combined with the 5g levels and found to agree with the theoretical values with a root-mean-square deviation of 0.042 cm⁻¹. Our approach yields a polarizability of 28.07 a₀² and a quadrupole moment of 0.4360 ± 0.0010 ea₀² for Fe II, as well as an improved ionization potential of 63737.700 ± 0.010 cm⁻¹ for Fe I.
Sigma: Strain-level inference of genomes from metagenomic analysis for biosurveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Tae-Hyuk; Chai, Juanjuan; Pan, Chongle
Motivation: Metagenomic sequencing of clinical samples provides a promising technique for direct pathogen detection and characterization in biosurveillance. Taxonomic analysis at the strain level can be used to resolve serotypes of a pathogen in biosurveillance. Sigma was developed for strain-level identification and quantification of pathogens using their reference genomes based on metagenomic analysis. Results: Sigma provides not only accurate strain-level inferences, but also three unique capabilities: (i) Sigma quantifies the statistical uncertainty of its inferences, which includes hypothesis testing of identified genomes and confidence interval estimation of their relative abundances; (ii) Sigma enables strain variant calling by assigning metagenomic reads to their most likely reference genomes; and (iii) Sigma supports parallel computing for fast analysis of large datasets. In conclusion, the algorithm performance was evaluated using simulated mock communities and fecal samples with spike-in pathogen strains. Availability and Implementation: Sigma was implemented in C++ with source codes and binaries freely available at http://sigma.omicsbio.org.
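Capability (ii), assigning each metagenomic read to its most likely reference genome, can be sketched with a toy likelihood model. Assuming reads are already aligned and only match/mismatch counts matter (a simplification of Sigma's actual model; the alignment data below are hypothetical):

```python
import math

def read_log_likelihood(matches, mismatches, error_rate=0.05):
    """log P(read | genome) under an independent per-base error model."""
    return (matches * math.log(1 - error_rate)
            + mismatches * math.log(error_rate))

def assign_read(alignments):
    """alignments: {genome_name: (matches, mismatches)} for one read.
    Returns the genome under which the read is most likely."""
    return max(alignments, key=lambda g: read_log_likelihood(*alignments[g]))

# One 100 bp read aligned against two candidate strains.
alignments = {"strain_A": (100, 0),   # perfect match
              "strain_B": (97, 3)}    # three mismatches
print(assign_read(alignments))  # → strain_A
```

Aggregating such per-read likelihoods across a whole sample is what lets the strain-abundance estimates carry confidence intervals rather than point values alone.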
Kim, Young Mee; Kang, Seung Wan; Kim, Se Young
2014-04-01
This research was an empirical study designed to identify precursors and interaction effects related to nurses' patient identification behavior. A multilevel analysis methodology was used. A self-report survey was administered to registered nurses (RNs) of a university hospital in South Korea. Of the questionnaires, 1114 were analyzed. The individual-level factors that had a significantly positive association with patient identification behavior were person-organization value congruence, organizational commitment, occupational commitment, tenure at the hospital, and tenure at the unit. Significantly negative group-level precursors of patient identification behavior were burnout climate and the number of RNs. Two interaction effects of the person-organization value congruence climate were identified. The first was a group-level moderating effect in which the negative relationship between the number of RNs and patient identification behavior was weaker when the nursing unit's value congruence climate was high. The second was a cross-level moderating effect in which the positive relationship between tenure at the unit and patient identification behavior was weaker when value congruence climate was high. This study simultaneously tested both individual-level and group-level factors that potentially influence patient identification behavior and identified the moderating role of person-organization value congruence climate. Implications of these results are discussed.
Report: Unsupervised identification of malaria parasites using computer vision.
Khan, Najeed Ahmed; Pervaz, Hassan; Latif, Arsalan; Musharaff, Ayesha
2017-01-01
Malaria in humans is a serious and fatal tropical disease. This disease results from Anopheles mosquitoes that are infected by Plasmodium species. The clinical diagnosis of malaria based on history, symptoms and clinical findings must always be confirmed by laboratory diagnosis. Laboratory diagnosis of malaria involves identification of the malaria parasite or its antigens/products in the blood of the patient. Manual diagnosis of the malaria parasite by pathologists has proven cumbersome. Therefore, there is a need for automatic, efficient and accurate identification of the malaria parasite. In this paper, we propose a computer vision based approach to identify the malaria parasite from light microscopy images. This research deals with the challenges involved in the automatic detection of malaria parasite tissues. Our proposed method is pixel-based: we use K-means clustering (an unsupervised approach) to segment the images and identify malaria parasite tissues.
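The pixel-based K-means step can be sketched on scalar intensities. This toy example (hypothetical grayscale values, k = 2, pure Python rather than an image library) separates dark stained-parasite pixels from bright background:

```python
def kmeans_1d(values, k=2, iters=20):
    """Lloyd's algorithm on scalar intensities; returns (centers, labels)."""
    # Initialize centers at the extremes for k = 2.
    centers = [min(values), max(values)] if k == 2 else sorted(values)[:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Update step: mean of each cluster.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

# Grayscale intensities: stained parasite pixels are dark, background bright.
pixels = [0.18, 0.22, 0.20, 0.25, 0.81, 0.79, 0.85, 0.78, 0.83]
centers, labels = kmeans_1d(pixels)
print(labels)  # dark pixels fall in cluster 0, bright pixels in cluster 1
```

On real micrographs the same logic runs on color or texture feature vectors per pixel, and the dark cluster is post-processed to extract candidate parasite regions.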
Computational mass spectrometry for small molecules
2013-01-01
The identification of small molecules from mass spectrometry (MS) data remains a major challenge in the interpretation of MS data. This review covers the computational aspects of identifying small molecules, from the identification of a compound searching a reference spectral library, to the structural elucidation of unknowns. In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focussing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review shortly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline. PMID:23453222
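Molecular formula identification from an accurate mass, one step in the pipeline above, can be sketched as a bounded search over element counts. This is a toy version (CHNO only, a tight absolute tolerance instead of a ppm window, and no isotope-pattern filtering); the monoisotopic masses are standard values.

```python
# Monoisotopic masses (u) of the most abundant isotopes.
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def candidate_formulas(target, tol=0.0005, max_atoms=(10, 24, 5, 10)):
    """Enumerate CxHyNzOw formulas whose monoisotopic mass lies within
    tol of the target mass; element counts of zero are omitted."""
    hits = []
    cmax, hmax, nmax, omax = max_atoms
    for c in range(cmax + 1):
        for h in range(hmax + 1):
            for n in range(nmax + 1):
                for o in range(omax + 1):
                    m = (c * MASS["C"] + h * MASS["H"]
                         + n * MASS["N"] + o * MASS["O"])
                    if abs(m - target) <= tol:
                        hits.append("".join(f"{el}{cnt}" for el, cnt in
                                            zip("CHNO", (c, h, n, o)) if cnt))
    return hits

# Neutral monoisotopic mass of glucose, C6H12O6.
print(candidate_formulas(180.06339))  # → ['C6H12O6']
```

Real tools prune this combinatorial space with valence rules and rank the surviving formulas by how well their predicted isotope patterns match the measured spectrum.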
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
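The sequentially reweighted least squares (SK) iteration used to initialize Gauss-Newton can be shown on a toy single-input single-output model H(ω) = b/(1 + a·e^{-iω}), with 2 parameters rather than 286. The data below are synthesized without noise, so the linearized fit recovers a and b exactly; this is a sketch of the SK idea, not the paper's matrix-fraction implementation.

```python
import cmath

def sk_fit(freqs, data, iters=5):
    """SK-style iteration for H(w) = b / (1 + a*exp(-iw)). Each pass solves
    the weighted linear LS problem
        min sum_k w_k |D_k*(1 + a*e^{-iw_k}) - b|^2,
    with w_k = 1/|1 + a_prev*e^{-iw_k}|^2, which is linear in (a, b)."""
    a = 0.0
    for _ in range(iters):
        m00 = m01 = m11 = v0 = v1 = 0.0
        for w, d in zip(freqs, data):
            wt = 1.0 / abs(1 + a * cmath.exp(-1j * w)) ** 2
            z = d * cmath.exp(-1j * w)   # coefficient of a in the residual
            # real and imaginary parts of the residual give separate rows
            for c0, c1, rhs in ((z.real, -1.0, -d.real), (z.imag, 0.0, -d.imag)):
                m00 += wt * c0 * c0
                m01 += wt * c0 * c1
                m11 += wt * c1 * c1
                v0 += wt * c0 * rhs
                v1 += wt * c1 * rhs
        det = m00 * m11 - m01 * m01      # solve the 2x2 normal equations
        a = (m11 * v0 - m01 * v1) / det
        b = (m00 * v1 - m01 * v0) / det
    return a, b

# Synthetic noiseless frequency response with a = 0.5, b = 2.0.
freqs = [0.1 * k for k in range(1, 31)]
data = [2.0 / (1 + 0.5 * cmath.exp(-1j * w)) for w in freqs]
a, b = sk_fit(freqs, data)
print(round(a, 6), round(b, 6))  # → 0.5 2.0
```

With noisy data the SK fixed point is biased, which is exactly why it serves only as the initializer for the subsequent Gauss-Newton minimization of the true 2-norm error.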
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
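The mode-counting idea can be shown with a toy sketch: build a finite-difference sensitivity matrix along a trajectory and count the singular values above an error tolerance. All names and the fast-slow test system here are assumptions for illustration; the paper's generalized sensitivity analysis is more sophisticated.

```python
import numpy as np

def active_modes(flow, x0, t, tol=1e-3, eps=1e-6):
    """Estimate the number of locally active dynamical modes at time t from a
    finite-difference sensitivity matrix d x(t)/d x(0) and its SVD."""
    n = len(x0)
    base = flow(x0, t)
    S = np.empty((n, n))
    for j in range(n):
        xp = x0.copy()
        xp[j] += eps
        S[:, j] = (flow(xp, t) - base) / eps   # j-th sensitivity column
    sv = np.linalg.svd(S, compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))       # modes above relative threshold

# Fast-slow linear system x' = -1000 x, y' = -y, using its closed-form flow
flow = lambda x, t: np.array([x[0] * np.exp(-1000 * t), x[1] * np.exp(-t)])
x0 = np.array([1.0, 1.0])
print(active_modes(flow, x0, 0.01))  # fast mode already relaxed -> 1
```

The same count taken at a much earlier time (e.g. t = 1e-4) still sees both modes, which is the piecewise, interval-by-interval reduction the abstract describes.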
Gradient augmented level set method for phase change simulations
NASA Astrophysics Data System (ADS)
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
Lemieux, Sébastien
2006-08-25
The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs from the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes at a 10% false-positive rate. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
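A toy version of a probe-level linear model can be sketched as follows: the treatment effect is estimated directly from probe intensities with per-probe affinity terms. The variable names, design matrix, and simulated data are illustrative assumptions; PL-LM's actual model and its Gaussian-mixture DEG call are richer.

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_level_effect(y, treated):
    """Estimate a treatment effect directly from probe-level intensities with
    the linear model y[p, s] = alpha_p + beta * treated[s] + noise; a minimal
    sketch in the spirit of a probe-level linear model."""
    n_probes, n_samples = y.shape
    # Design: one affinity column per probe plus a shared treatment column
    X = np.zeros((n_probes * n_samples, n_probes + 1))
    for p in range(n_probes):
        X[p * n_samples:(p + 1) * n_samples, p] = 1.0
        X[p * n_samples:(p + 1) * n_samples, -1] = treated
    coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
    return coef[-1]   # beta: the estimated treatment effect

# Simulated probe set: 11 probes, 3 control + 3 treated samples, effect = 2.0
affinity = rng.normal(8.0, 1.0, size=(11, 1))
treated = np.array([0, 0, 0, 1, 1, 1], dtype=float)
y = affinity + 2.0 * treated + rng.normal(0.0, 0.1, size=(11, 6))
print(probe_level_effect(y, treated))  # close to the true effect of 2.0
```

Fitting at the probe level avoids first summarizing probes into an expression value, which is the key design choice the abstract highlights.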
GAMUT: GPU accelerated microRNA analysis to uncover target genes through CUDA-miRanda
2014-01-01
Background Non-coding sequences such as microRNAs have important roles in disease processes. Computational microRNA target identification (CMTI) is becoming increasingly important since traditional experimental methods for target identification pose many difficulties. These methods are time-consuming, costly, and often need guidance from computational methods to narrow down candidate genes anyway. However, most CMTI methods are computationally demanding, since they need to handle not only several million query microRNA and reference RNA pairs, but also several million nucleotide comparisons within each given pair. Thus, the need to perform microRNA identification at such large scale has increased the demand for parallel computing. Methods Although most CMTI programs (e.g., the miRanda algorithm) are based on a modified Smith-Waterman (SW) algorithm, the existing parallel SW implementations (e.g., CUDASW++ 2.0/3.0, SWIPE) are unable to meet this demand in CMTI tasks. We present CUDA-miRanda, a fast microRNA target identification algorithm that takes advantage of massively parallel computing on Graphics Processing Units (GPU) using NVIDIA's Compute Unified Device Architecture (CUDA). CUDA-miRanda specifically focuses on the local alignment of short (i.e., ≤ 32 nucleotides) sequences against longer reference sequences (e.g., 20K nucleotides). Moreover, the proposed algorithm is able to report multiple alignments (up to 191 top scores) and the corresponding traceback sequences for any given (query sequence, reference sequence) pair. Results Speeds over 5.36 Giga Cell Updates Per Second (GCUPs) are achieved on a server with 4 NVIDIA Tesla M2090 GPUs. Compared to the original miRanda algorithm, which is evaluated on an Intel Xeon E5620@2.4 GHz CPU, the experimental results show up to 166 times performance gains in terms of execution time. 
In addition, we have verified that exactly the same targets were predicted by both CUDA-miRanda and the original miRanda implementation across multiple test datasets. Conclusions We offer a GPU-based alternative to high-performance computing (HPC) that can be developed locally at a relatively small cost. The community of GPU developers in biomedical research, particularly in genome analysis, is still growing. With increasing shared resources, this community will be able to advance CMTI in a very significant manner. Our source code is available at https://sourceforge.net/projects/cudamiranda/. PMID:25077821
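The kernel that CUDA-miRanda parallelizes is Smith-Waterman local alignment. A plain CPU scoring sketch is shown below; the scoring parameters and sequences are illustrative, not miRanda's tuned settings, and the GPU version additionally tracks multiple top scores and tracebacks.

```python
def smith_waterman(query, ref, match=2, mismatch=-1, gap=-2):
    """Plain CPU Smith-Waterman local alignment score: fill the dynamic
    programming matrix H and return the best local score (no traceback)."""
    rows, cols = len(query) + 1, len(ref) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if query[i - 1] == ref[j - 1] else mismatch)
            # Local alignment: scores are clamped at zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGU", "UUACGUAA"))  # perfect 4-base match -> 8
```

Each cell depends only on its left, upper, and diagonal neighbors, which is exactly what makes anti-diagonal GPU parallelization possible.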
Arjunan, Sridhar P; Kumar, Dinesh K; Naik, Ganesh R
2010-01-01
This research paper reports an experimental study on identifying changes in the fractal properties of surface electromyogram (sEMG) with changes in force level during low-level finger flexions. In a previous study, the authors identified a novel fractal feature, maximum fractal length (MFL), as a measure of the strength of low-level contractions and used this feature to identify various wrist and finger movements. This study tested the relationship between MFL and force of contraction. The results suggest that changes in MFL are correlated with changes in contraction level (20%, 50% and 80% of maximum voluntary contraction (MVC)) during low-level muscle activation such as finger flexions. From the statistical analysis and from visualisation using box plots, it is observed that MFL (p ≈ 0.001) is more strongly correlated with force of contraction than RMS (p ≈ 0.05), even when the muscle contraction is less than 50% MVC during low-level finger flexions. This work has established that this fractal feature can provide information about changes in force level during low-level finger movements for prosthetic control or human-computer interfaces.
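One common formulation of maximum fractal length is the log of the signal's curve length at the smallest scale; the sketch below assumes that formulation for illustration (the paper's exact definition may differ), and the synthetic "sEMG" windows are invented data.

```python
import numpy as np

def maximum_fractal_length(x):
    """Maximum fractal length (MFL) of a signal window, taken here as the
    log of the curve length at the smallest scale; an assumed common
    formulation, not necessarily the paper's exact definition."""
    x = np.asarray(x, dtype=float)
    return np.log10(np.sqrt(np.sum(np.diff(x) ** 2)))

def rms(x):
    """Root mean square, the conventional sEMG amplitude feature."""
    return np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))

# Two synthetic windows: higher activation amplitude -> larger MFL and RMS
rng = np.random.default_rng(1)
low = 0.1 * rng.standard_normal(1024)
high = 0.5 * rng.standard_normal(1024)
print(maximum_fractal_length(low) < maximum_fractal_length(high))  # True
```

The log compresses the feature's range, which is one reason a length-based feature can stay informative at the weak contraction levels the study targets.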
ERIC Educational Resources Information Center
Smith, Mike U.
1991-01-01
Criticizes an article by Browning and Lehman (1988) for (1) using "gene" instead of "allele," (2) misusing the word "misconception," and (3) the possible influences of the computer environment on the results of the study. (PR)
Deadbeat Predictive Controllers
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1997-01-01
Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix. The system identification must be performed first, and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law. It combines the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical-form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feedforward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.
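For context, a textbook deadbeat state-feedback gain (all closed-loop eigenvalues placed at zero) can be computed via Ackermann's formula. Unlike the paper's first algorithm, this sketch does form the controllability matrix explicitly, and the discrete double integrator is an assumed test system.

```python
import numpy as np

def deadbeat_gain(A, B):
    """Textbook deadbeat state feedback via Ackermann's formula: place every
    closed-loop eigenvalue at zero so the state reaches the origin in at most
    n steps. Forms the controllability matrix explicitly, unlike the
    predictive algorithms summarized above."""
    n = A.shape[0]
    ctrb = np.column_stack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    e_last = np.zeros(n)
    e_last[-1] = 1.0
    # K = e_n^T * ctrb^{-1} * A^n  (desired characteristic polynomial z^n)
    return e_last @ np.linalg.inv(ctrb) @ np.linalg.matrix_power(A, n)

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # discrete-time double integrator
B = np.array([0.0, 1.0])
K = deadbeat_gain(A, B)
print(K)  # closed-loop eigenvalues of A - B K are all zero
```

The data-driven algorithms in the paper arrive at an equivalent control law directly from input/output records, without requiring the (A, B) matrices.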
Foley, Finbar; Rajagopalan, Srinivasan; Raghunath, Sushravya M; Boland, Jennifer M; Karwoski, Ronald A; Maldonado, Fabien; Bartholmai, Brian J; Peikert, Tobias
2016-01-01
Increased clinical use of chest high-resolution computed tomography results in increased identification of lung adenocarcinomas and persistent subsolid opacities. However, these lesions range from very indolent to extremely aggressive tumors. Clinically relevant diagnostic tools to noninvasively risk stratify and guide individualized management of these lesions are lacking. Research efforts investigating semiquantitative measures to decrease interrater and intrarater variability are emerging, and in some cases steps have been taken to automate this process. However, many such methods are currently still suboptimal, require validation, and are not yet clinically applicable. The computer-aided nodule assessment and risk yield software application represents a validated, automated, quantitative, and noninvasive tool for risk stratification of adenocarcinoma lung nodules. Computer-aided nodule assessment and risk yield correlates well with consensus histology and postsurgical patient outcomes, and therefore may help to guide individualized patient management, for example, in identification of nodules amenable to radiological surveillance, or in need of adjunctive therapy. Copyright © 2016 Elsevier Inc. All rights reserved.
Structure Computation of Quiet Spike[Trademark] Flight-Test Data During Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
System identification or mathematical modeling is used in the aerospace community for development of simulation models for robust control law design. These models are often described as linear time-invariant processes. Nevertheless, it is well known that the underlying process is often nonlinear. The reason for using a linear approach has been due to the lack of a proper set of tools for the identification of nonlinear systems. Over the past several decades, the controls and biomedical communities have made great advances in developing tools for the identification of nonlinear systems. These approaches are robust and readily applicable to aerospace systems. In this paper, we show the application of one such nonlinear system identification technique, structure detection, for the analysis of F-15B Quiet Spike(TradeMark) aeroservoelastic flight-test data. Structure detection is concerned with the selection of a subset of candidate terms that best describe the observed output. This is a necessary procedure to compute an efficient system description that may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance for the development of robust parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion, which may save significant development time and costs. The objectives of this study are to demonstrate via analysis of F-15B Quiet Spike aeroservoelastic flight-test data for several flight conditions that 1) linear models are inefficient for modeling aeroservoelastic data, 2) nonlinear identification provides a parsimonious model description while providing a high percent fit for cross-validated data, and 3) the model structure and parameters vary as the flight condition is altered.
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter, which utilizes both the covariance of the parameter vector itself and the covariance of the identification error, is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
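The flavor of an on-line identifier that propagates a parameter covariance can be shown with standard recursive least squares. This is a textbook stand-in, not the paper's minimum-variance formulation with multiplicative noise, and the ARX test system is an assumption.

```python
import numpy as np

def rls_identify(phis, ys, n, lam=1.0):
    """Standard recursive least squares parameter identifier; a simplified
    stand-in for an on-line minimum-variance identifier. P is the propagated
    parameter-error covariance (up to the noise variance scale)."""
    theta = np.zeros(n)
    P = 1e6 * np.eye(n)                         # large initial covariance
    for phi, y in zip(phis, ys):
        k = P @ phi / (lam + phi @ P @ phi)     # gain
        theta = theta + k * (y - phi @ theta)   # innovation update
        P = (P - np.outer(k, phi @ P)) / lam    # covariance update
    return theta

# Identify y[t+1] = 0.8 y[t] + 0.5 u[t] on-line from simulated data
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y = np.zeros(501)
for t in range(500):
    y[t + 1] = 0.8 * y[t] + 0.5 * u[t]
phis = [np.array([y[t], u[t]]) for t in range(500)]
theta = rls_identify(phis, y[1:], n=2)
print(theta)  # close to the true parameters [0.8, 0.5]
```

Because the estimate and covariance are updated one sample at a time, the identifier can run on-line, which is the setting the abstract addresses.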
Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm
NASA Astrophysics Data System (ADS)
Mahdavi, Seyed Hossein; Razak, Hashim Abdul
2016-06-01
This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. First, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Then, a multi-species decimal GA coding system is modified to suit an efficient search around the local optima; in this regard, a local mutation operation is introduced in addition to the regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of the structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and of the condensed system while accounting for force effects. Numerical and experimental verification of the effectiveness of the proposed strategy demonstrates its high computational performance in terms of both computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates, which results in optimal evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.
MIMO system identification using frequency response data
NASA Technical Reports Server (NTRS)
Medina, Enrique A.; Irwin, R. D.; Mitchell, Jerrel R.; Bukley, Angelia P.
1992-01-01
A solution to the problem of obtaining a multi-input, multi-output state-space model of a system from its individual input/output frequency responses is presented. The Residue Identification Algorithm (RID) identifies the system poles from a transfer function model of the determinant of the frequency response data matrix. Next, the residue matrices of the modes are computed, guaranteeing that each input/output frequency response is fitted in the least squares sense. Finally, a realization of the system is computed. Results of the application of RID to experimental frequency responses of a large space structure ground test facility are presented and compared to those obtained via the Eigensystem Realization Algorithm.
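The residue-fitting step can be illustrated for a scalar system: once the poles are identified, the residues follow from a linear least squares fit to the frequency response. Names and the test transfer function below are illustrative; RID works with residue matrices over full MIMO data sets.

```python
import numpy as np

def fit_residues(w, H, poles):
    """Given identified (real) system poles, fit modal residues so that the
    partial-fraction sum matches frequency response data in the least
    squares sense; a scalar sketch of RID's residue computation step."""
    A = 1.0 / (1j * w[:, None] - poles[None, :])   # modal basis at s = jw
    Ar = np.vstack([A.real, A.imag])               # real-valued LS problem
    Hr = np.concatenate([H.real, H.imag])
    r, *_ = np.linalg.lstsq(Ar, Hr, rcond=None)
    return r

# Data from H(s) = 1/(s+1) + 2/(s+3); recover the residues, poles known
w = np.linspace(0.1, 20, 100)
poles = np.array([-1.0, -3.0])
H = 1.0 / (1j * w + 1) + 2.0 / (1j * w + 3)
print(fit_residues(w, H, poles))  # recovers the residues [1, 2]
```

Because the poles are fixed first, each residue fit is linear, which is what lets RID guarantee a least squares match for every input/output channel.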
Automatic pattern identification of rock moisture based on the Staff-RF model
NASA Astrophysics Data System (ADS)
Zheng, Wei; Tao, Kai; Jiang, Wei
2018-04-01
Studies on the moisture and damage state of rocks generally focus on qualitative description and mechanical information, which is not suitable for the real-time safety monitoring of a rock mass. In this study, a musical staff computing model is used to quantify the acoustic emission (AE) signals of rocks with different moisture patterns. Then, the random forest (RF) method is adopted to form the Staff-RF model for the real-time pattern identification of rock moisture. The entire process requires only the computed information of the AE signal and does not require the mechanical conditions of the rocks.
Li, Nailu; Mu, Anle; Yang, Xiyun; Magar, Kaman T; Liu, Chao
2018-05-01
Optimal tuning of an adaptive flap controller can improve flap control performance in uncertain operating environments, but the optimization process is usually time-consuming and it is difficult to design a proper optimal tuning strategy for the flap control system (FCS). To solve this problem, a novel adaptive flap controller is designed based on a highly efficient differential evolution (DE) identification technique and a composite adaptive internal model control (CAIMC) strategy. Optimal tuning is easily obtained from the DE-identified inverse of the FCS via the CAIMC structure. To achieve fast tuning, a highly efficient modified adaptive DE algorithm with a new mutant operator and a varying-range adaptive mechanism is proposed for FCS identification. The proposed controller successfully achieves a tradeoff between optimized adaptive flap control and low computational cost. Simulation results show the robustness of the proposed method and its superiority to the conventional adaptive IMC (AIMC) flap controller and to CAIMC flap controllers using other DE algorithms under various uncertain operating conditions. The high computational efficiency of the proposed controller is also verified from the computation times in those operating cases. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Leaf epidermis images for robust identification of plants
da Silva, Núbia Rosa; Oliveira, Marcos William da Silva; Filho, Humberto Antunes de Almeida; Pinheiro, Luiz Felipe Souza; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2016-01-01
This paper proposes a methodology for plant analysis and identification based on extracting texture features from microscopic images of the leaf epidermis. All the experiments were carried out using 32 plant species, with 309 epidermal samples captured by an optical microscope coupled to a digital camera. The results of the computational methods using texture features were compared to the conventional approach, in which quantitative measurements of stomatal traits (density, length and width) were obtained manually. Epidermis image classification using texture achieved a success rate of over 96%, while the success rate was around 60% for the quantitative measurements taken manually. Furthermore, we verified the robustness of our method against the natural phenotypic plasticity of stomata by analysing samples from the same species grown in different environments. The texture methods remained robust even under phenotypic plasticity of stomatal traits, with a decrease of only 20% in the success rate, whereas the quantitative measurements proved highly sensitive, with a decrease of 77%. The comparison between the computational approach and the conventional quantitative measurements shows how advantageous and promising computational systems are for solving problems in botany, such as species identification. PMID:27217018
Parametric identification of the process of preparing ceramic mixture as an object of control
NASA Astrophysics Data System (ADS)
Galitskov, Stanislav; Nazarov, Maxim; Galitskov, Konstantin
2017-10-01
The manufacture of ceramic materials and products largely depends on the preparation of the clay raw material. The main process here is mixing, which in industrial production is mostly done in cross-compound continuous clay mixers with steam humidification. The authors identified features of the dynamics of this technological stage, which is itself a nonlinear control object with distributed parameters. When solving practical automation tasks for a certain class of ceramic materials production, it is important to perform parametric identification of the moving clay. In this paper the task is solved with computational models approximated to a particular section of a clay mixer along its length. The research introduces a methodology of computational experiments as applied to the designed computational model. Parametric identification of the dynamic links was carried out from transient characteristics. The experiments showed that the control object in question is highly non-stationary. The obtained results are oriented toward synthesizing a multidimensional automatic control system for the preparation of ceramic mixture with specified humidity and temperature values under major disturbances to the technological process.
Watts, K.C.; Hassemer, J.R.
1989-01-01
A reconnaissance geochemical survey of stream drainages within 21,000 km2 of southeastern Arizona and southwestern New Mexico shows broad zones of low-level to moderate contrast anomalies, many associated with mid-Tertiary eruptive centers and Tertiary fault zones. Of these eruptive centers, few are known to contain metallic deposits, and most of those known are minor. This, however, may be more a function of shallow erosion level than an indication of the absence of mineralization, since hydrothermal alteration and Fe-Mn-oxide staining are widespread, and geochemical anomalies are pervasive over a larger part of the region than outcrop observations would predict. Accordingly, interpretations of the geochemical data use considerations of relative erosion levels, and inferred element zonalities, to focus on possible undiscovered deposits in the subsurface of base-, precious-, and rare-metal deposits of plutonic-volcanic association. In order to enhance the identification of specific deep targets, we use the empirically determined ratio (Ag + Mn + Pb + Zn + Ba) / (Au + Mo + Cu + Bi + W). This ratio is based on reported metal contents of nonmagnetic heavy-mineral samples from the drainage sediment, determined by emission spectrographic analysis. Before the ratio was computed for each sample site, the data were normalized to a previously estimated regional threshold value. A regional isopleth map was then prepared, using a cell-averaging computer routine, with contours drawn at the 25th, 50th, 75th, 80th, 90th, 95th and 99th percentiles of the computed data. © 1989.
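The per-site ratio and percentile contour levels described above might look like the following sketch. The element values and sites are invented; real inputs would be spectrographic metal contents already normalized to the regional threshold.

```python
import numpy as np

def pathfinder_ratio(sample):
    """Empirical zoning ratio from the survey: upper-level metal suite over
    deep-level metal suite, computed per sample site. Values are assumed to
    be already normalized to the regional threshold."""
    upper = sum(sample[e] for e in ("Ag", "Mn", "Pb", "Zn", "Ba"))
    deep = sum(sample[e] for e in ("Au", "Mo", "Cu", "Bi", "W"))
    return upper / deep

# Two hypothetical sample sites (normalized metal values, invented)
sites = [
    {"Ag": 2, "Mn": 1, "Pb": 3, "Zn": 2, "Ba": 2,
     "Au": 1, "Mo": 1, "Cu": 1, "Bi": 1, "W": 1},
    {"Ag": 1, "Mn": 1, "Pb": 1, "Zn": 1, "Ba": 1,
     "Au": 3, "Mo": 2, "Cu": 2, "Bi": 1, "W": 2},
]
ratios = np.array([pathfinder_ratio(s) for s in sites])
# Isopleth contour levels at the percentiles used in the survey
levels = np.percentile(ratios, [25, 50, 75, 80, 90, 95, 99])
print(ratios)  # site 1 upper-suite enriched (2.0), site 2 deep-suite (0.5)
```

A high ratio flags the shallow (Ag-Mn-Pb-Zn-Ba) zone of a zoned system, consistent with the erosion-level reasoning in the abstract.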
Games, Patrícia Dias; daSilva, Elói Quintas Gonçalves; Barbosa, Meire de Oliveira; Almeida-Souza, Hebréia Oliveira; Fontes, Patrícia Pereira; deMagalhães, Marcos Jorge; Pereira, Paulo Roberto Gomes; Prates, Maura Vianna; Franco, Gloria Regina; Faria-Campos, Alessandra; Campos, Sérgio Vale Aguiar; Baracat-Pereira, Maria Cristina
2016-12-15
Antimicrobial peptides from plants present mechanisms of action that are different from those of conventional defense agents. They are under-explored but have potential as commercial antimicrobials. Bell pepper leaves ('Magali R') are discarded after harvesting the fruit and are a source of bioactive peptides. This work reports the isolation, by peptidomics tools, and the identification and partial characterization, by computational tools, of an antimicrobial peptide from bell pepper leaves, and demonstrates the usefulness of database records and in silico analysis for the study of plant peptides aimed at biotechnological uses. Aqueous extracts from leaves were enriched in peptides by salt fractionation and ultrafiltration. An antimicrobial peptide was isolated by tandem chromatographic procedures. Mass spectrometry, automated peptide sequencing and bioinformatics tools were used alternately for identification and partial characterization of the hevein-like peptide, named HEV-CANN. The computational tools that assisted the identification of the peptide included BlastP, PSI-Blast, ClustalOmega, PeptideCutter, and ProtParam; conventional protein databases (DBs) such as Mascot, Protein-DB, GenBank-DB, RefSeq, Swiss-Prot, and UniProtKB; peptide-specific DBs such as Amper, APD2, CAMP, LAMPs, and PhytAMP; other tools included in ExPASy for proteomics; The Bioactive Peptide Databases; and The Pepper Genome Database. The HEV-CANN sequence has 40 amino acid residues, a mass of 4258.8 Da, a theoretical pI of 8.78, and four disulfide bonds. It was stable, and it inhibited the growth of phytopathogenic bacteria and a fungus. HEV-CANN presents a chitin-binding domain in its sequence. The sequence showed high identity and positive alignment scores in various databases, but never complete identity, suggesting that HEV-CANN may be produced by ribosomal synthesis, which is consistent with its constitutive nature.
Computational tools for proteomics and databases are not adjusted for short sequences, which hampered HEV-CANN identification. Adjusting the statistical tests in large protein databases is one way to promote the significant identification of peptides. The development of specific DBs for plant antimicrobial peptides, with information about peptide sequences, functional genomic data, structural motifs and domains, and peptide-biomolecule interactions, would be valuable and is needed.
Ward, Jodie; Gilmore, Simon R; Robertson, James; Peakall, Rod
2009-11-01
Plant material is frequently encountered in criminal investigations but often overlooked as potential evidence. We designed a DNA-based molecular identification system for 100 Australian grasses that consisted of a series of polymerase chain reaction assays that enabled the progressive identification of grasses to different taxonomic levels. The identification system was based on DNA sequence variation at four chloroplast and two mitochondrial loci. Seventeen informative indels and 68 single-nucleotide polymorphisms were utilized as molecular markers for subfamily to species-level identification. To identify an unknown sample to subfamily level required a minimum of four markers or nine markers for species identification. The accuracy of the system was confirmed by blind tests. We have demonstrated "proof of concept" of a molecular identification system for trace botanical samples. Our evaluation suggests that the adoption of a system that combines this approach with DNA sequencing could assist the morphological identification of grasses found as forensic evidence.
UAV-borne X-band radar for MAV collision avoidance
NASA Astrophysics Data System (ADS)
Moses, Allistair A.; Rutherford, Matthew J.; Kontitsis, Michail; Valavanis, Kimon P.
2011-05-01
Increased use of miniature (unmanned) aerial vehicles (MAVs) has been accompanied by a notable lack of sensors suitable for enabling further increases in levels of autonomy and, consequently, integration into the National Airspace System (NAS). The majority of available sensors suitable for MAV integration are based on infrared detectors, focal plane arrays, optical and ultrasonic rangefinders, etc. These sensors are generally not able to detect or identify other MAV-sized targets, and when detection is possible, considerable computational power is typically required for successful identification. Furthermore, the performance of visual-range optical sensor systems can suffer greatly when operating in the conditions typically encountered during search and rescue, surveillance, combat, and most common MAV applications. However, the addition of a miniature radar system can, in concert with other sensors, provide comprehensive target detection and identification capabilities for MAVs. This trend is observed in manned aviation, where radar is the primary detection and identification sensor system. Within this document a miniature, lightweight X-band radar system for use on a miniature (710 mm rotor diameter) rotorcraft is described. We present analyses of the performance of the system in a realistic scenario with two MAVs. Additionally, an analysis of MAV navigation and collision avoidance behaviors is performed to determine the effect of integrating radar systems into MAV-class vehicles.
Computational tools for exploring sequence databases as a resource for antimicrobial peptides.
Porto, W F; Pires, A S; Franco, O L
Data mining has been recognized by many researchers as a hot topic in different areas. In the post-genomic era, the growing number of sequences deposited in databases has been the reason why these databases have become a resource for novel biological information. In recent years, the identification of antimicrobial peptides (AMPs) in databases has gained attention. The identification of unannotated AMPs has shed some light on the distribution and evolution of AMPs and, in some cases, indicated suitable candidates for developing novel antimicrobial agents. The data mining process has been performed mainly by local alignments and/or regular expressions. Nevertheless, for the identification of distant homologous sequences, other techniques such as antimicrobial activity prediction and molecular modelling are required. In this context, this review addresses the tools and techniques, and also their limitations, for mining AMPs from databases. These methods could be helpful not only for the development of novel AMPs, but also for other kinds of proteins, at a higher level of structural genomics. Moreover, solving the problem of unannotated proteins could bring immeasurable benefits to society, especially in the case of AMPs, which could be helpful for developing novel antimicrobial agents and combating resistant bacteria. Copyright © 2017 Elsevier Inc. All rights reserved.
A mathematical model of vowel identification by users of cochlear implants
Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi; Svirsky, Mario A.
2010-01-01
A simple mathematical model is presented that predicts vowel identification by cochlear implant users based on these listeners' resolving power for the mean locations of first, second, and/or third formant energies along the implanted electrode array. This psychophysically based model provides hypotheses about the mechanism cochlear implant users employ to encode and process the input auditory signal to extract information relevant for identifying steady-state vowels. Using one free parameter, the model predicts most of the patterns of vowel confusions made by users of different cochlear implant devices and stimulation strategies, who show widely different levels of speech perception (from near chance to near perfect). Furthermore, the model can predict results from the literature, such as the frequency mapping study of Skinner et al. [(1995). Ann. Otol. Rhinol. Laryngol. 104, 307–311] and the general trend in the vowel results of Zeng and Galvin's [(1999). Ear Hear. 20, 60–74] studies of output electrical dynamic range reduction. The implementation of the model presented here is specific to vowel identification by cochlear implant users, but the framework of the model is more general. Computational models such as the one presented here can be useful for advancing knowledge about speech perception in hearing-impaired populations, and for providing a guide for clinical research and clinical practice. PMID:20136228
Davidson, Shaun M; Docherty, Paul D; Murray, Rua
2017-03-01
Parameter identification is an important and widely used process across the field of biomedical engineering. However, it is susceptible to a number of potential difficulties, such as parameter trade-off, causing premature convergence at non-optimal parameter values. The proposed Dimensional Reduction Method (DRM) addresses this issue by iteratively reducing the dimension of hyperplanes where trade off occurs, and running subsequent identification processes within these hyperplanes. The DRM was validated using clinical data to optimize 4 parameters of the widely used Bergman Minimal Model of glucose and insulin kinetics, as well as in-silico data to optimize 5 parameters of the Pulmonary Recruitment (PR) Model. Results were compared with the popular Levenberg-Marquardt (LMQ) Algorithm using a Monte-Carlo methodology, with both methods afforded equivalent computational resources. The DRM converged to a lower or equal residual value in all tests run using the Bergman Minimal Model and actual patient data. For the PR model, the DRM attained significantly lower overall median parameter error values and lower residuals in the vast majority of tests. This shows the DRM has potential to provide better resolution of optimum parameter values for the variety of biomedical models in which significant levels of parameter trade-off occur. Copyright © 2017 Elsevier Inc. All rights reserved.
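This is not the authors' DRM itself, but a toy of the problem it targets: in the model y = a·b·x only the product a·b is identifiable, so searching the 2-D (a, b) plane wastes effort along the trade-off curve, while a 1-D search over c = a·b resolves the identifiable quantity directly. All names and numbers here are illustrative:

```python
# Toy model y = a*b*x: a and b trade off, so only their product is
# identifiable from data (generated here with a*b = 6).
DATA = [(x, 6.0 * x) for x in range(1, 6)]

def sse(a, b):
    """Sum-of-squares residual of the two-parameter model."""
    return sum((y - a * b * x) ** 2 for x, y in DATA)

def fit_product(lo=0.1, hi=20.0, steps=2000):
    """Search the 1-D reduced space c = a*b instead of the 2-D (a, b) plane,
    mimicking the idea of identifying along the trade-off hyperplane."""
    grid = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(grid, key=lambda c: sse(c, 1.0))
```

Any (a, b) pair with a·b = 6 fits the data equally well, which is exactly the non-uniqueness that stalls a naive 2-D optimizer.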
Automated Microbiological Detection/Identification System
Aldridge, C.; Jones, P. W.; Gibson, S.; Lanham, J.; Meyer, M.; Vannest, R.; Charles, R.
1977-01-01
An automated, computerized system, the AutoMicrobic System, has been developed for the detection, enumeration, and identification of bacteria and yeasts in clinical specimens. The biological basis for the system resides in lyophilized, highly selective and specific media enclosed in wells of a disposable plastic cuvette; introduction of a suitable specimen rehydrates and inoculates the media in the wells. An automated optical system monitors, and the computer interprets, changes in the media, with enumeration and identification results automatically obtained in 13 h. Sixteen different selective media were developed and tested with a variety of seeded (simulated) and clinical specimens. The AutoMicrobic System has been extensively tested with urine specimens, using a urine test kit (Identi-Pak) that contains selective media for Escherichia coli, Proteus species, Pseudomonas aeruginosa, Klebsiella-Enterobacter species, Serratia species, Citrobacter freundii, group D enterococci, Staphylococcus aureus, and yeasts (Candida species and Torulopsis glabrata). The system has been tested with 3,370 seeded urine specimens and 1,486 clinical urines. Agreement with simultaneous conventional (manual) cultures, at levels of 70,000 colony-forming units per ml (or more), was 92% or better for seeded specimens; clinical specimens yielded results of 93% or better for all organisms except P. aeruginosa, where agreement was 86%. System expansion in progress includes antibiotic susceptibility testing and compatibility with most types of clinical specimens. PMID:334798
49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee
Code of Federal Regulations, 2012 CFR
2012-10-01
.... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...
Causal Reasoning in Medicine: Analysis of a Protocol.
ERIC Educational Resources Information Center
Kuipers, Benjamin; Kassirer, Jerome P.
1984-01-01
Describes the construction of a knowledge representation from the identification of the problem (nephrotic syndrome) to a running computer simulation of causal reasoning to provide a vertical slice of the construction of a cognitive model. Interactions between textbook knowledge, observations of human experts, and computational requirements are…
Thermoelectric pump performance analysis computer code
NASA Technical Reports Server (NTRS)
Johnson, J. L.
1973-01-01
A computer program is presented that was used to analyze and design dual-throat electromagnetic dc conduction pumps for the 5-kwe ZrH reactor thermoelectric system. In addition to a listing of the code and corresponding identification of symbols, the bases for this analytical model are provided.
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
During the month of June, the Survey Research Center (SRC) at the University of Georgia designed new benefits questionnaires for computer software management and information center (COSMIC). As a test of their utility, these questionnaires are now used in the benefits identification process.
Visual perception-based criminal identification: a query-based approach
NASA Astrophysics Data System (ADS)
Singh, Avinash Kumar; Nandi, G. C.
2017-01-01
The visual perception of an eyewitness plays a vital role in criminal identification. It helps law enforcement authorities search for a particular criminal in their previous records. Searching criminal records manually has been reported to take too much time to yield accurate results. We have proposed a query-based approach which minimises the computational cost and reduces the search space. A symbolic database has been created to perform a stringent analysis on 150 public faces (Bollywood celebrities and Indian cricketers) and 90 local faces (our data-set). Expert knowledge has been captured to encode every criminal's anatomical and facial attributes in symbolic form. A fast query-based searching strategy has been implemented using a dynamic decision tree data structure which allows four levels of decomposition to fetch the respective criminal records. Two types of case studies, viewed and forensic sketches, have been considered to evaluate the strength of the proposed approach. We derived 1200 views of the entire population by taking 80 participants as eyewitnesses. The system demonstrates an accuracy of 98.6% for test case I and 97.8% for test case II. Experimental results also show that the approach reduces the search space to the 30 most relevant records.
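A multi-level decision-tree index over symbolic facial attributes, as described, can be sketched as follows. The attribute names, values, and records are invented for illustration; the point is that a query walks one branch per level instead of scanning every record:

```python
# Four decomposition levels over hypothetical symbolic attributes.
LEVELS = ("face_shape", "eyes", "nose", "lips")

RECORDS = [
    {"id": 1, "face_shape": "oval", "eyes": "large", "nose": "sharp", "lips": "thin"},
    {"id": 2, "face_shape": "oval", "eyes": "large", "nose": "flat", "lips": "full"},
    {"id": 3, "face_shape": "round", "eyes": "small", "nose": "sharp", "lips": "thin"},
]

def build_tree(records):
    """Index records level by level into nested dicts (a decision tree)."""
    tree = {}
    for r in records:
        node = tree
        for attr in LEVELS:
            node = node.setdefault(r[attr], {})
        node.setdefault("ids", []).append(r["id"])
    return tree

def query(tree, description):
    """Fetch candidate record ids matching an eyewitness description."""
    node = tree
    for attr in LEVELS:
        node = node.get(description[attr], {})
    return node.get("ids", [])
```

Each level of the walk discards every branch that disagrees with the description, which is the search-space reduction the abstract reports.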
Bhardwaj, Tulika; Haque, Shafiul; Somvanshi, Pallavi
2018-05-12
Bacterial pathogens invade and disrupt the host defense system by means of protein sequences that are structurally similar at both the global and local levels. The sharing of homologous sequences between the host and pathogenic bacteria mediates infection and defines the concept of molecular mimicry. In this study, various computational approaches were employed to elucidate the pathogenicity of Clostridium botulinum ATCC 3502 at the genome-wide level. The genome-wide study revealed that the pathogen mimics the host (Homo sapiens) and unraveled the complex pathogenic pathway through which it causes infection. Comparative 'omics' approaches helped in the selective screening of 'molecular mimicry' candidates, followed by qualitative assessment of their virulence potential and functional enrichment. Overall, this study provides deep insight into the emergence and surveillance of infections caused by multidrug-resistant C. botulinum ATCC 3502. This is the first report to identify similarities between the C. botulinum ATCC 3502 proteome and human host proteins; it resulted in the identification of 20 potential mimicry candidates, which were further characterized qualitatively by sub-cellular organization prediction and functional annotation. This study will provide a variety of avenues for future studies of infectious agents, host-pathogen interactions and the evolution of the pathogenesis process. Copyright © 2018. Published by Elsevier Ltd.
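One simple way to flag mimicry candidates — offered as an illustration, not the study's actual comparative-'omics' pipeline — is shared-k-mer screening between pathogen and host protein sequences. Sequences, names, and the k/threshold choices are made up:

```python
def kmers(seq, k=5):
    """All length-k substrings (k-mers) of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def mimicry_candidates(pathogen, host_seqs, k=5):
    """Pathogen proteins sharing at least one k-mer with any host protein,
    a crude proxy for local sequence similarity."""
    host_kmers = set()
    for h in host_seqs:
        host_kmers |= kmers(h, k)
    return sorted(p for p, s in pathogen.items() if kmers(s, k) & host_kmers)
```

Real pipelines would use alignment statistics rather than exact k-mer hits, but the set-intersection structure of the screen is the same.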
Kun, Ádám; Papp, Balázs; Szathmáry, Eörs
2008-01-01
Background If chemical A is necessary for the synthesis of more chemical A, then A has the power of replication (such systems are known as autocatalytic systems). We provide the first systems-level analysis searching for small-molecular autocatalytic components in the metabolisms of diverse organisms, including an inferred minimal metabolism. Results We find that intermediary metabolism is invariably autocatalytic for ATP. Furthermore, we provide evidence for the existence of additional, organism-specific autocatalytic metabolites in the forms of coenzymes (NAD+, coenzyme A, tetrahydrofolate, quinones) and sugars. Although the enzymatic reactions of a number of autocatalytic cycles are present in most of the studied organisms, they display obligatorily autocatalytic behavior in only a few networks, demonstrating the need for a systems-level approach to identify metabolic replicators embedded in large networks. Conclusion Metabolic replicators are apparently common and potentially both universal and ancestral: without their presence, kick-starting metabolic networks is impossible, even if all enzymes and genes are present in the same cell. Identification of metabolic replicators is also important for attempts to create synthetic cells, as some of these autocatalytic molecules will presumably need to be added to the system since, by definition, the system cannot synthesize them without their initial presence. PMID:18331628
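The seed-dependence idea behind a metabolic replicator can be sketched as a reachability test on a reaction network: a metabolite is autocatalytic if it cannot be produced from the nutrient seeds alone, yet is regenerated once a starting amount of it is supplied. The toy reactions below are illustrative, not the paper's reconstructed networks (which were analyzed with flux-based methods):

```python
# Each reaction: (frozenset of substrates, set of products).
REACTIONS = [
    (frozenset({"glucose", "ATP"}), {"G6P", "ADP"}),
    (frozenset({"G6P"}), {"pyruvate"}),
    (frozenset({"pyruvate", "ADP"}), {"ATP"}),
]

def reachable(seeds, reactions):
    """Return (pool, produced): all reachable metabolites, and the subset
    that actually appears as a product of some firing reaction."""
    pool, produced = set(seeds), set()
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if subs <= pool and not prods <= produced:
                produced |= prods
                pool |= prods
                changed = True
    return pool, produced

def is_replicator(m, seeds, reactions):
    """m is autocatalytic: not producible from the seeds alone, but
    regenerated (appears as a product) once m itself is seeded."""
    _, made_without = reachable(seeds, reactions)
    _, made_with = reachable(set(seeds) | {m}, reactions)
    return m not in made_without and m in made_with
```

In this toy network glucose alone produces nothing (phosphorylation needs ATP), but seeding ATP lets the cycle run and regenerate ATP — the paper's "kick-start" observation in miniature.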
Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M
2002-07-01
The availability of entire genome sequences shifts scientific attention towards large-scale identification of genome function, as in genome studies. In the near future, data about cellular processes at the molecular level will accumulate at an accelerating rate as a result of proteomics studies. It is therefore essential to develop tools for storing, integrating, accessing, and analyzing these data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors that provide an integrated, multi-user environment for visualizing and manipulating networks of cellular events. The tool features automated pathway layout, functional computation support, advanced querying and a user-friendly graphical interface. We expect that Patika will be a valuable tool for rapid knowledge acquisition, interpretation of large-scale microarray data, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.
NASA Astrophysics Data System (ADS)
Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.
2017-11-01
The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or mismatches an expected response. Brain activity during monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs) revealing complex cerebral responses to deviant sensory stimuli. Development of accurate error detection systems is of great importance both concerning practical applications and in investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations of the processing of erroneous sensory information for each level of complexity we employ Support Vector Machines (SVM) classifiers with various learning methods and kernels using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers with mean accuracy of 93% and 91% respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
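The greedy Sequential Forward Selection loop used for dimensionality reduction can be sketched independently of the SVM: each step adds whichever remaining feature most improves a supplied score function. The toy scorer and weights below are placeholders for cross-validated classifier accuracy on ERP features:

```python
def sfs(features, score, k):
    """Sequential Forward Selection: greedily grow the selected subset by
    the feature that maximizes score(selected + [f]) at each step."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy scorer: hypothetical per-feature usefulness, with 'a' and 'b' modeled
# as redundant (their joint contribution is penalized).
WEIGHT = {"a": 0.9, "b": 0.8, "c": 0.5, "d": 0.1}

def toy_score(subset):
    s = sum(WEIGHT[f] for f in subset)
    if "a" in subset and "b" in subset:  # redundant pair
        s -= 0.7
    return s
```

Note how the redundancy penalty makes SFS skip 'b' despite its high individual weight — the removal of redundant features the abstract describes.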
Application of permanents of square matrices for DNA identification in multiple-fatality cases
2013-01-01
Background DNA profiling is essential for individual identification. In forensic medicine, the likelihood ratio (LR) is commonly used to identify individuals. The LR is calculated by comparing two hypotheses for the sample DNA: that the sample DNA is identical or related to a reference DNA, and that it is randomly sampled from a population. For multiple-fatality cases, however, identification should be considered as an assignment problem, and a particular sample and reference pair should therefore be compared with other possibilities conditional on the entire dataset. Results We developed a new method to compute the probability via permanents of square matrices of nonnegative entries. As the exact permanent is known as a #P-complete problem, we applied the Huber–Law algorithm to approximate the permanents. We performed a computer simulation to evaluate the performance of our method via receiver operating characteristic curve analysis compared with LR under the assumption of a closed incident. Differences between the two methods were well demonstrated when references provided neither obligate alleles nor impossible alleles. The new method exhibited higher sensitivity (0.188 vs. 0.055) at a threshold value of 0.999, at which specificity was 1, and it exhibited higher area under a receiver operating characteristic curve (0.990 vs. 0.959, P = 9.6E-15). Conclusions Our method therefore offers a solution for a computationally intensive assignment problem and may be a viable alternative to LR-based identification for closed-incident multiple-fatality cases. PMID:23962363
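In this setting the (i, j) matrix entry would be the likelihood of pairing sample i with reference j, and the assignment probability involves permanents of such matrices. A minimal exact permanent via Ryser's formula is sketched below; it is O(2^n·n^2) and workable only for small matrices, which is precisely why the paper resorts to the Huber-Law approximation for realistic sizes:

```python
from itertools import combinations

def permanent(a):
    """Exact permanent of a square matrix via Ryser's inclusion-exclusion
    formula: perm(A) = (-1)^n * sum_S (-1)^|S| * prod_i sum_{j in S} a[i][j],
    over nonempty column subsets S."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For a 2x2 matrix the permanent is ad + bc (the determinant without the minus sign), which gives a quick sanity check.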
ERIC Educational Resources Information Center
Özbek, Necdet Sinan; Eker, Ilyas
2015-01-01
This study describes a set of real-time interactive experiments that address system identification and model reference adaptive control (MRAC) techniques. In constructing laboratory experiments that contribute to efficient teaching, experimental design and instructional strategy are crucial, but a process for doing this has yet to be defined. This…
Hypermedia in the Plant Sciences: The Weed Key and Identification System/Videodisc.
ERIC Educational Resources Information Center
Ragan, Lawrence C.
1991-01-01
In cooperation with a university educational technology unit, an agronomy professor used hypercard and videodisk technology to develop a computer program for identification of 181 weed species based on user-selected characteristics. This solution was found during a search for a way to organize course content in a concise, manageable system. (MSE)
Live Specimens More Effective than World Wide Web for Learning Plant Material
ERIC Educational Resources Information Center
Taraban, Roman; McKenney, Cynthia; Peffley, Ellen; Applegarth, Ashley
2004-01-01
The World Wide Web and other computer-based media are new teaching resources for plant identification. The purpose of the experiments reported here was to test whether learning plant identification for woody and herbaceous plant material over the web was as effective, more effective, or preferred by undergraduate students when compared with…
Matrix Infrared Spectra of Manganese and Iron Isocyanide Complexes.
Chen, Xiuting; Li, Qingnuan; Andrews, Lester; Gong, Yu
2017-11-22
Mono- and diisocyanide complexes of manganese and iron were prepared via the reactions of laser-ablated manganese and iron atoms with (CN)2 in an argon matrix. Product identifications were performed based on the characteristic infrared absorptions from isotopically labeled (CN)2 experiments as compared with computed values for both cyanides and isocyanides. Manganese atoms reacted with (CN)2 to produce Mn(NC)2 upon λ > 220 nm irradiation, during which MnNC was formed mainly as a result of the photoinduced decomposition of Mn(NC)2. Similar reaction products FeNC and Fe(NC)2 were formed during the reactions of Fe and (CN)2. All the product molecules together with the unobserved cyanide isomers were predicted to have linear geometries at the B3LYP level of theory. The cyanide complexes of manganese and iron were computed to be more stable than the isocyanide isomers, with energy differences between 0.4 and 4 kcal/mol at the CCSD(T) level. Although manganese and iron cyanide molecules are slightly more stable according to theory, no absorption could be assigned to these isomers in the region above the isocyanides, possibly due to their low infrared intensities.
Design for interaction between humans and intelligent systems during real-time fault management
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.
1992-01-01
Initial results are reported to provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective of the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information-level requirements analysis of HCI, (2) high-level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.
Real-time flutter identification
NASA Technical Reports Server (NTRS)
Roy, R.; Walker, R.
1985-01-01
The techniques and a FORTRAN 77 MOdal Parameter IDentification (MOPID) computer program developed for identification of the frequencies and damping ratios of multiple flutter modes in real time are documented. Physically meaningful model parameterization was combined with state of the art recursive identification techniques and applied to the problem of real time flutter mode monitoring. The performance of the algorithm in terms of convergence speed and parameter estimation error is demonstrated for several simulated data cases, and the results of actual flight data analysis from two different vehicles are presented. It is indicated that the algorithm is capable of real time monitoring of aircraft flutter characteristics with a high degree of reliability.
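The recursive, sample-by-sample character of such real-time identification can be illustrated with a one-parameter recursive least squares (RLS) estimator. This is a toy stand-in, not MOPID itself, which tracks multiple physically parameterized flutter modes; here a single AR coefficient of y[t] ≈ a·y[t-1] is updated as each sample arrives:

```python
def rls_scalar(ys, lam=0.99):
    """One-parameter recursive least squares with forgetting factor lam.
    The estimate a is refined at every new sample, as required for
    real-time monitoring (no batch re-fit over the whole record)."""
    a, p = 0.0, 1e6   # parameter estimate and its (scalar) covariance
    for prev, cur in zip(ys, ys[1:]):
        k = p * prev / (lam + p * prev * prev)   # gain
        a += k * (cur - a * prev)                # innovation update
        p = (p - k * prev * p) / lam             # covariance update
    return a
```

The forgetting factor lam < 1 discounts old data, which is what lets a recursive estimator follow slowly varying frequencies and damping ratios in flight.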
McElvania Tekippe, Erin; Shuey, Sunni; Winkler, David W; Butler, Meghan A; Burnham, Carey-Ann D
2013-05-01
Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) can be used as a method for the rapid identification of microorganisms. This study evaluated the Bruker Biotyper (MALDI-TOF MS) system for the identification of clinically relevant Gram-positive organisms. We tested 239 aerobic Gram-positive organisms isolated from clinical specimens. We evaluated 4 direct-smear methods, including "heavy" (H) and "light" (L) smears, with and without a 1-μl direct formic acid (FA) overlay. The quality measure assigned to a MALDI-TOF MS identification is a numerical value or "score." We found that a heavy smear with a formic acid overlay (H+FA) produced optimal MALDI-TOF MS identification scores and the highest percentage of correctly identified organisms. Using a score of ≥2.0, we identified 183 of the 239 isolates (76.6%) to the genus level, and of the 181 isolates resolved to the species level, 141 isolates (77.9%) were correctly identified. To maximize the number of correct identifications while minimizing misidentifications, the data were analyzed using a score of ≥1.7 for genus- and species-level identification. Using this score, 220 of the 239 isolates (92.1%) were identified to the genus level, and of the 181 isolates resolved to the species level, 167 isolates (92.2%) could be assigned an accurate species identification. We also evaluated a subset of isolates for preanalytic factors that might influence MALDI-TOF MS identification. Frequent subcultures increased the number of unidentified isolates. Incubation temperatures and subcultures of the media did not alter the rate of identification. These data define the ideal bacterial preparation, identification score, and medium conditions for optimal identification of Gram-positive bacteria by use of MALDI-TOF MS.
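The score-cutoff logic the study converged on (accept identifications scoring >=1.7 for genus- and species-level calls, with >=2.0 as the stricter alternative) amounts to a threshold rule. A sketch, with an agreement computation over hypothetical (score, correct) pairs:

```python
def interpret(score, cutoff=1.7):
    """Accept a Biotyper identification when the score meets the cutoff
    (the study evaluated both >=2.0 and the more permissive >=1.7)."""
    return "accepted" if score >= cutoff else "no identification"

def agreement(results, cutoff=1.7):
    """Percent of accepted identifications that matched the reference
    method, analogous to the study's percent-correct figures.
    results: iterable of (score, matched_reference) pairs."""
    accepted = [ok for score, ok in results if score >= cutoff]
    return 100.0 * sum(accepted) / len(accepted) if accepted else 0.0
```

Lowering the cutoff trades a higher acceptance rate against the risk of more misidentifications, which is exactly the balance the authors tuned.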
Mirhendi, H; Ghiasian, A; Vismer, Hf; Asgary, Mr; Jalalizand, N; Arendrup, Mc; Makimura, K
2010-01-01
Fusarium species are capable of causing a wide range of crop plant infections as well as uncommon human infections. Many species of the genus produce mycotoxins, which are responsible for acute or chronic diseases in animals and humans. Identification of Fusaria to the species level is necessary for biological, epidemiological, pathological, and toxicological purposes. In this study, we undertook a computer-based analysis of ITS1-5.8SrDNA-ITS2 in 192 GenBank sequences from 36 Fusarium species to obtain data for establishing a molecular method for species-specific identification. Sequence data and 610 restriction enzymes were analyzed to choose RFLP profiles, and a PCR-restriction enzyme system for identification and typing of species was subsequently designed and validated. DNA extracted from 32 reference strains of 16 species was amplified using the ITS1 and ITS4 universal primers, followed by sequencing and restriction enzyme digestion of the PCR products. Three restriction enzymes, TasI, ItaI and CfoI, provided the best discriminatory power. Using the ITS1 and ITS4 primers, a product of approximately 550 bp was observed for all Fusarium strains, as expected from the sequence analyses. After RFLP of the PCR products, some species were definitively identified by the method, while some strains within the same species showed different patterns. Our profile therefore has potential not only for identification of species but also for genotyping of strains. On the other hand, some Fusarium species were 100% identical in their ITS1-5.8SrDNA-ITS2 sequences, so these species cannot be differentiated using this target alone. The ITS-PCR-RFLP method might be useful for preliminary differentiation and typing of the most common Fusarium species.
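The in-silico side of PCR-RFLP — predicting a fragment-length pattern from a sequence, a recognition site, and a cut offset — can be sketched as below. The enzyme name, site, and offset shown are illustrative stand-ins, not the actual TasI/ItaI/CfoI definitions:

```python
def digest(seq, site, cut_offset):
    """Fragment lengths after cutting a linear sequence at every occurrence
    of the recognition site, cut_offset bases into the site."""
    cuts, i = [], seq.find(site)
    while i != -1:
        cuts.append(i + cut_offset)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

def rflp_pattern(seq, enzymes):
    """Sorted fragment-length profile per enzyme; isolates with identical
    profiles are indistinguishable by this PCR-RFLP scheme.
    enzymes: {name: (recognition_site, cut_offset)}."""
    return {name: sorted(digest(seq, site, off))
            for name, (site, off) in enzymes.items()}
```

Screening all 610 enzymes then reduces to computing such profiles for every species' sequence and picking the enzymes whose profiles best separate the species.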
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. 
Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784
7 CFR 1.427 - Filing; identification of parties of record; service; and computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ADMINISTRATIVE REGULATIONS Rules of Practice Governing Adjudication of Sourcing Area Applications and Formal Review of Sourcing Areas Pursuant to the Forest Resources Conservation and Shortage Relief Act of 1990... officer or employee. (e) Computations of time. Saturdays, Sundays and Federal holidays shall be included...
ERIC Educational Resources Information Center
Browning, Mark; Lehman, James D.
1991-01-01
Authors respond to criticisms by Smith in the same issue and defend their use of the term "gene" and "misconception." Authors indicate that they did not believe that the use of computers significantly skewed their data concerning student errors. (PR)
Practical Problem-Based Learning in Computing Education
ERIC Educational Resources Information Center
O'Grady, Michael J.
2012-01-01
Computer Science (CS) is a relatively new discipline, and how best to introduce it to new students remains an open question. Likewise, the identification of appropriate instructional strategies for the diverse topics that constitute the average curriculum remains open to debate. One approach considered by a number of practitioners in CS education…
Computer Aided Battery Engineering Consortium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesaran, Ahmad
A multi-national lab collaborative team was assembled that includes experts from academia and industry to enhance recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design, both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were simultaneously coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.
Ng, L S Y; Sim, J H C; Eng, L C; Menon, S; Tan, T Y
2012-08-01
Aero-tolerant Actinomyces spp. are an under-recognised cause of cutaneous infections, in part because identification using conventional phenotypic methods is difficult and may be inaccurate. Matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) is a promising new technique for bacterial identification, but there are limited data on the identification of aero-tolerant Actinomyces spp. This study evaluated the accuracy of a phenotypic biochemical kit, MALDI-TOF MS and genotypic identification methods for the identification of this problematic group of organisms. Thirty aero-tolerant Actinomyces spp. were isolated from soft-tissue infections over a 2-year period. Species identification was performed by 16S rRNA sequencing, and genotypic results were compared with results obtained by API Coryne and MALDI-TOF MS. There was poor agreement between API Coryne and genotypic identification, with only 33% of isolates correctly identified to the species level. MALDI-TOF MS correctly identified 97% of isolates to the species level, with 33% of identifications achieved with high confidence scores. MALDI-TOF MS is a promising new tool for the identification of aero-tolerant Actinomyces spp., but improvement of the database is required in order to increase the confidence level of identification.
Utility of 16S rDNA Sequencing for Identification of Rare Pathogenic Bacteria.
Loong, Shih Keng; Khor, Chee Sieng; Jafar, Faizatul Lela; AbuBakar, Sazaly
2016-11-01
Phenotypic identification systems are established methods for laboratory identification of bacteria causing human infections. Here, the utility of phenotypic identification systems was compared against the 16S rDNA identification method on clinical isolates obtained during a 5-year study period, with special emphasis on isolates that gave unsatisfactory identification. One hundred and eighty-seven clinical bacteria isolates were tested with commercial phenotypic identification systems and 16S rDNA sequencing. Isolate identities determined using phenotypic identification systems and 16S rDNA sequencing were compared for similarity at the genus and species level, with 16S rDNA sequencing as the reference method. Phenotypic identification systems identified ~46% (86/187) of the isolates with identity similar to that identified using 16S rDNA sequencing. Approximately 39% (73/187) of the isolates showed a different genus identity, and ~15% (28/187) could not be identified at all using the phenotypic identification systems. Both methods succeeded in determining the species identities of 55 isolates; however, only ~69% (38/55) of the isolates matched at the species level. 16S rDNA sequencing could not determine the species of ~20% (37/187) of the isolates. 16S rDNA sequencing is a useful method, superior to the phenotypic identification systems, for the identification of rare and difficult-to-identify bacteria species. The 16S rDNA sequencing method, however, does have limitations for species-level identification of some bacteria, highlighting the need for better bacterial pathogen identification tools. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Cibula, William G.; Nyquist, Maurice O.
1987-01-01
An unsupervised computer classification of vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective of the project was to derive a level of landcover classification useful for park management applications while maintaining an acceptably high level of classification accuracy. Initially, nine generalized vegetation/landcover classes were derived. Overall classification accuracy was 91.7 percent. In an attempt to refine the level of classification, a geographic information system (GIS) approach was employed. Topographic data and watershed boundary (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resultant Boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, meeting the project objective. This classification could now serve as input to a GIS and, coupled with other ancillary data, help answer questions for park management programs such as fire management.
NASA Technical Reports Server (NTRS)
Stroke, G. W.
1972-01-01
Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.
Identified state-space prediction model for aero-optical wavefronts
NASA Astrophysics Data System (ADS)
Faghihi, Azin; Tesch, Jonathan; Gibson, Steve
2013-07-01
A state-space disturbance model and associated prediction filter for aero-optical wavefronts are described. The model is computed by system identification from a sequence of wavefronts measured in an airborne laboratory. Estimates of the statistics and flow velocity of the wavefront data are shown and can be computed from the matrices in the state-space model without returning to the original data. Numerical results compare velocity values and power spectra computed from the identified state-space model with those computed from the aero-optical data.
NASA Technical Reports Server (NTRS)
Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.
1982-01-01
A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that: Narrow spectral responses are advantageous; signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those conditions during which reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
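The normalized mean-square-distance (MSD) classification described above can be sketched as follows. This is a minimal illustration only: the 4-band reference spectra and class labels are hypothetical, not taken from the study.

```python
import numpy as np

def normalize(sig):
    # Scale each spectral signal to unit Euclidean norm, removing overall
    # brightness differences between imaging conditions.
    return sig / np.linalg.norm(sig, axis=-1, keepdims=True)

def msd_classify(signal, references):
    # Assign the class whose reference spectrum has the smallest
    # mean-square distance to the observed signal.
    d = np.mean((references - signal) ** 2, axis=1)
    return int(np.argmin(d))

# Hypothetical 4-band spectral responses for three cover classes.
refs = normalize(np.array([
    [0.1, 0.4, 0.8, 0.9],   # vegetation
    [0.5, 0.5, 0.5, 0.5],   # bare land
    [0.3, 0.2, 0.1, 0.05],  # water
]))

# A vegetation-like observation, brighter overall; normalization removes
# the scale factor before the distance comparison.
obs = normalize(np.array([0.2, 0.8, 1.6, 1.8]))
print(msd_classify(obs, refs))  # -> 0 (vegetation)
```

Because normalization discards the overall signal level, the classifier is insensitive to the kind of illumination change the abstract says degrades the maximum-likelihood approach.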
NASA Astrophysics Data System (ADS)
Tokarczyk, Jarosław
2016-12-01
A method for identifying the effects of dynamic overload on people, which may occur in an emergency state of a suspended monorail, is presented in this paper. The braking curve was determined using an MBS (Multi-Body System) simulation. For this purpose, a computational MBS model of the suspended monorail was developed and two different variants of numerical calculations were carried out. An algorithm for conducting numerical simulations to assess the effects of dynamic overload acting on suspended monorail users is also presented. An example FEM (Finite Element Method) computational model, composed of the technical means and the anthropometric ATB (Articulated Total Body) model, is shown. The simulation results are presented: a graph of the HIC (Head Injury Criterion) parameter and successive phases of displacement of the ATB model. A generator of computational models for the safety criterion, which enables preparation of input data and remote starting of the simulation, is proposed.
NASA Astrophysics Data System (ADS)
Springer, D. W.
Bell Helicopter Textron, Incorporated (BHTI) installed two Digital Equipment Corporation PDP-11 computers and an American Can Inc. ink jet printer in 1980 as the cornerstone of the Wire Harness Automated Manufacturing System (WHAMS). WHAMS is based upon the electrical assembly philosophy of continuous filament harness forming. This installation provided BHTI with a 3-to-1 return on investment by reducing wire and cable identification cycle time by 80 percent and harness forming, on dedicated layout tooling, by 40 percent. Yet this improvement in harness forming created a bottleneck in connector assembly. To remove this bottleneck, BHTI has installed a prototype connector assembly cell that integrates the WHAMS database and innovative computer technologies to cut harness connector assembly cycle time. This novel connector assembly cell uses voice recognition, laser identification, and animated computer graphics to guide the electrician through the correct assembly of harness connectors.
Classification of cancerous cells based on the one-class problem approach
NASA Astrophysics Data System (ADS)
Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert
1996-03-01
One of the most important factors in reducing the impact of cancerous diseases is early diagnosis, which requires a robust detection method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system was developed using Fuzzy ARTMAP neural networks. Experiments were performed on a set of 542 patterns taken from a sample of breast cancer. The results show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
Optimizations for the EcoPod field identification tool
Manoharan, Aswath; Stamberger, Jeannie; Yu, YuanYuan; Paepcke, Andreas
2008-01-01
Background We sketch our species identification tool for palm-sized computers that helps knowledgeable observers with census activities. An algorithm turns an identification matrix into a minimal-length series of questions that guide the operator towards identification. Historic observation data from the census geographic area helps minimize question volume. We explore how much historic data is required to boost performance, and whether the use of history negatively impacts identification of rare species. We also explore how characteristics of the matrix interact with the algorithm, and how best to predict the probability of observing a previously unseen species. Results Point counts of birds taken at Stanford University's Jasper Ridge Biological Preserve between 2000 and 2005 were used to examine the algorithm. A computer identified species by correctly answering, and counting, the algorithm's questions. We also explored how the character density of the key matrix and the theoretical minimum number of questions for each bird in the matrix influenced the algorithm. Our investigation of the required probability smoothing determined whether Laplace smoothing of observation probabilities was sufficient, or whether the more complex Good-Turing technique was required. Conclusion Historic data improved identification speed, but only impacted the top 25% most frequently observed birds. For rare birds the history-based algorithms did not impose a noticeable penalty in the number of questions required for identification. For our dataset, neither the age of the historic data nor the number of observation years impacted the algorithm. Density of characters for different taxa in the identification matrix did not impact the algorithms. Intrinsic differences in identifying different birds did affect the algorithm, but the differences affected the baseline method of not using historic data to exactly the same degree.
We found that Laplace smoothing performed better for rare species than Simple Good-Turing, and that, contrary to expectation, the technique did not then adversely affect identification performance for frequently observed birds. PMID:18366649
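The add-one (Laplace) smoothing that the study found preferable for rare species can be sketched as follows; the species indices and historic counts are invented for illustration, not the Jasper Ridge data.

```python
from collections import Counter

def laplace_probs(counts, vocab_size, alpha=1.0):
    # Add-alpha (Laplace) smoothing: every species, including ones never
    # observed historically, receives a small non-zero probability, so a
    # previously unseen species can still be reached by the question series.
    total = sum(counts.values()) + alpha * vocab_size
    return {s: (counts.get(s, 0) + alpha) / total for s in range(vocab_size)}

# Hypothetical historic point counts for 4 species; species 3 was never seen.
history = Counter({0: 50, 1: 30, 2: 20})
probs = laplace_probs(history, vocab_size=4)

print(probs[3] > 0)                        # unseen species keeps some mass
print(abs(sum(probs.values()) - 1.0) < 1e-9)  # still a proper distribution
```

Frequently observed species keep higher probabilities (here species 0 outranks species 1), which is why, as the abstract reports, smoothing the rare end need not hurt identification of common birds.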
On using the Hilbert transform for blind identification of complex modes: A practical approach
NASA Astrophysics Data System (ADS)
Antunes, Jose; Debut, Vincent; Piteau, Philippe; Delaune, Xavier; Borsoi, Laurent
2018-01-01
The modal identification of dynamical systems under operational conditions, when subjected to wide-band unmeasured excitations, is today a viable alternative to more traditional modal identification approaches based on processing sets of measured FRFs or impulse responses. Among current techniques for performing operational modal identification, the so-called blind identification methods are the subject of considerable investigation. In particular, the SOBI (Second-Order Blind Identification) method was found to be quite efficient. SOBI was originally developed for systems with normal modes. To address systems with complex modes, various extension approaches have been proposed, in particular: (a) Using a first-order state-space formulation for the system dynamics; (b) Building complex analytic signals from the measured responses using the Hilbert transform. In this paper we further explore the latter option, which is conceptually interesting while preserving the model order and size. Focus is on applicability of the SOBI technique for extracting the modal responses from analytic signals built from a set of vibratory responses. The novelty of this work is to propose a straightforward computational procedure for obtaining the complex cross-correlation response matrix to be used for the modal identification procedure. After clarifying subtle aspects of the general theoretical framework, we demonstrate that the correlation matrix of the analytic responses can be computed through a Hilbert transform of the real correlation matrix, so that the actual time-domain responses are no longer required for modal identification purposes. 
The numerical validation of the proposed technique is presented, based on time-domain simulations of a conceptual physical multi-modal system designed to display modes ranging from normal to highly complex while keeping modal damping low and nearly independent of the modal complexity; such a system can also prove very interesting in test bench applications. Numerical results for complex modal identifications are presented, and the quality of the identified modal matrix and modal responses, extracted using the complex SOBI technique with the proposed formulation, is assessed.
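The computational shortcut at the heart of the paper, obtaining the complex correlation of the analytic responses from a Hilbert transform of the real correlation function, can be illustrated numerically. This is a minimal sketch under stated assumptions: the two synthetic responses are zero-mean and mid-band (the identity fails only at the DC and Nyquist bins, which are kept essentially empty here), and circular FFT-based correlation is used; none of this is the paper's actual test system.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
# Two synthetic zero-mean responses sharing a mid-band component.
x = np.cos(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(n)
y = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(n)
x -= x.mean()
y -= y.mean()

def circ_xcorr(a, b):
    # Circular cross-correlation via FFT: r[tau] = <a(t+tau) conj(b(t))>.
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))) / len(a)

# Route 1: correlate the analytic (complex) responses directly.
r_analytic = circ_xcorr(hilbert(x), hilbert(y))

# Route 2: Hilbert-transform the real correlation function instead,
# never forming the analytic time-domain responses.
r_real = circ_xcorr(x, y).real
r_from_real = 2 * hilbert(r_real)

print(np.allclose(r_analytic, r_from_real, atol=1e-2))
```

The factor of 2 arises because the analytic signal doubles the positive-frequency content of each response; route 2 is the cheaper path the paper advocates, since only the real correlation matrix need be stored.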
Counter-Stereotypes and Feminism Promote Leadership Aspirations in Highly Identified Women.
Leicht, Carola; Gocłowska, Małgorzata A; Van Breen, Jolien A; de Lemus, Soledad; Randsley de Moura, Georgina
2017-01-01
Although women who highly identify with other women are more susceptible to stereotype threat effects, women's identification might associate with greater leadership aspirations contingent on (1) counter-stereotype salience and (2) feminist identification. When gender counter-stereotypes are salient, women's identification should associate with greater leadership aspiration regardless of feminism, while when gender stereotypes are salient, women's identification would predict greater leadership aspirations contingent on a high level of feminist identification. In our study US-based women ( N = 208) attended to gender stereotypic (vs. counter-stereotypic) content. We measured identification with women and identification with feminism, and, following the manipulation, leadership aspirations in an imagined work scenario. The interaction between identification with women, identification with feminism, and attention to stereotypes (vs. counter-stereotypes) significantly predicted leadership aspirations. In the counter-stereotypic condition women's identification associated with greater leadership aspirations regardless of feminist identification. In the stereotypic condition women's identification predicted leadership aspirations only at high levels of feminist identification. We conclude that salient counter-stereotypes and a strong identification with feminism may help high women identifiers increase their leadership aspirations.
NASA Technical Reports Server (NTRS)
Nez, G. (Principal Investigator); Mutter, D.
1977-01-01
The author has identified the following significant results. New LANDSAT analysis software and linkages with other computer mapping software were developed. Significant results were also achieved in training, communication, and identification of needs for developing the LANDSAT/computer mapping technologies into operational tools for use by decision makers.
A new model to compute the desired steering torque for steer-by-wire vehicles and driving simulators
NASA Astrophysics Data System (ADS)
Fankem, Steve; Müller, Steffen
2014-05-01
This paper deals with the control of the hand wheel actuator in steer-by-wire (SbW) vehicles and driving simulators (DSs). A novel model for the computation of the desired steering torque is presented. The introduced steering torque computation aims not only to generate a realistic steering feel, meaning that the driver should not miss the basic steering functionality of a modern conventional steering system, such as electric power steering (EPS) or hydraulic power steering (HPS), in any driving situation. In addition, the modular structure of the steering torque computation, combined with suitably selected tuning parameters, aims to offer a high degree of customisability of the steering feel and thus to provide each driver with his preferred steering feel in a very intuitive manner. The task and the tuning of each module are first described. Then, the steering torque computation is parameterised such that the steering feel of a series EPS system is reproduced. For this purpose, experiments are conducted in a hardware-in-the-loop environment, where a test EPS is mounted on a steering test bench coupled with a vehicle simulator, and parameter identification techniques are applied. Subsequently, how well the steering torque computation mimics the test EPS system is objectively evaluated with respect to criteria concerning the steering torque level and gradient, the feedback behaviour and the steering return ability. Finally, the intuitive tuning of the modular steering torque computation is demonstrated by deriving a sportier steering feel configuration.
Computer animations stimulate contagious yawning in chimpanzees
Campbell, Matthew W.; Carter, J. Devyn; Proctor, Darby; Eisenberg, Michelle L.; de Waal, Frans B. M.
2009-01-01
People empathize with fictional displays of behaviour, including those of cartoons and computer animations, even though the stimuli are obviously artificial. However, the extent to which other animals also may respond empathetically to animations has yet to be determined. Animations provide a potentially useful tool for exploring non-human behaviour, cognition and empathy because computer-generated stimuli offer complete control over variables and the ability to program stimuli that could not be captured on video. Establishing computer animations as a viable tool requires that non-human subjects identify with and respond to animations in a way similar to the way they do to images of actual conspecifics. Contagious yawning has been linked to empathy and poses a good test of involuntary identification and motor mimicry. We presented 24 chimpanzees with three-dimensional computer-animated chimpanzees yawning or displaying control mouth movements. The apes yawned significantly more in response to the yawn animations than to the controls, implying identification with the animations. These results support the phenomenon of contagious yawning in chimpanzees and suggest an empathic response to animations. Understanding how chimpanzees connect with animations, to both empathize and imitate, may help us to understand how humans do the same. PMID:19740888
ERIC Educational Resources Information Center
Stagg, Bethan C.; Donkin, Maria E.; Smith, Alison M.
2015-01-01
Bryophytes are a rewarding study group in field biology and the UK bryophyte flora has international importance to biodiversity conservation. We designed an identification key to common woodland moss species and compared the usability of two formats, web-based multi-access and printed dichotomous key, with undergraduate students. The rate of…
Modeling of Diffuse Photometric Signatures of Satellites for Space Object Identification.
1982-12-01
The report provides the groundwork for development of a computer program which could serve as an aid to tactical space object identification and analysis. Contents include: Photometric Analysis Capability at the ADIC; Operational Limitations of the Photometric Data Analysis Module (PDAM); PDAM Diffuse Analysis; Real World SOI Requirements vs PDAM Capabilities; Statement of the Problem.
Learning about Bird Species on the Primary Level
ERIC Educational Resources Information Center
Randler, Christoph
2009-01-01
Animal species identification is often emphasized as a basic prerequisite for an understanding of ecology because ecological interactions are based on interactions between species at least as it is taught on the school level. Therefore, training identification skills or using identification books seems a worthwhile task in biology education, and…
Organizational Identification and Social Motivation: A Field Descriptive Study in Two Organizations.
ERIC Educational Resources Information Center
Barge, J. Kevin
A study examined the relationships between leadership conversation and its impact upon organizational members' levels of organizational identification and behavior. It was hypothesized (1) that effective leader conversation would be associated with higher levels of role, means, goal and overall organizational identification, and (2) that…
Impact of PECS tablet computer app on receptive identification of pictures given a verbal stimulus.
Ganz, Jennifer B; Hong, Ee Rea; Goodwyn, Fara; Kite, Elizabeth; Gilliland, Whitney
2015-04-01
The purpose of this brief report was to determine the effect on receptive identification of photos of a tablet computer-based augmentative and alternative communication (AAC) system with voice output. A multiple baseline single-case experimental design across vocabulary words was implemented. One participant, a preschool-aged boy with autism and little intelligible verbal language, was included in the study. Although a functional relation between the intervention and the dependent variable was not established, the intervention did appear to result in mild improvement for two of the three vocabulary words selected. The authors recommend further investigations of the collateral impacts of AAC on skills other than expressive language.
ProteoCloud: a full-featured open source proteomics cloud computing pipeline.
Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart
2013-08-02
We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy-to-use, cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. Copyright © 2012 Elsevier B.V. All rights reserved.
Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks
NASA Astrophysics Data System (ADS)
Kyo, Koki
Recently, in the field of human-computer interaction, a model containing a systematic factor and a human factor, called the SH-model, has been proposed to evaluate the performance of computer input devices. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly proposed models work well.
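The Box-Cox transformation underlying the proposed models can be sketched as follows. This is an illustration under assumptions, not the SH-model itself: the simulated pointing-task movement times follow an invented power-law learning curve with multiplicative noise, which makes the raw data right-skewed.

```python
import numpy as np
from scipy import stats

# Hypothetical movement times (seconds) from 200 repeated pointing trials:
# a power-law learning curve with multiplicative lognormal noise.
rng = np.random.default_rng(1)
trial = np.arange(1, 201)
times = 1.5 * trial ** -0.15 * rng.lognormal(sigma=0.3, size=trial.size)

# Box-Cox transform y -> (y**lam - 1)/lam (log for lam = 0), with the
# power lam chosen by maximum likelihood to make the data near-Gaussian.
transformed, lam = stats.boxcox(times)

skew_before = stats.skew(times)
skew_after = stats.skew(transformed)
print(abs(skew_after) < abs(skew_before))  # transform reduces skewness
```

After such a transform, Gaussian-likelihood machinery (here, smoothness priors on the learning-effect parameters) becomes far more defensible than on the raw skewed response times.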
Computer game as a tool for training the identification of phonemic length.
Pennala, Riitta; Richardson, Ulla; Ylinen, Sari; Lyytinen, Heikki; Martin, Maisa
2014-12-01
Computer-assisted training of Finnish phonemic length was conducted with 7-year-old Russian-speaking second-language learners of Finnish. Phonemic length plays a different role in these two languages. The training included game activities with two- and three-syllable word and pseudo-word minimal pairs with prototypical vowel durations. The lowest accuracy scores were recorded for two-syllable words. Accuracy scores were higher for minimal pairs with larger rather than smaller differences in duration, and lower for long durations than for short durations. In two of the children, the ability to identify the quantity degree generalized to the stimuli used in the identification test. Ideas for improving the game are introduced.
NASA Technical Reports Server (NTRS)
1980-01-01
The current system and subsystem used by the Identification Division are described. System constraints that dictate the system environment are discussed and boundaries within which solutions must be found are described. The functional requirements were related to the performance requirements. These performance requirements were then related to their applicable subsystems. The flow of data, documents, or other pieces of information from one subsystem to another or from the external world into the identification system is described. Requirements and design standards for a computer based system are presented.
Tracking at High Level Trigger in CMS
NASA Astrophysics Data System (ADS)
Tosi, M.
2016-04-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made towards the next data taking in 2015.
Massire, Christian; Buelow, Daelynn R.; Zhang, Sean X.; Lovari, Robert; Matthews, Heather E.; Toleno, Donna M.; Ranken, Raymond R.; Hall, Thomas A.; Metzgar, David; Sampath, Rangarajan; Blyn, Lawrence B.; Ecker, David J.; Gu, Zhengming; Walsh, Thomas J.
2013-01-01
Invasive fungal infections are a significant cause of morbidity and mortality among immunocompromised patients. Early and accurate identification of these pathogens is central to direct therapy and to improve overall outcome. PCR coupled with electrospray ionization mass spectrometry (PCR/ESI-MS) was evaluated as a novel means for identification of fungal pathogens. Using a database grounded by 60 ATCC reference strains, a total of 394 clinical fungal isolates (264 molds and 130 yeasts) were analyzed by PCR/ESI-MS; results were compared to phenotypic identification, and discrepant results were sequence confirmed. PCR/ESI-MS identified 81.4% of molds to either the genus or species level, with concordance rates of 89.7% and 87.4%, respectively, to phenotypic identification. Likewise, PCR/ESI-MS was able to identify 98.4% of yeasts to either the genus or species level, agreeing with 100% of phenotypic results at both the genus and species level. PCR/ESI-MS performed best with Aspergillus and Candida isolates, generating species-level identification in 94.4% and 99.2% of isolates, respectively. PCR/ESI-MS is a promising new technology for broad-range detection and identification of medically important fungal pathogens that cause invasive mycoses. PMID:23303501
Distributing an executable job load file to compute nodes in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gooding, Thomas M.
Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
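The participation logic described above can be sketched with a toy node tree. The tree shape and participating node ids are hypothetical, and this shows only the class-route membership computation, not the link reporting or the broadcast itself.

```python
# Hypothetical tree of compute nodes: node id -> list of child node ids.
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
participating = {3, 6}  # leaves actually running the job

def on_route(node):
    # A node belongs on the class route if it participates in the job
    # itself, or if any descendant does (so the route includes the
    # interior nodes needed to reach every participant from the root).
    return node in participating or any(on_route(c) for c in tree[node])

class_route = sorted(n for n in tree if on_route(n))
print(class_route)  # -> [0, 1, 2, 3, 6]
```

Nodes 4 and 5 are pruned: no descendant of theirs participates, so the load file broadcast never traverses their links.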
Li, Xuyan; Hou, Yanming; Zhang, Li; Zhang, Wenhao; Quan, Chen; Cui, Yuhai; Bian, Shaomin
2014-01-01
MicroRNAs (miRNAs) are a class of endogenous, approximately 21nt in length, non-coding RNA, which mediate the expression of target genes primarily at post-transcriptional levels. miRNAs play critical roles in almost all plant cellular and metabolic processes. Although numerous miRNAs have been identified in the plant kingdom, the miRNAs in blueberry, which is an economically important small fruit crop, still remain totally unknown. In this study, we reported a computational identification of miRNAs and their targets in blueberry. By conducting an EST-based comparative genomics approach, 9 potential vco-miRNAs were discovered from 22,402 blueberry ESTs according to a series of filtering criteria, designated as vco-miR156-5p, vco-miR156-3p, vco-miR1436, vco-miR1522, vco-miR4495, vco-miR5120, vco-miR5658, vco-miR5783, and vco-miR5986. Based on sequence complementarity between miRNA and its target transcript, 34 target ESTs from blueberry and 70 targets from other species were identified for the vco-miRNAs. The targets were found to be involved in transcription, RNA splicing and binding, DNA duplication, signal transduction, transport and trafficking, stress response, as well as synthesis and metabolic process. These findings will greatly contribute to future research in regard to functions and regulatory mechanisms of blueberry miRNAs. PMID:25763692
Fang, Jiansong; Wu, Zengrui; Cai, Chuipu; Wang, Qi; Tang, Yun; Cheng, Feixiong
2017-11-27
Natural products with diverse chemical scaffolds have been recognized as an invaluable source of compounds in drug discovery and development. However, systematic identification of drug targets for natural products at the human proteome level via various experimental assays is highly expensive and time-consuming. In this study, we proposed a systems pharmacology infrastructure to predict new drug targets and anticancer indications of natural products. Specifically, we reconstructed a global drug-target network with 7,314 interactions connecting 751 targets and 2,388 natural products and built predictive network models via a balanced substructure-drug-target network-based inference approach. A high area under receiver operating characteristic curve of 0.96 was yielded for predicting new targets of natural products during cross-validation. The newly predicted targets of natural products (e.g., resveratrol, genistein, and kaempferol) with high scores were validated by various literature studies. We further built the statistical network models for identification of new anticancer indications of natural products through integration of both experimentally validated and computationally predicted drug-target interactions of natural products with known cancer proteins. We showed that the significantly predicted anticancer indications of multiple natural products (e.g., naringenin, disulfiram, and metformin) with new mechanism-of-action were validated by various published experimental evidence. In summary, this study offers powerful computational systems pharmacology approaches and tools for the development of novel targeted cancer therapies by exploiting the polypharmacology of natural products.
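The reported AUC of 0.96 is a standard summary of ranking performance. As an illustrative aside (not the authors' network-based inference code), AUC can be computed directly from raw prediction scores as the probability that a random true target outscores a random non-target:

```python
# Rank-sum formulation of the area under the ROC curve: the fraction of
# (positive, negative) pairs where the positive example scores higher,
# counting ties as half.

def roc_auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(pos_scores) * len(neg_scores))
```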
Alves, Gelio; Yu, Yi-Kuo
2016-09-01
There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. Availability: the source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
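The extreme value statistics referred to here rest on the Gumbel distribution for the maximum of many random match scores. A minimal sketch of fitting Gumbel parameters to a score sample and converting a peptide score into a tail p-value is below. This is not the RAId algorithm itself, whose contribution is obtaining accurate parameters without large random-database searches; the method-of-moments fit is purely illustrative:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def gumbel_fit(scores):
    # Method-of-moments estimates for a Gumbel law:
    # beta = s * sqrt(6) / pi,  mu = mean - gamma * beta.
    beta = statistics.stdev(scores) * math.sqrt(6) / math.pi
    mu = statistics.mean(scores) - EULER_GAMMA * beta
    return mu, beta

def tail_pvalue(score, mu, beta):
    # P(S >= score) under the fitted Gumbel distribution.
    return 1.0 - math.exp(-math.exp(-(score - mu) / beta))
```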
Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model
NASA Astrophysics Data System (ADS)
Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr
2017-10-01
Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and, above all, more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures in computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models in computer simulations is often problematic because these models frequently contain parameters (material constants) whose values are difficult to obtain, yet correct parameter values are essential for the model to function properly. One approach that permits a successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks the parameter values of the material model for which the results of the computer simulation best approximate the experimental data. This paper focuses on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model.
Within this paper, the material parameters of the model are identified through the interaction of nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter taking the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading, which may be further used in research involving dynamic and high-speed tensile loading. Based on the obtained results, it can be concluded that this goal has been reached.
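The identification loop described above can be sketched as a least-squares fit of a model parameter to measured data. Here `simulate` is a stand-in toy model and the search is a brute-force grid; in the study the response comes from nonlinear FE computations and the search uses gradient-based and nature-inspired optimizers:

```python
# Hedged sketch of optimization-based inverse parameter identification:
# choose the parameter p whose simulated load-extension response best matches
# the measured curve in the least-squares sense.

def simulate(p, extensions):
    # Placeholder response model (illustrative only, not an FE solver).
    return [p * x for x in extensions]

def objective(p, extensions, measured_loads):
    # Sum of squared residuals between simulation and experiment.
    sim = simulate(p, extensions)
    return sum((s - m) ** 2 for s, m in zip(sim, measured_loads))

def identify(extensions, measured_loads, lo=0.0, hi=10.0, steps=1000):
    # Brute-force grid search over the parameter range.
    candidates = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(candidates, key=lambda p: objective(p, extensions, measured_loads))
```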
SAR exposure from UHF RFID reader in adult, child, pregnant woman, and fetus anatomical models.
Fiocchi, Serena; Markakis, Ioannis A; Ravazzani, Paolo; Samaras, Theodoros
2013-09-01
The spread of radio frequency identification (RFID) devices in ubiquitous applications without their simultaneous exposure assessment could give rise to public concerns about their potential adverse health effects. Among the various RFID system categories, the ultra high frequency (UHF) RFID systems have recently started to be widely used in many applications. This study addresses a computational exposure assessment of the electromagnetic radiation generated by a realistic UHF RFID reader, quantifying the exposure levels in different exposure scenarios and subjects (two adults, four children, and two anatomical models of women 7 and 9 months pregnant). The results of the computations are presented in terms of the whole-body and peak spatial specific absorption rate (SAR) averaged over 10 g of tissue to allow comparison with the basic restrictions of the exposure guidelines. The SAR levels in the adults and children were below 0.02 and 0.8 W/kg in whole-body SAR and maximum peak SAR levels, respectively, for all tested positions of the antenna. On the contrary, exposure of pregnant women and fetuses resulted in maximum peak SAR(10 g) values close to the values suggested by the guidelines (2 W/kg) in some of the exposure scenarios with the antenna positioned in front of the abdomen and with a 100% duty cycle and 1 W radiated power. Copyright © 2013 Wiley Periodicals, Inc.
Three dimensional identification card and applications
NASA Astrophysics Data System (ADS)
Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao
2016-10-01
A three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced successor of the present two-dimensional identification card [1]. A three-dimensional ID card uses three-dimensional optical techniques: the personal image on the card is displayed in three dimensions, so that a three-dimensional personal face can be seen. The card also stores the three-dimensional face information in its embedded electronic chip, which might be recorded using two-channel cameras, and this information can be displayed on a computer as three-dimensional images for personal identification. Three-dimensional ID cards might be one interesting direction for updating the present two-dimensional card in the future. They might be widely used at airport customs and at the entrances of hotels, schools, and universities, as well as for passports, online banking, and the registration of online games.
Person-Centered Emotional Support and Gender Attributions in Computer-Mediated Communication
ERIC Educational Resources Information Center
Spottswood, Erin L.; Walther, Joseph B.; Holmstrom, Amanda J.; Ellison, Nicole B.
2013-01-01
Without physical appearance, identification in computer-mediated communication is relatively ambiguous and may depend on verbal cues such as usernames, content, and/or style. This is important when gender-linked differences exist in the effects of messages, as in emotional support. This study examined gender attribution for online support…
Computer Series, 82. The Application of Expert Systems in the General Chemistry Laboratory.
ERIC Educational Resources Information Center
Settle, Frank A., Jr.
1987-01-01
Describes the construction of expert computer systems using artificial intelligence technology and commercially available software, known as an expert system shell. Provides two applications; a simple one, the identification of seven white substances, and a more complicated one involving the qualitative analysis of six metal ions. (TW)
Identification of Factors That Affect Software Complexity.
ERIC Educational Resources Information Center
Kaiser, Javaid
A survey of computer scientists was conducted to identify factors that affect software complexity. A total of 160 items were selected from the literature to include in a questionnaire sent to 425 individuals who were employees of computer-related businesses in Lawrence and Kansas City. The items were grouped into nine categories called system…
In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
ERIC Educational Resources Information Center
Stokes-Huby, Heather; Vitale, Dale E.
2007-01-01
This exercise integrates the infrared unknown identification ("IR-ID") experiment common to most organic laboratory syllabi with computer molecular modeling. In this modification students are still required to identify unknown compounds from their IR spectra, but must additionally match some of the absorptions with computed frequencies they…
A Simple Computer Application for the Identification of Conifer Genera
ERIC Educational Resources Information Center
Strain, Steven R.; Chmielewski, Jerry G.
2010-01-01
The National Science Education Standards prescribe that an understanding of the importance of classifying organisms be one component of a student's educational experience in the life sciences. The use of a classification scheme to identify organisms is one way of addressing this goal. We describe Conifer ID, a computer application that assists…
24 CFR 5.234 - Requests for information from SWICAs and Federal agencies; restrictions on use.
Code of Federal Regulations, 2013 CFR
2013-04-01
...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers... obtained through computer matching agreements between HUD and a SWICA or Federal agency, or between a PHA... Privacy Act notice is required, as follows: (1) When HUD requests the computer match, the processing...
24 CFR 5.234 - Requests for information from SWICAs and Federal agencies; restrictions on use.
Code of Federal Regulations, 2010 CFR
2010-04-01
...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers... obtained through computer matching agreements between HUD and a SWICA or Federal agency, or between a PHA... Privacy Act notice is required, as follows: (1) When HUD requests the computer match, the processing...
24 CFR 5.234 - Requests for information from SWICAs and Federal agencies; restrictions on use.
Code of Federal Regulations, 2011 CFR
2011-04-01
...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers... obtained through computer matching agreements between HUD and a SWICA or Federal agency, or between a PHA... Privacy Act notice is required, as follows: (1) When HUD requests the computer match, the processing...
24 CFR 5.234 - Requests for information from SWICAs and Federal agencies; restrictions on use.
Code of Federal Regulations, 2012 CFR
2012-04-01
...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers... obtained through computer matching agreements between HUD and a SWICA or Federal agency, or between a PHA... Privacy Act notice is required, as follows: (1) When HUD requests the computer match, the processing...
24 CFR 5.234 - Requests for information from SWICAs and Federal agencies; restrictions on use.
Code of Federal Regulations, 2014 CFR
2014-04-01
...; WAIVERS Disclosure and Verification of Social Security Numbers and Employer Identification Numbers... obtained through computer matching agreements between HUD and a SWICA or Federal agency, or between a PHA... Privacy Act notice is required, as follows: (1) When HUD requests the computer match, the processing...
Difference-Equation/Flow-Graph Circuit Analysis
NASA Technical Reports Server (NTRS)
Mcvey, I. M.
1988-01-01
Numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. Practiced in variety of computer languages on large and small computers; for circuits simple enough, programmable hand calculators used. Although some combinations of circuit elements make numerical solutions diverge, enables quick identification of divergence and correction of circuit models to make solutions converge.
Phantom feet on digital radionuclide images and other scary computer tales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, J.E.; Dworkin, H.J.; Dees, S.M.
1989-09-01
Malfunction of a computer-assisted digital gamma camera is reported. Despite what appeared to be adequate acceptance testing, an error in the system gave rise to switching of images and identification text. A suggestion is made for using a hot marker, which would avoid the potential error of misinterpretation of patient images.
2 Internet-Savvy Students Help Track Down the Hacker of an NCAA Web Site.
ERIC Educational Resources Information Center
Wanat, Thomas
1997-01-01
A Duke University (North Carolina) student witnessing vandalism to the National Collegiate Athletic Association's (NCAA) World Wide Web site and a University of Massachusetts, Amherst student, both studying computer science, have contributed substantially to the identification of a computer hacker destroying the NCAA site. The students' rapid…
Collection and analysis of NASA clean room air samples
NASA Technical Reports Server (NTRS)
Sheldon, L. S.; Keever, J.
1985-01-01
The environment of the HALOE assembly clean room at NASA Langley Research Center is analyzed to determine the background levels of airborne organic compounds. Sampling is accomplished by pumping the clean room air through absorbing cartridges. For volatile organics, cartridges are thermally desorbed and then analyzed by gas chromatography and mass spectrometry; compounds are identified by searching the EPA/NIH data base using the interactive INCOS computer search algorithm. For semivolatile organics, cartridges are solvent extracted and the concentrated extracts are analyzed by gas chromatography with electron capture detection; compound identification is made by matching gas chromatogram retention times with known standards. The detection limits for the semivolatile organics are 0.89 ng/cu m for dioctyl phthalate (DOP) and 1.6 ng/cu m for polychlorinated biphenyls (PCB). The detection limit for volatile organics ranges from 1 to 50 parts per trillion. Only trace quantities of organics are detected; the DOP levels do not exceed 2.5 ng/cu m and the PCB levels do not exceed 454 ng/cu m.
Parallel processing for digital picture comparison
NASA Technical Reports Server (NTRS)
Cheng, H. D.; Kou, L. T.
1987-01-01
In picture processing an important problem is to identify two digital pictures of the same scene taken under different lighting conditions. This kind of problem can be found in remote sensing, satellite signal processing, and related areas. The identification can be done by transforming the gray levels so that the gray level histograms of the two pictures are closely matched. The transformation problem can be solved by using the packing method. The researchers propose a VLSI architecture consisting of m x n processing elements with extensive parallel and pipelining computation capabilities to speed up the transformation, with time complexity O(max(m,n)), where m and n are the numbers of gray levels of the input picture and the reference picture, respectively. Using a uniprocessor and a dynamic programming algorithm, the time complexity would be O(m^3 x n). The algorithm partition problem, an important issue in VLSI design, is discussed. Verification of the proposed architecture is also given.
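The gray-level transformation described above is classical histogram matching: each source gray level is mapped to the reference level with the closest cumulative frequency. A minimal serial sketch follows (the paper's contribution is the parallel VLSI architecture and the packing method, neither of which is shown here):

```python
# Histogram matching via CDF comparison. Histograms are lists of pixel counts
# indexed by gray level; the result maps each source level to a reference level.

def histogram_match(src_hist, ref_hist):
    def cdf(hist):
        total, acc, out = sum(hist), 0, []
        for count in hist:
            acc += count
            out.append(acc / total)
        return out

    src_cdf, ref_cdf = cdf(src_hist), cdf(ref_hist)
    mapping = []
    for c in src_cdf:
        # Pick the reference level whose cumulative frequency is closest.
        j = min(range(len(ref_cdf)), key=lambda k: abs(ref_cdf[k] - c))
        mapping.append(j)
    return mapping
```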
[Systematic review on the physical activity level and nutritional status of Brazilian children].
Graziele Bento, Gisele; Cascaes da Silva, Franciele; Gonçalves, Elizandra; Domingos Dos Santos, Patrícia; da Silva, Rudney
2016-08-01
Objective To systematically review the literature on the prevalence and the factors associated with the physical activity level and nutritional status of Brazilian children. Methods The electronic databases MEDLINE (via PubMed), SciELO, SCOPUS, and Web of Science were selected. The search strategy included the descriptors proposed in the Medical Subject Headings (MeSH): "Motor Activity", "Activities", "Nutritional Status", "Overweight", "Obesity", "Body Mass Index", "Child", "Brazil". Results The search allowed the identification of 141 articles, of which 16 studies were considered potentially relevant and were included in the review. Conclusions Studies about the nutritional status and physical activity levels of Brazilian children are still scarce, but work in this area has increased in recent years, especially studies using cross-sectional designs and questionnaires to measure physical activity; BMI is still widely used for nutritional status. Furthermore, studies that analyzed the amount of time devoted to sedentary behaviors such as watching TV, playing video games, and using the computer found that these activities took more than two hours every day.
Blind quantum computation with identity authentication
NASA Astrophysics Data System (ADS)
Li, Qin; Li, Zhulin; Chan, Wai Hong; Zhang, Shengyu; Liu, Chengdong
2018-04-01
Blind quantum computation (BQC) allows a client with relatively few quantum resources or poor quantum technologies to delegate his computational problem to a quantum server such that the client's input, output, and algorithm are kept private. However, all existing BQC protocols focus on correctness verification of quantum computation but neglect authentication of participants' identities, which can lead to man-in-the-middle attacks or denial-of-service attacks. In this work, we use quantum identification to counter these two kinds of attacks in BQC, which we call QI-BQC. We propose two QI-BQC protocols based on a typical single-server BQC protocol and a double-server BQC protocol. The two protocols can ensure both data integrity and mutual identification between participants with the help of a third trusted party (TTP). In addition, an unjammable public channel between a client and a server, which is indispensable in previous BQC protocols, is unnecessary, although such a channel is required between the TTP and each participant at some instant. Furthermore, the method used to achieve identity verification in the presented protocols is general and can be applied to other similar BQC protocols.
NASA Technical Reports Server (NTRS)
Parks, W. L.; Sewell, J. I. (Principal Investigator); Hilty, J. W.; Rennie, J. C.
1972-01-01
The author has identified the following significant results. A significant finding is the identification and delineation of a large soil association in Obion County, West Tennessee. These data are now being processed through the scanner and computer and will be included in the next report along with pictures of the printout and imagery. Channel 7 appears to provide the imagery most useful for distinguishing soil differences. Soil types have been identified through the use of aircraft imagery. However, a soil association map appears to be the best that space imagery will provide. The exception to this will be large areas of a uniform soil type, as occur in the Great Plains.
Functional Evolution of a cis-Regulatory Module
Palsson, Arnar; Alekseeva, Elena; Bergman, Casey M; Nathan, Janaki; Kreitman, Martin
2005-01-01
Lack of knowledge about how regulatory regions evolve in relation to their structure–function may limit the utility of comparative sequence analysis in deciphering cis-regulatory sequences. To address this we applied reverse genetics to carry out a functional genetic complementation analysis of a eukaryotic cis-regulatory module—the even-skipped stripe 2 enhancer—from four Drosophila species. The evolution of this enhancer is non-clock-like, with important functional differences between closely related species and functional convergence between distantly related species. Functional divergence is attributable to differences in activation levels rather than spatiotemporal control of gene expression. Our findings have implications for understanding enhancer structure–function, mechanisms of speciation and computational identification of regulatory modules. PMID:15757364
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gooding, Thomas M.
Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
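The participation logic described in this record can be sketched as a recursive walk over the compute-node tree: a node joins the class route if it participates in the job or if any descendant does, since it must then relay data toward its parent. The data structures below are hypothetical; the patent's actual node-to-node messaging and broadcast mechanics are not shown:

```python
# Determine which nodes belong on the class route for a job.

def build_class_route(children, participating, root=0):
    """children: dict mapping node -> list of child nodes.
    participating: set of node ids running the job."""
    route = set()

    def visit(node):
        in_subtree = node in participating
        for child in children.get(node, []):
            if visit(child):
                in_subtree = True  # a descendant participates
        if in_subtree:
            route.add(node)  # this node relays/receives the load file
        return in_subtree

    visit(root)
    return route
```

The executable load file would then be broadcast from the root along exactly the nodes in the returned set.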
Statistical use of argonaute expression and RISC assembly in microRNA target identification.
Stanhope, Stephen A; Sengupta, Srikumar; den Boon, Johan; Ahlquist, Paul; Newton, Michael A
2009-09-01
MicroRNAs (miRNAs) posttranscriptionally regulate targeted messenger RNAs (mRNAs) by inducing cleavage or otherwise repressing their translation. We address the problem of detecting m/miRNA targeting relationships in Homo sapiens from microarray data by developing statistical models that are motivated by the biological mechanisms used by miRNAs. The focus of our modeling is the construction, activity, and mediation of RNA-induced silencing complexes (RISCs) competent for targeted mRNA cleavage. We demonstrate that regression models accommodating RISC abundance and controlling for other mediating factors fit the expression profiles of known target pairs substantially better than models based on m/miRNA expressions alone, and lead to verifications of computational target pair predictions that are more sensitive than those based on marginal expression levels. Because our models are fully independent of exogenous results from sequence-based computational methods, they are appropriate for use as either a primary or secondary source of information regarding m/miRNA target pair relationships, especially in conjunction with high-throughput expression studies.
Detection of boron nitride radicals by emission spectroscopy in a laser-induced plasma
NASA Astrophysics Data System (ADS)
Dutouquet, C.; Acquaviva, S.; Hermann, J.
2001-06-01
Several vibrational bands of boron nitride radicals have been observed in a plasma produced by pulsed-laser ablation of a boron nitride target in low-pressure nitrogen or argon atmospheres. Using time- and space-resolved emission spectroscopic measurements with a high dynamic range, the most abundant isotopic species, ¹¹BN, has been detected. The emission bands in the spectral range from 340 to 380 nm belong to the Δv = -1, 0, +1 sequences of the triplet system (transition A³Π-X³Π). For positive identification, the molecular emission bands were compared with synthetic spectra obtained by computer simulations. Furthermore, ¹⁰BN emission bands have been reproduced by computer simulation using molecular constants deduced from the ¹¹BN constants. Nevertheless, the presence of the less abundant isotopic radical ¹⁰BN was not proved, due to the noise level, which masked the low emission intensity of the ¹⁰BN band heads.
Allen, Felicity; Pon, Allison; Greiner, Russ; Wishart, David
2016-08-02
We describe a tool, competitive fragmentation modeling for electron ionization (CFM-EI) that, given a chemical structure (e.g., in SMILES or InChI format), computationally predicts an electron ionization mass spectrum (EI-MS) (i.e., the type of mass spectrum commonly generated by gas chromatography mass spectrometry). The predicted spectra produced by this tool can be used for putative compound identification, complementing measured spectra in reference databases by expanding the range of compounds able to be considered when availability of measured spectra is limited. The tool extends CFM-ESI, a recently developed method for computational prediction of electrospray tandem mass spectra (ESI-MS/MS), but unlike CFM-ESI, CFM-EI can handle odd-electron ions and isotopes and incorporates an artificial neural network. Tests on EI-MS data from the NIST database demonstrate that CFM-EI is able to model fragmentation likelihoods in low-resolution EI-MS data, producing predicted spectra whose dot product scores are significantly better than full enumeration "bar-code" spectra. CFM-EI also outperformed previously reported results for MetFrag, MOLGEN-MS, and Mass Frontier on one compound identification task. It also outperformed MetFrag in a range of other compound identification tasks involving a much larger data set, containing both derivatized and nonderivatized compounds. While replicate EI-MS measurements of chemical standards are still a more accurate point of comparison, CFM-EI's predictions provide a much-needed alternative when no reference standard is available for measurement. CFM-EI is available at https://sourceforge.net/projects/cfm-id/ for download and http://cfmid.wishartlab.com as a web service.
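The dot product score used above to compare predicted and measured spectra can be sketched as a cosine-style similarity over peak lists. This is a generic library-search similarity; the exact mass binning and intensity weighting used by NIST or in the CFM-EI evaluation may differ:

```python
import math

# Cosine-style dot product between two mass spectra represented as
# {m/z: intensity} dictionaries. Returns a value in [0, 1] for
# non-negative intensities; 1.0 means identical peak patterns.

def dot_product_score(spec_a, spec_b):
    shared = set(spec_a) | set(spec_b)
    num = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in shared)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return num / (norm_a * norm_b) if norm_a and norm_b else 0.0
```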
Majeski, Stephanie A; Steffey, Michele A; Fuller, Mark; Hunt, Geraldine B; Mayhew, Philipp D; Pollard, Rachel E
2017-05-01
Sentinel lymph node mapping can help to direct surgical oncologic staging and metastatic disease detection in patients with complex lymphatic pathways. We hypothesized that indirect computed tomographic lymphography (ICTL) with a water-soluble iodinated contrast agent would successfully map lymphatic pathways of the iliosacral lymphatic center in dogs with anal sac gland carcinoma, providing a potential preoperative method for iliosacral sentinel lymph node identification in dogs. Thirteen adult dogs diagnosed with anal sac gland carcinoma were enrolled in this prospective, pilot study, and ICTL was performed via peritumoral contrast injection with serial caudal abdominal computed tomography scans for iliosacral sentinel lymph node identification. Technical and descriptive details for ICTL were recorded, including patient positioning, total contrast injection volume, timing of contrast visualization, and sentinel lymph nodes and lymphatic pathways identified. Indirect CT lymphography identified lymphatic pathways and sentinel lymph nodes in 12/13 cases (92%). Identified sentinel lymph nodes were ipsilateral to the anal sac gland carcinoma in 8/12 and contralateral to the anal sac gland carcinoma in 4/12 cases. Sacral, internal iliac, and medial iliac lymph nodes were identified as sentinel lymph nodes, and patterns were widely variable. Patient positioning and timing of imaging may impact successful sentinel lymph node identification. Positioning in supported sternal recumbency is recommended. Results indicate that ICTL may be a feasible technique for sentinel lymph node identification in dogs with anal sac gland carcinoma and offer preliminary data to drive further investigation of iliosacral lymphatic metastatic patterns using ICTL and sentinel lymph node biopsy. © 2017 American College of Veterinary Radiology.
Ensemble learning and model averaging for material identification in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.
2017-05-01
In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally by comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that they can be summed over the spectra of any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of the probabilities for all fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or for any level of specificity contained in our library. Probabilities not only tell us which material is most likely; they also tell us how confident we can be in the material's presence. A probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know whether the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
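The probability aggregation described here can be sketched as follows: per-spectrum posteriors (from Bayes' theorem, here with a uniform prior over the library and given likelihoods) are summed over all library spectra belonging to a class. Illustrative only; the paper derives the likelihoods from normal distributions per library spectrum or via model averaging:

```python
# Per-spectrum posteriors and class-level probability aggregation.

def posteriors(likelihoods):
    # Bayes' theorem with a uniform prior: normalize the likelihoods.
    total = sum(likelihoods)
    return [lk / total for lk in likelihoods]

def class_probability(post, classes, target):
    # P(material class) = sum of posteriors of the class's member spectra.
    return sum(p for p, c in zip(post, classes) if c == target)
```

For example, three library spectra labeled fabric, fabric, metal with likelihoods 2, 1, 1 give a fabric class probability of 0.75.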
Electronic Escape Trails for Firefighters
NASA Technical Reports Server (NTRS)
Jorgensen, Charles; Schipper, John; Betts, Bradley
2008-01-01
A proposed wireless-communication and data-processing system would exploit recent advances in radio-frequency identification devices (RFIDs) and software to establish information lifelines between firefighters in a burning building and a fire chief at a control station near but outside the building. The system would enable identification of trails that firefighters and others could follow to escape from the building, including identification of new trails should previously established trails become blocked. The system would include a transceiver unit and a computer at the control station, portable transceiver units carried by the firefighters in the building, and RFID tags that the firefighters would place at multiple locations as they move into and through the building (see figure). Each RFID tag, having a size of the order of a few centimeters, would include at least standard RFID circuitry and possibly sensors for measuring such other relevant environmental parameters as temperature, levels of light and sound, concentration of oxygen, concentrations of hazardous chemicals in smoke, and/or levels of nuclear radiation. The RFID tags would be activated and interrogated by the firefighters' and control-station transceivers. Preferably, RFID tags would be configured to communicate with each other and with the firefighters' units and the control station in an ordered sequence, with built-in redundancy. In a typical scenario, as firefighters moved through a building, they would scatter many RFID tags into smoke-obscured areas by use of a compressed-air gun. Alternatively or in addition, they would mark escape trails by dropping RFID tags at such points of interest as mantraps, hot spots, and trail waypoints. The RFID tags could be of different types, operating at different frequencies to identify their functions, and possibly responding by emitting audible beeps when activated by signals transmitted by transceiver units carried by nearby firefighters.
Huang, Yang; Lowe, Henry J; Klein, Dan; Cucina, Russell J
2005-01-01
The aim of this study was to develop and evaluate a method of extracting noun phrases with full phrase structures from a set of clinical radiology reports using natural language processing (NLP) and to investigate the effects of using the UMLS(R) Specialist Lexicon to improve noun phrase identification within clinical radiology documents. The noun phrase identification (NPI) module is composed of a sentence boundary detector, a statistical natural language parser trained on a nonmedical domain, and a noun phrase (NP) tagger. The NPI module processed a set of 100 XML-represented clinical radiology reports in Health Level 7 (HL7)(R) Clinical Document Architecture (CDA)-compatible format. Computed output was compared with manual markups made by four physicians and one author for maximal (longest) NP and those made by one author for base (simple) NP, respectively. An extended lexicon of biomedical terms was created from the UMLS Specialist Lexicon and used to improve NPI performance. The test set was 50 randomly selected reports. The sentence boundary detector achieved 99.0% precision and 98.6% recall. The overall maximal NPI precision and recall were 78.9% and 81.5% before using the UMLS Specialist Lexicon and 82.1% and 84.6% after. The overall base NPI precision and recall were 88.2% and 86.8% before using the UMLS Specialist Lexicon and 93.1% and 92.6% after, reducing false-positives by 31.1% and false-negatives by 34.3%. The sentence boundary detector performs excellently. After the adaptation using the UMLS Specialist Lexicon, the statistical parser's NPI performance on radiology reports increased to levels comparable to the parser's native performance in its newswire training domain and to that reported by other researchers in the general nonmedical domain.
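The precision, recall, and error-reduction figures reported above follow directly from true-positive, false-positive, and false-negative counts. A small sketch of the arithmetic (the counts used in the test are illustrative, not the study's):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive and
    false-negative counts, as used to score noun-phrase identification
    against manual markups."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

def relative_reduction(before, after):
    """Fractional reduction in an error count, e.g. false positives
    before vs. after adding the UMLS Specialist Lexicon."""
    return (before - after) / before
```

For instance, cutting false positives from 100 to 69 is the kind of roughly 31% reduction the abstract reports for base NP identification.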
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-02
... release of the White Paper, DTC customers requested a tool that would help measure the impact of the... corresponding CUSIP-level identification information at 8:20 a.m.; and (3) make other conforming changes. II... on Uniform Security Identification Procedures (``CUSIP'') level identification information about the...
Pozzulo, Joanna D; Crescini, Charmagne; Panton, Tasha
2008-01-01
The present study examined the effect of mode of target exposure (live versus video) on eyewitness identification accuracy. Adult participants (N=104) were exposed to a staged crime that they witnessed either live or on videotape. Participants were then asked to rate their stress and arousal levels prior to being presented with either a target-present or -absent simultaneous lineup. Across target-present and -absent lineups, mode of target exposure did not have a significant effect on identification accuracy. However, mode of target exposure was found to have a significant effect on stress and arousal levels. Participants who witnessed the crime live had higher levels of stress and arousal than those who were exposed to the videotaped crime. A higher level of arousal was significantly related to poorer identification accuracy for those in the video condition. For participants in the live condition however, stress and arousal had no effect on eyewitness identification accuracy. Implications of these findings in regards to the generalizability of laboratory-based research on eyewitness testimony to real-life crime are discussed.
Communicating River Level Data and Information to Stakeholders with Different Interests
NASA Astrophysics Data System (ADS)
Macleod, K.; Sripada, S.; Ioris, A.; Arts, K.; van der Wal, R.
2012-12-01
There is a need to increase the effectiveness of how river level data are communicated to a range of stakeholders with an interest in river level information, so as to increase the use of data collected by regulatory agencies. Currently, river level data are provided to members of the public through a web site without any formal engagement with river users having taken place. In our research project, called wikiRivers, we are working with the suppliers of river level data as well as the users of these data to explore and improve, from the user perspective, how river level data and information are made available online. We are focusing on the application of natural language generation technology to create textual summaries of river level data tailored for specific interest groups. These tailored textual summaries will be presented alongside other modes of information presentation (e.g. maps and visualizations) with the aim of increasing communication effectiveness. Natural language generation involves developing computational models that use non-linguistic input data to produce natural language as their output. Acquiring accurate system knowledge is a key step in developing such an effective computer software system. In this paper we set out the needs of this project based on discussions with the stakeholder who supplies the river level data and on the current cyberinfrastructure, and report on what we have learned from those individuals and groups who use river level data. The stages in wikiRivers stakeholder identification, engagement and cyberinfrastructure development are: S1, interviews with collectors and suppliers of river level data; S2, river level data stakeholder analysis, including analysis of stakeholders' interests in individual river networks in Scotland and what they require from the cyberinfrastructure; and S3-S5, iterative development and testing of the cyberinfrastructure and modelling of river level data with domain and stakeholder knowledge.
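As a toy illustration of the data-to-text idea (non-linguistic river level readings in, a tailored sentence out), here is a minimal template-based sketch. The thresholds, wording, and river name are invented for illustration and are not the wikiRivers system's actual rules:

```python
def summarize_river(name, levels_m, flood_level_m):
    """Minimal data-to-text sketch: turn a short series of river level
    readings (metres) into a one-sentence summary. Trend and status
    thresholds here are illustrative assumptions."""
    latest, first = levels_m[-1], levels_m[0]
    trend = ("rising" if latest > first + 0.05
             else "falling" if latest < first - 0.05 else "steady")
    status = "above" if latest >= flood_level_m else "below"
    return (f"The {name} is {trend} at {latest:.2f} m, "
            f"{status} the flood warning level of {flood_level_m:.2f} m.")
```

A real system would vary the template by interest group, e.g. anglers versus flood wardens, which is precisely the tailoring the project investigates.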
Modeling and Parameter Estimation of Spacecraft Fuel Slosh with Diaphragms Using Pendulum Analogs
NASA Technical Reports Server (NTRS)
Chatman, Yadira; Gangadharan, Sathya; Schlee, Keith; Ristow, James; Suderman, James; Walker, Charles; Hubert, Carl
2007-01-01
Prediction and control of liquid slosh in moving containers is an important consideration in the design of spacecraft and launch vehicle control systems. Even with modern computing systems, CFD type simulations are not fast enough to allow for large scale Monte Carlo analyses of spacecraft and launch vehicle dynamic behavior with slosh included. It is still desirable to use some type of simplified mechanical analog for the slosh to shorten computation time. Analytic determination of the slosh analog parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices such as elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the hand-derived equations of motion for the mechanical analog are evaluated and their results compared with the experimental results. This paper will describe efforts by the university component of a team composed of NASA's Launch Services Program, Embry-Riddle Aeronautical University, Southwest Research Institute and Hubert Astronautics to improve the accuracy and efficiency of modeling techniques used to predict these types of motions. Of particular interest is the effect of diaphragms and bladders on the slosh dynamics and how best to model these devices. The previous research was an effort to automate the process of slosh model parameter identification using a MATLAB/SimMechanics-based computer simulation. These results are the first step in applying the same computer estimation to a full-size tank and vehicle propulsion system. The introduction of diaphragms to this experimental set-up will aid in a better and more complete prediction of fuel slosh characteristics and behavior.
Automating the parameter identification process will save time and thus allow earlier identification of potential vehicle performance problems.
Venlet, Jeroen; Piers, Sebastiaan R D; Kapel, Gijsbert F L; de Riva, Marta; Pauli, Philippe F G; van der Geest, Rob J; Zeppenfeld, Katja
2017-08-01
Low endocardial unipolar voltage (UV) at sites with normal bipolar voltage (BV) may indicate epicardial scar. Currently applied UV cutoff values are based on studies that lacked epicardial fat information. This study aimed to define endocardial UV cutoff values using computed tomography-derived fat information and to analyze their clinical value for right ventricular substrate delineation. Thirty-three patients (50±14 years; 79% men) underwent combined endocardial-epicardial right ventricular electroanatomical mapping and ablation of right ventricular scar-related ventricular tachycardia with computed tomographic image integration, including computed tomography-derived fat thickness. Of 6889 endocardial-epicardial mapping point pairs, 547 (8%) pairs with distance <10 mm and fat thickness <1.0 mm were analyzed for voltage and abnormal (fragmented/late potential) electrogram characteristics. At sites with endocardial BV >1.50 mV, the optimal endocardial UV cutoff for identification of epicardial BV <1.50 mV was 3.9 mV (area under the curve, 0.75; sensitivity, 60%; specificity, 79%) and cutoff for identification of abnormal epicardial electrogram was 3.7 mV (area under the curve, 0.88; sensitivity, 100%; specificity, 67%). The majority of abnormal electrograms (130 of 151) were associated with transmural scar. Eighty-six percent of abnormal epicardial electrograms had corresponding endocardial sites with BV <1.50 mV, and the remaining could be identified by corresponding low endocardial UV <3.7 mV. For identification of epicardial right ventricular scar, an endocardial UV cutoff value of 3.9 mV is more accurate than previously reported cutoff values. Although the majority of epicardial abnormal electrograms are associated with transmural scar with low endocardial BV, the additional use of endocardial UV at normal BV sites improves the diagnostic accuracy resulting in identification of all epicardial abnormal electrograms at sites with <1.0 mm fat. 
© 2017 American Heart Association, Inc.
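The cutoff-selection step described above (an optimal voltage threshold with an associated sensitivity and specificity) can be illustrated generically by maximizing Youden's J statistic over candidate cutoffs. This sketch treats values below the cutoff as test-positive, since low unipolar voltage suggests epicardial scar; it is a generic illustration, not the study's actual ROC procedure, and the voltages in the test are invented:

```python
def best_cutoff(values, is_abnormal):
    """Pick the voltage cutoff maximizing Youden's J
    (sensitivity + specificity - 1), treating values AT OR BELOW the
    cutoff as test-positive."""
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):
        tp = sum(1 for v, a in zip(values, is_abnormal) if a and v <= c)
        fn = sum(1 for v, a in zip(values, is_abnormal) if a and v > c)
        tn = sum(1 for v, a in zip(values, is_abnormal) if not a and v > c)
        fp = sum(1 for v, a in zip(values, is_abnormal) if not a and v <= c)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```

The reported 3.9 mV and 3.7 mV cutoffs are the output of this kind of threshold sweep over the 547 endocardial-epicardial point pairs.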
NASA Astrophysics Data System (ADS)
Chen, Lei; Huang, Tao; Zhang, Yu-Hang; Jiang, Yang; Zheng, Mingyue; Cai, Yu-Dong
2016-07-01
Tumors are formed by the abnormal proliferation of somatic cells with disordered growth regulation under the influence of tumorigenic factors. Recently, the theory of “cancer drivers” connects tumor initiation with several specific mutations in the so-called cancer driver genes. According to the differentiation of four basic levels between tumor and adjacent normal tissues, the cancer drivers can be divided into the following: (1) Methylation level, (2) microRNA level, (3) mutation level, and (4) mRNA level. In this study, a computational method is proposed to identify novel lung adenocarcinoma drivers based on dysfunctional genes on the methylation, microRNA, mutation and mRNA levels. First, a large network was constructed using protein-protein interactions. Next, we searched all of the shortest paths connecting dysfunctional genes on different levels and extracted new candidate genes lying on these paths. Finally, the obtained candidate genes were filtered by a permutation test and an additional strict selection procedure involving a betweenness ratio and an interaction score. Several candidate genes remained, which are deemed to be related to two different levels of cancer. The analyses confirmed our assertions that some have the potential to contribute to the tumorigenesis process on multiple levels.
Chapter 17. Extension of endogenous primers as a tool to detect micro-RNA targets.
Vatolin, Sergei; Weil, Robert J
2008-01-01
Mammalian cells express a large number of small, noncoding RNAs, including micro-RNAs (miRNAs), that can regulate both the level of a target mRNA and the protein produced by the target mRNA. Recognition of miRNA targets is a complicated process, as a single target mRNA may be regulated by several miRNAs. The potential for combinatorial miRNA-mediated regulation of miRNA targets complicates diagnostic and therapeutic applications of miRNAs. Despite significant progress in understanding the biology of miRNAs and advances in computational predictions of miRNA targets, methods that permit direct physical identification of miRNA-mRNA complexes in eukaryotic cells are still required. Several groups have utilized coimmunoprecipitation of RNA associated with a protein(s) that is part of the RNA silencing macromolecular complex. This chapter describes a detailed but straightforward strategy that identifies miRNA targets based on the assumption that small RNAs base paired with a complementary target mRNA can be used as a primer to synthesize cDNA that may be used for cloning, identification, and functional analysis.
Chiang, Yi-Kun; Kuo, Ching-Chuan; Wu, Yu-Shan; Chen, Chung-Tong; Coumar, Mohane Selvaraj; Wu, Jian-Sung; Hsieh, Hsing-Pang; Chang, Chi-Yen; Jseng, Huan-Yi; Wu, Ming-Hsine; Leou, Jiun-Shyang; Song, Jen-Shin; Chang, Jang-Yang; Lyu, Ping-Chiang; Chao, Yu-Sheng; Wu, Su-Ying
2009-07-23
A pharmacophore model, Hypo1, was built on the basis of 21 training-set indole compounds with varying levels of antiproliferative activity. Hypo1 possessed important chemical features required for the inhibitors and demonstrated good predictive ability for biological activity, with high correlation coefficients of 0.96 and 0.89 for the training-set and test-set compounds, respectively. Further utilization of the Hypo1 pharmacophore model to screen chemical database in silico led to the identification of four compounds with antiproliferative activity. Among these four compounds, 43 showed potent antiproliferative activity against various cancer cell lines with the strongest inhibition on the proliferation of KB cells (IC(50) = 187 nM). Further biological characterization revealed that 43 effectively inhibited tubulin polymerization and significantly induced cell cycle arrest in G(2)-M phase. In addition, 43 also showed the in vivo-like anticancer effects. To our knowledge, 43 is the most potent antiproliferative compound with antitubulin activity discovered by computer-aided drug design. The chemical novelty of 43 and its anticancer activities make this compound worthy of further lead optimization.
Using electronic patient records to inform strategic decision making in primary care.
Mitchell, Elizabeth; Sullivan, Frank; Watt, Graham; Grimshaw, Jeremy M; Donnan, Peter T
2004-01-01
Although absolute risk of death associated with raised blood pressure increases with age, the benefits of treatment are greater in elderly patients. Despite this, the 'rule of halves' particularly applies to this group. We conducted a randomised controlled trial to evaluate different levels of feedback designed to improve identification, treatment and control of elderly hypertensives. Fifty-two general practices were randomly allocated to either: Control (n=19), Audit only feedback (n=16) or Audit plus Strategic feedback, prioritising patients by absolute risk (n=17). Feedback was based on electronic data, annually extracted from practice computer systems. Data were collected for 265,572 patients, 30,345 aged 65-79. The proportion of known hypertensives in each group with BP recorded increased over the study period and the numbers of untreated and uncontrolled patients reduced. There was a significant difference in mean systolic pressure between the Audit plus Strategic and Audit only groups and significantly greater control in the Audit plus Strategic group. Providing patient-specific practice feedback can impact on identification and management of hypertension in the elderly and produce a significant increase in control.
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using simple training data sets consisting of only single faults, without requiring simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring only a small amount of training data and a low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
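The key property, detectors trained only on single-fault data jointly reporting simultaneous faults, can be illustrated with a deliberately tiny stand-in for the ML-SVM classifier: one nearest-centroid detector per fault label, each firing independently. The labels, features, and threshold rule here are invented for illustration and are not the paper's classifier:

```python
def train_label_centroids(samples, labels):
    """One detector per label: the centroid of feature vectors for that
    single fault, plus a 'normal' centroid. A toy stand-in for the
    paper's ML-SVM classifier."""
    sums, counts = {}, {}
    for x, lab in zip(samples, labels):
        s = sums.setdefault(lab, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def diagnose(x, centroids, threshold):
    """Report every fault label whose centroid is closer to x than the
    'normal' centroid is (within a margin). Because each label fires
    independently, a sample combining two faults can trigger both
    detectors even though training used single-fault data only."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    d_normal = dist(x, centroids["normal"])
    return {lab for lab, c in centroids.items()
            if lab != "normal" and dist(x, c) < d_normal + threshold}
```

This is the essence of multi-label diagnosis: the output is a set of labels rather than a single class, so simultaneous faults need not appear in training.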
Particle Identification on an FPGA Accelerated Compute Platform for the LHCb Upgrade
NASA Astrophysics Data System (ADS)
Färber, Christian; Schwemmer, Rainer; Machen, Jonathan; Neufeld, Niko
2017-07-01
The current LHCb readout system will be upgraded in 2018 to a “triggerless” readout of the entire detector at the Large Hadron Collider collision rate of 40 MHz. The corresponding bandwidth from the detector down to the foreseen dedicated computing farm (event filter farm), which acts as the trigger, has to be increased by a factor of almost 100, from currently 500 Gb/s up to 40 Tb/s. The event filter farm will preanalyze the data and will select events on an event-by-event basis. This will reduce the bandwidth down to a manageable size for writing the interesting physics data to tape. The design of such a system is a challenging task, which is why different new technologies are considered and have to be investigated for the different parts of the system. For usage in the event building farm or in the event filter farm (trigger), an experimental field programmable gate array (FPGA) accelerated computing platform is considered and tested. FPGA compute accelerators are used more and more in standard servers, for example for Microsoft Bing search or Baidu search. The platform we use hosts a general Intel CPU and a high-performance FPGA linked via the high-speed Intel QuickPath Interconnect. An accelerator is implemented on the FPGA. It is very likely that these platforms, which are built, in general, for high-performance computing, are also very interesting for the high-energy physics community. First, the performance results of smaller test cases performed at the beginning are presented. Afterward, a part of the existing LHCb RICH particle identification is ported to the experimental FPGA accelerated platform and tested. We have compared the performance of the LHCb RICH particle identification running on a normal CPU with the performance of the same algorithm running on the Xeon-FPGA compute accelerator platform.
Ferreira, L; Sánchez-Juanes, F; Porras-Guerra, I; García-García, M I; García-Sánchez, J E; González-Buitrago, J M; Muñoz-Bellido, J L
2011-04-01
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) allows a fast and reliable bacterial identification from culture plates. Direct analysis of clinical samples may increase its usefulness in samples in which a fast identification of microorganisms can guide empirical treatment, such as blood cultures (BC). Three hundred and thirty BC, reported as positive by the automated BC incubation device, were processed by conventional methods for BC processing, and by a fast method based on direct MALDI-TOF MS. Three hundred and eighteen of them yield growth on culture plates, and 12 were false positive. The MALDI-TOF MS-based method reported that no peaks were found, or the absence of a reliable identification profile, in all these false positive BC. No mixed cultures were found. Among these 318 BC, we isolated 61 Gram-negatives (GN), 239 Gram-positives (GP) and 18 fungi. Microorganism identifications in GN were coincident with conventional identification, at the species level, in 83.3% of BC and, at the genus level, in 96.6%. In GP, identifications were coincident with conventional identification in 31.8% of BC at the species level, and in 64.8% at the genus level. Fungaemia was not reliably detected by MALDI-TOF. In 18 BC positive for Candida species (eight C. albicans, nine C. parapsilosis and one C. tropicalis), no microorganisms were identified at the species level, and only one (5.6%) was detected at the genus level. The results of the present study show that this fast, MALDI-TOF MS-based method allows bacterial identification directly from presumptively positive BC in a short time (<30 min), with a high accuracy, especially when GN bacteria are involved. © 2010 The Authors. Clinical Microbiology and Infection © 2010 European Society of Clinical Microbiology and Infectious Diseases.
Sharma, Manish; Goyal, Deepanshu; Achuth, P V; Acharya, U Rajendra
2018-07-01
Sleep-related disorders diminish quality of life in human beings. Sleep scoring, or sleep staging, is the process of classifying the various sleep stages, which helps assess the quality of sleep. The identification of sleep stages using electroencephalogram (EEG) signals is an arduous task. Just by looking at an EEG signal, one cannot determine the sleep stages precisely. Sleep specialists may make errors in identifying sleep stages by visual inspection. To mitigate erroneous identification and to reduce the burden on doctors, a computer-aided EEG based system can be deployed in hospitals to help identify the sleep stages correctly. Several automated systems based on the analysis of polysomnographic (PSG) signals have been proposed, as have a few sleep stage scoring systems using EEG signals. However, there is still a need for a robust and accurate portable system developed using a large dataset. In this study, we have developed a new single-channel EEG based sleep-stage identification system using a novel set of wavelet-based features extracted from a large EEG dataset. We employed a novel three-band time-frequency localized (TBTFL) wavelet filter bank (FB). The EEG signals are decomposed using three-level wavelet decomposition, yielding seven sub-bands (SBs). This is followed by the computation of discriminating features, namely log-energy (LE), signal-fractal-dimensions (SFD), and signal-sample-entropy (SSE), from all seven SBs. The extracted features are ranked and fed to the support vector machine (SVM) and other supervised learning classifiers. In this study, we have considered five different classification problems (CPs): two-class (CP-1), three-class (CP-2), four-class (CP-3), five-class (CP-4) and six-class (CP-5). The proposed system yielded accuracies of 98.3%, 93.9%, 92.1%, 91.7%, and 91.5% for CP-1 to CP-5, respectively, using the 10-fold cross validation (CV) technique. Copyright © 2018 Elsevier Ltd. All rights reserved.
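As a rough sketch of the decomposition-plus-feature pipeline, here is a plain three-level Haar wavelet decomposition with the log-energy feature. The Haar filter bank is a stand-in for the paper's three-band TBTFL filter bank (which yields seven sub-bands; the two-band Haar decomposition below yields four):

```python
import math

def haar_decompose(signal, levels=3):
    """Multilevel Haar wavelet decomposition: at each level, split the
    running approximation into detail and approximation halves.
    Returns [detail_1, detail_2, detail_3, approx_3] for levels=3."""
    approx, subbands = list(signal), []
    for _ in range(levels):
        detail = [(approx[i] - approx[i + 1]) / math.sqrt(2)
                  for i in range(0, len(approx) - 1, 2)]
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2)
                  for i in range(0, len(approx) - 1, 2)]
        subbands.append(detail)
    subbands.append(approx)
    return subbands

def log_energy(band):
    """Log-energy of one sub-band, one of the three features (alongside
    fractal dimension and sample entropy) fed to the classifiers."""
    return math.log(sum(v * v for v in band) + 1e-12)
```

Because the Haar transform is orthonormal, the total energy across sub-bands equals the energy of the input signal, a useful sanity check when computing per-band features.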
Flight elements: Fault detection and fault management
NASA Technical Reports Server (NTRS)
Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.
1990-01-01
Fault management for an intelligent computational system must be developed using a top down integrated engineering approach. An approach proposed includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real time intelligent fault detection and management system will be accomplished via the implementation of several objectives, which are: Development of fault tolerant/FDIR requirement and specification from a systems level which will carry through from conceptual design through implementation and mission operations; Implementation of monitoring, diagnosis, and reconfiguration at all system levels providing fault isolation and system integration; Optimize system operations to manage degraded system performance through system integration; and Lower development and operations costs through the implementation of an intelligent real time fault detection and fault management system and an information management system.
A Computational Discriminability Analysis on Twin Fingerprints
NASA Astrophysics Data System (ADS)
Liu, Yu; Srihari, Sargur N.
Sharing similar genetic traits makes the investigation of twins an important study in forensics and biometrics. Fingerprints are one of the most commonly found types of forensic evidence. The similarity between twins’ prints is critical to establishing the reliability of fingerprint identification. We present a quantitative analysis of the discriminability of twin fingerprints on a new data set (227 pairs of identical twins and fraternal twins) recently collected from a twin population, using both level 1 and level 2 features. Although the patterns of minutiae among twins are more similar than in the general population, the similarity of fingerprints of twins is significantly different from that between genuine prints of the same finger. Twins’ fingerprints are discriminable, with a 1.5%-1.7% higher EER than for non-twins, and identical twins can be distinguished by fingerprint examination with a slightly higher error rate than fraternal twins.
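The EER metric quoted above can be computed from genuine-match and impostor-match score distributions by sweeping a threshold until the false reject and false accept rates meet. A generic sketch (the scores in the test are illustrative, not from the twin data set):

```python
def equal_error_rate(genuine, impostor):
    """Equal error rate from genuine-match and impostor-match similarity
    scores: sweep a decision threshold and return the operating point
    where the false reject rate (FRR) and false accept rate (FAR) are
    closest, reporting their mean as the EER."""
    best_t, best_gap, eer = None, float("inf"), None
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(1 for s in genuine if s < t) / len(genuine)
        far = sum(1 for s in impostor if s >= t) / len(impostor)
        if abs(frr - far) < best_gap:
            best_gap, best_t = abs(frr - far), t
            eer = (frr + far) / 2.0
    return eer, best_t
```

Comparing the EER of twin impostor pairs against non-twin impostor pairs is what yields the 1.5%-1.7% gap reported in the abstract.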
Migrating lumbar facet joint cysts.
Palmieri, Francesco; Cassar-Pullicino, Victor N; Lalam, Radhesh K; Tins, Bernhard J; Tyrrell, Prudencia N M; McCall, Iain W
2006-04-01
The majority of lumbar facet joint cysts (LFJCs) are located in the spinal canal, on the medial aspect of the facet joint with characteristic diagnostic features. When they migrate away from the joint of origin, they cause diagnostic problems. In a 7-year period we examined by computed tomography (CT) and magnetic resonance (MR) imaging five unusual cases of facet joint cysts which migrated from the facet joint of origin. Three LFJCs were identified in the right S1 foramen, one in the right L5-S1 neural foramen and one in the left erector spinae and multifidus muscles between the levels of L2-L4 spinous process. Awareness that spinal lesions identified at MRI and CT could be due to migrating facet joint cyst requires a high level of suspicion. The identification of the appositional contact of the cyst and the facet joint needs to be actively sought in the presence of degenerative facet joints.
Emotional System for Military Target Identification
2009-10-01
In earlier work, the authors developed an emotional neural network algorithm [23] and used it to solve a facial recognition problem; in other works [24,25], they explored the potential of emotional neural networks in further application areas. Techniques proven in other domains, such as security (facial recognition) and medical (blood cell identification), can also be used efficiently for military target identification. [24] Khashman, A. (2009). Application of an emotional neural network to facial recognition. Neural Computing and Applications, 18(4), 309-320. [25] Khashman, A. (2009). Blood cell