Sample records for computer processing means

  1. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the rapid growth of remote sensing image data, traditional segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds an inexpensive, efficient computer cluster that parallelizes the MeanShift segmentation algorithm under the MapReduce model. The approach preserves segmentation quality while improving segmentation speed, and thus better meets real-time requirements. The MapReduce-based parallel MeanShift segmentation algorithm therefore has practical significance and realization value.
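
    The record includes no code; the following is a minimal Python sketch of how one mean shift sweep can be cast in map/reduce style, with multiprocessing standing in for a MapReduce cluster. The Gaussian kernel, bandwidth, and data layout are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def shift_point(args):
        """Map step: move one feature vector toward the kernel-weighted mean of all points."""
        point, data, bandwidth = args
        d2 = np.sum((data - point) ** 2, axis=1)
        weights = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weights
        return weights @ data / weights.sum()          # shifted position of this point

    def mean_shift_sweep(data, bandwidth=1.0, processes=4):
        """One parallel sweep over all points ("map"); the driver collects results ("reduce")."""
        with Pool(processes) as pool:
            shifted = pool.map(shift_point, [(p, data, bandwidth) for p in data])
        return np.vstack(shifted)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pixels = rng.normal(size=(200, 3))             # e.g. per-pixel spectral features
        pixels = mean_shift_sweep(pixels, bandwidth=0.8)
    ```

    In a real MapReduce deployment the image would be tiled across workers and the reduce step would merge converged modes into segments; the sketch only shows the parallelizable shift step.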

  2. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  3. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  4. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  5. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  6. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

    The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary processes involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function together with the mean and variance of the number of overshoots, a frequency distribution for overshoots can be estimated.
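
    As a rough illustration of the simulation step described above (assumptions for this sketch: the stationary Gaussian process is taken to be a first-order autoregressive process, and an overshoot is counted at each upcrossing of a fixed level):

    ```python
    import numpy as np

    def simulate_ar1(n, rho, rng):
        """Stationary Gaussian AR(1) process with unit marginal variance."""
        x = np.empty(n)
        x[0] = rng.normal()
        noise = rng.normal(scale=np.sqrt(1.0 - rho ** 2), size=n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + noise[t]
        return x

    def count_overshoots(x, level):
        """Each upcrossing of `level` starts one overshoot."""
        above = x > level
        return int(np.sum(~above[:-1] & above[1:]))

    rng = np.random.default_rng(1)
    counts = [count_overshoots(simulate_ar1(10_000, rho=0.9, rng=rng), level=1.5)
              for _ in range(100)]
    print("mean overshoots:", np.mean(counts), "variance:", np.var(counts))
    ```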

  7. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and the crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.

  8. Cognitive Approaches for Medicine in Cloud Computing.

    PubMed

    Ogiela, Urszula; Takizawa, Makoto; Ogiela, Lidia

    2018-03-03

    This paper will present the application potential of the cognitive approach to data interpretation, with special reference to medical areas. The possibilities of using the meaning-based approach to data description and analysis will be proposed for data analysis tasks in Cloud Computing. The methods of cognitive data management in Cloud Computing aim to support the processes of protecting data against unauthorised takeover and to enhance the data management processes. The outcome of the proposed tasks will be the definition of algorithms for executing semantic data interpretation processes in secure Cloud Computing. • We propose cognitive methods for data description. • We propose techniques for securing data in Cloud Computing. • The application of cognitive approaches to medicine is described.

  9. Size and emotion averaging: costs of dividing attention after all.

    PubMed

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  10. Genten: Software for Generalized Tensor Decompositions v. 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel

    Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on the decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience and more. The software is designed to take advantage of the parallelism present in emerging computer architectures such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs to enable efficient processing of large tensors.
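
    Genten's own API is not shown in this record; as a generic illustration of the kind of decomposition involved, the sketch below computes a rank-R CP (CANDECOMP/PARAFAC) approximation of a small dense 3-way NumPy tensor by alternating least squares. This is a simplification: Genten itself targets large sparse tensors and parallel hardware.

    ```python
    import numpy as np

    def khatri_rao(B, C):
        """Column-wise Kronecker product of two factor matrices."""
        R = B.shape[1]
        return np.einsum('ir,jr->ijr', B, C).reshape(-1, R)

    def cp_als(X, rank, iters=50, seed=0):
        """Rank-`rank` CP decomposition of a 3-way tensor by alternating least squares."""
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A = rng.standard_normal((I, rank))
        B = rng.standard_normal((J, rank))
        C = rng.standard_normal((K, rank))
        X0 = X.reshape(I, -1)                      # mode-0 unfolding
        X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-1 unfolding
        X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-2 unfolding
        for _ in range(iters):
            A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.random((20, 30, 40))
        A, B, C = cp_als(X, rank=5)
        X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)    # low-rank reconstruction
        print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
    ```

    Each factor-matrix update solves a least-squares problem against a matricized tensor; sparse and parallel implementations differ mainly in how these products are evaluated.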

  11. Pervasive Computing Goes to School

    ERIC Educational Resources Information Center

    Plymale, William O.

    2005-01-01

    In 1991 Mark Weiser introduced the idea of ubiquitous computing: a world in which computers and associated technologies become invisible, and thus indistinguishable from everyday life. This invisible computing is accomplished by means of "embodied virtuality," the process of drawing computers into the physical world. Weiser proposed that…

  12. Computers for symbolic processing

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Lowrie, Matthew B.; Li, Guo-Jie

    1989-01-01

    A detailed survey on the motivations, design, applications, current status, and limitations of computers designed for symbolic processing is provided. Symbolic processing computations are performed at the word, relation, or meaning levels, and the knowledge used in symbolic applications may be fuzzy, uncertain, indeterminate, and ill represented. Various techniques for knowledge representation and processing are discussed from both the designers' and users' points of view. The design and choice of a suitable language for symbolic processing and the mapping of applications into a software architecture are then considered. The process of refining the application requirements into hardware and software architectures is treated, and state-of-the-art sequential and parallel computers designed for symbolic processing are discussed.

  13. 10 CFR 431.92 - Definitions concerning commercial air conditioners and heat pumps.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... expressed in identical units of measurement. Commercial package air-conditioning and heating equipment means... application. Computer Room Air Conditioner means a basic model of commercial package air-conditioning and heating equipment (packaged or split) that is: Used in computer rooms, data processing rooms, or other...

  14. Computation of discharge using the index-velocity method in tidally affected areas

    USGS Publications Warehouse

    Ruhl, Catherine A.; Simpson, Michael R.

    2005-01-01

    Computation of a discharge time series in a tidally affected area is a two-step process. First, the cross-sectional area is computed on the basis of measured water levels, and the mean cross-sectional velocity is computed on the basis of the measured index velocity. Then discharge is calculated as the product of the area and the mean velocity. Daily mean discharge is computed as the daily average of the low-pass filtered discharge. The Sacramento-San Joaquin River Delta and San Francisco Bay, California, are strongly influenced by the tides and are therefore used as an example of how this methodology is applied.
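
    A back-of-the-envelope sketch of the two-step computation described above; the rating functions and coefficients here are hypothetical placeholders, since actual stage-area and index-velocity ratings are developed for each station.

    ```python
    import numpy as np

    def cross_sectional_area(stage):
        """Hypothetical stage-area rating: A(h) in m^2 for water level h in m."""
        return 250.0 + 180.0 * stage

    def mean_velocity(index_velocity):
        """Hypothetical index-velocity rating: mean channel velocity in m/s."""
        return 0.05 + 0.92 * index_velocity

    def discharge(stage, index_velocity):
        """Step 2: Q = A(stage) * V(index velocity); the sign follows the tidal velocity."""
        return cross_sectional_area(stage) * mean_velocity(index_velocity)

    stage = np.array([1.2, 1.4, 1.1])           # measured water levels (m)
    v_index = np.array([0.35, -0.10, 0.22])     # measured index velocities (m/s)
    q = discharge(stage, v_index)               # discharge time series (m^3/s)
    # In practice the daily mean is taken after low-pass filtering the tidal signal.
    print(q.mean())
    ```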

  15. System and method of designing a load bearing layer of an inflatable vessel

    NASA Technical Reports Server (NTRS)

    Spexarth, Gary R. (Inventor)

    2007-01-01

    A computer-implemented method is provided for designing a restraint layer of an inflatable vessel. The restraint layer is inflatable from an initial uninflated configuration to an inflated configuration and is constructed from a plurality of interfacing longitudinal straps and hoop straps. The method involves providing computer processing means (e.g., to receive user inputs, perform calculations, and output results) and utilizing this computer processing means to implement a plurality of subsequent design steps. The computer processing means is utilized to input the load requirements of the inflated restraint layer and to specify an inflated configuration of the restraint layer. This includes specifying a desired design gap between pairs of adjacent longitudinal or hoop straps, whereby the adjacent straps interface with a plurality of transversely extending hoop or longitudinal straps at a plurality of intersections. Furthermore, an initial uninflated configuration of the restraint layer that is inflatable to achieve the specified inflated configuration is determined. This includes calculating a manufacturing gap between pairs of adjacent longitudinal or hoop straps that correspond to the specified desired gap in the inflated configuration of the restraint layer.

  16. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  17. 77 FR 26660 - Guidelines for the Transfer of Excess Computers or Other Technical Equipment Pursuant to Section...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ....usda.gov . SUPPLEMENTARY INFORMATION: A. Background A proposed rule was published in the Federal.... Computers or other technical equipment means central processing units, laptops, desktops, computer mouses...

  18. Real-time holographic surveillance system

    DOEpatents

    Collins, H. Dale; McMakin, Douglas L.; Hall, Thomas E.; Gribble, R. Parks

    1995-01-01

    A holographic surveillance system including means for generating electromagnetic waves; means for transmitting the electromagnetic waves toward a target at a plurality of predetermined positions in space; means for receiving and converting electromagnetic waves reflected from the target to electrical signals at a plurality of predetermined positions in space; means for processing the electrical signals to obtain signals corresponding to a holographic reconstruction of the target; and means for displaying the processed information to determine nature of the target. The means for processing the electrical signals includes means for converting analog signals to digital signals followed by a computer means to apply a backward wave algorithm.

  19. Information processing, computation, and cognition.

    PubMed

    Piccinini, Gualtiero; Scarantino, Andrea

    2011-01-01

    Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both - although others disagree vehemently. Yet different cognitive scientists use 'computation' and 'information processing' to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism, connectionism, and computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates' empirical aspects.

  20. Flexible processing and the design of grammar.

    PubMed

    Sag, Ivan A; Wasow, Thomas

    2015-02-01

    We explore the consequences of letting the incremental and integrative nature of language processing inform the design of competence grammar. What emerges is a view of grammar as a system of local monotonic constraints that provide a direct characterization of the signs (the form-meaning correspondences) of a given language. This "sign-based" conception of grammar has provided precise solutions to the key problems long thought to motivate movement-based analyses, has supported three decades of computational research developing large-scale grammar implementations, and is now beginning to play a role in computational psycholinguistics research that explores the use of underspecification in the incremental computation of partial meanings.

  1. Logical Access Control Mechanisms in Computer Systems.

    ERIC Educational Resources Information Center

    Hsiao, David K.

    The subject of access control mechanisms in computer systems is concerned with effective means to protect the anonymity of private information on the one hand, and to regulate the access to shareable information on the other hand. Effective means for access control may be considered on three levels: memory, process and logical. This report is a…

  2. On Study of Application of Big Data and Cloud Computing Technology in Smart Campus

    NASA Astrophysics Data System (ADS)

    Tang, Zijiao

    2017-12-01

    We live in an era of networks and information, which means we produce and face a large amount of data every day. It is difficult for traditional databases to store, process, and analyze such mass data effectively, and big data technology was therefore born at the right moment. Meanwhile, the development and operation of big data rest on cloud computing, which provides sufficient space and resources for big data technology to process and analyze data. The proposal of smart campus construction aims at improving the informatization of colleges and universities, so it is necessary to combine big data technology and cloud computing technology in the construction of the smart campus, so that the campus database system and the campus management system are integrated rather than isolated, and to serve smart campus construction by integrating, storing, processing, and analyzing mass data.

  3. A method for interactive satellite failure diagnosis: Towards a connectionist solution

    NASA Technical Reports Server (NTRS)

    Bourret, P.; Reggia, James A.

    1989-01-01

    Various kinds of processes which allow one to make a diagnosis are analyzed. The analysis then focuses on one of these processes used for satellite failure diagnosis. This process consists of sending the satellite instructions about system status alterations: to mask the effects of one possible component failure or to look for additional abnormal measurements. A formal model of this process is given. This model is an extension of a previously defined connectionist model which allows computation of ratios between the likelihoods of observed manifestations according to various diagnostic hypotheses. The expected mean value of these likelihood measures for each possible status of the satellite can be computed in a similar way. Therefore, it is possible to select the most appropriate status according to three different purposes: to confirm a hypothesis, to eliminate a hypothesis, or to choose between two hypotheses. Finally, a first connectionist schema for computing these expected mean values is given.

  4. Real-time holographic surveillance system

    DOEpatents

    Collins, H.D.; McMakin, D.L.; Hall, T.E.; Gribble, R.P.

    1995-10-03

    A holographic surveillance system is disclosed including means for generating electromagnetic waves; means for transmitting the electromagnetic waves toward a target at a plurality of predetermined positions in space; means for receiving and converting electromagnetic waves reflected from the target to electrical signals at a plurality of predetermined positions in space; means for processing the electrical signals to obtain signals corresponding to a holographic reconstruction of the target; and means for displaying the processed information to determine nature of the target. The means for processing the electrical signals includes means for converting analog signals to digital signals followed by a computer means to apply a backward wave algorithm. 21 figs.

  5. Using Hand-Held Computers When Conducting National Security Background Interviews: Utility Test Results

    DTIC Science & Technology

    2010-05-01

    Tablet computers resemble ordinary notebook computers but can be set up as a flat display for handwriting by means of a stylus (digital pen). When used...PC accessories, and often strongly resemble notebook computers. However, all tablets can be set up as a flat display for handwriting by means of a...P3: “Depending on how the tablet handles the post-interview process, it would save time over paper.”  P4: “I hoped you were going to say that this

  6. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  7. Case Studies of Auditing in a Computer-Based Systems Environment.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    In response to a growing need for effective and efficient means for auditing computer-based systems, a number of studies dealing primarily with batch-processing type computer operations have been conducted to explore the impact of computers on auditing activities in the Federal Government. This report first presents some statistical data on…

  8. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  9. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  10. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  11. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  12. 45 CFR 310.1 - What definitions apply to this part?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...

  13. Late Bilinguals Share Syntax Unsparingly between L1 and L2: Evidence from Crosslinguistically Similar and Different Constructions

    ERIC Educational Resources Information Center

    Hwang, Heeju; Shin, Jeong-Ah; Hartsuiker, Robert J.

    2018-01-01

    Languages often use different constructions to convey the same meaning. For example, the meaning of a causative construction in English ("Jen had her computer fixed") is conveyed using an active structure in Korean ("Jen-NOM her computer-ACC fixed"), and yet little is known about how bilinguals represent and process such…

  14. Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing

    DTIC Science & Technology

    2006-11-01

    in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and

  15. Computers and Classroom Culture.

    ERIC Educational Resources Information Center

    Schofield, Janet Ward

    This book explores the meaning of computer technology in schools. The book is based on data gathered from a two-year observation of more than 30 different classrooms in an urban high school: geometry classes in which students used artificially intelligent tutors; business classes in which students learned word processing; and computer science…

  16. What Does "Fast" Mean? Understanding the Physical World through Computational Representations

    ERIC Educational Resources Information Center

    Parnafes, Orit

    2007-01-01

    This article concerns the development of conceptual understanding of a physical phenomenon through the use of computational representations. It examines how students make sense of and interpret computational representations, and how their understanding of the represented physical phenomenon develops in this process. Eight studies were conducted,…

  17. Information processing, computation, and cognition

    PubMed Central

    Scarantino, Andrea

    2010-01-01

    Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism, connectionism, and computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects. PMID:22210958

  18. Displaying Computer Simulations Of Physical Phenomena

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1991-01-01

    Paper discusses computer simulation as a means of experiencing and learning to understand physical phenomena. Covers both present simulation capabilities and major advances expected in the near future. Visual, aural, tactile, and kinesthetic effects are used to teach such physical sciences as the dynamics of fluids. Recommends that classrooms in universities, government, and industry be linked to advanced computing centers so that computer simulations can be integrated into the education process.

  19. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  20. Computer Aided Phenomenography: The Role of Leximancer Computer Software in Phenomenographic Investigation

    ERIC Educational Resources Information Center

    Penn-Edwards, Sorrel

    2010-01-01

    The qualitative research methodology of phenomenography has traditionally required a manual sorting and analysis of interview data. In this paper I explore a potential means of streamlining this procedure by considering a computer aided process not previously reported upon. Two methods of lexicological analysis, manual and automatic, were examined…

  1. Beyond the Computer: Reading as a Process of Intellectual Development.

    ERIC Educational Resources Information Center

    Thompson, Mark E.

    With more than 100,000 computers in public schools across the United States, the impact of computer assisted instruction (CAI) on students' reading behavior needs to be evaluated. In reading laboratories, CAI has been found to provide an efficient and highly motivating means of teaching specific educational objectives. Yet, while computer…

  2. Computer Network Operations Methodology

    DTIC Science & Technology

    2004-03-01

    means of their computer information systems. Disrupt - This type of attack focuses on disrupting as “attackers might surreptitiously reprogram enemy...by reprogramming the computers that control distribution within the power grid. A disruption attack introduces disorder and inhibits the effective...between commanders. The use of methodologies is widespread and done subconsciously to assist individuals in decision making. The processes that

  3. Introducing a "Means-End" Approach to Human-Computer Interaction: Why Users Choose Particular Web Sites Over Others.

    ERIC Educational Resources Information Center

    Subramony, Deepak Prem

    Gutman's means-end theory, widely used in market research, identifies three levels of abstraction: attributes, consequences, and values--associated with the use of products, representing the process by which physical attributes of products gain personal meaning for users. The primary methodological manifestation of means-end theory is the…

  4. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  5. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  6. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than for present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of its Level-2 full physics data products. We will explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled highly fault-tolerant computing in order to achieve large-scale processing as well as operational cost savings.

  7. 32 CFR 236.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DEPARTMENT OF DEFENSE (DOD)-DEFENSE INDUSTRIAL BASE (DIB) VOLUNTARY CYBER SECURITY AND INFORMATION ASSURANCE... defense information. (e) Cyber incident means actions taken through the use of computer networks that... residing therein. (f) Cyber intrusion damage assessment means a managed, coordinated process to determine...

  8. 32 CFR 236.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DEPARTMENT OF DEFENSE (DOD)-DEFENSE INDUSTRIAL BASE (DIB) VOLUNTARY CYBER SECURITY AND INFORMATION ASSURANCE... defense information. (e) Cyber incident means actions taken through the use of computer networks that... residing therein. (f) Cyber intrusion damage assessment means a managed, coordinated process to determine...

  9. Why Today's Computers Don't Learn the Way People Do.

    ERIC Educational Resources Information Center

    Clancey, W. J.

    A major error in cognitive science has been to suppose that the meaning of a representation in the mind is known prior to its production. Representations are inherently perceptual--constructed by a perceptual process and given meaning by subsequent perception of them. The person perceiving the representation determines what it means. This premise…

  10. 20 CFR 660.300 - What definitions apply to the regulations for workforce investment systems under title I of WIA?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    .... Literacy means an individual's ability to read, write, and speak in English, and to compute, and solve... award document. Register means the process for collecting information to determine an individual's.... Self certification means an individual's signed attestation that the information he/she submits to...

  11. 20 CFR 660.300 - What definitions apply to the regulations for workforce investment systems under title I of WIA?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    .... Literacy means an individual's ability to read, write, and speak in English, and to compute, and solve... award document. Register means the process for collecting information to determine an individual's.... Self certification means an individual's signed attestation that the information he/she submits to...

  12. 20 CFR 660.300 - What definitions apply to the regulations for workforce investment systems under title I of WIA?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    .... Literacy means an individual's ability to read, write, and speak in English, and to compute, and solve... award document. Register means the process for collecting information to determine an individual's.... Self certification means an individual's signed attestation that the information he/she submits to...

  13. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can produce better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and thus requires huge computing resources that may not be steadily available to researchers or may come at high expense; this has seriously restricted research and application. To address this problem, current parallel methods mostly parallelize the computation in the space and time dimensions, calculating the natural features of a distributed hydrological model grid by grid (unit by unit, basin by basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with high speedup and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with methods adopting distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of computing and storage resources even when those resources are limited, and the computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
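
    As a toy illustration of the distributed-computation idea (not the authors' system): independent sub-basin runoff computations are farmed out to worker processes and then aggregated downstream. The linear-reservoir runoff model and Python's multiprocessing pool are illustrative assumptions standing in for the distributed computing layer described in the abstract.

    ```python
    from multiprocessing import Pool
    import numpy as np

    def simulate_subbasin(args):
        """Hypothetical runoff model for one sub-basin (simple linear reservoir)."""
        rainfall, k = args
        storage, runoff = 0.0, []
        for r in rainfall:
            storage += r
            q = storage / k          # outflow proportional to storage
            storage -= q
            runoff.append(q)
        return np.array(runoff)

    def simulate_basin(subbasin_inputs, processes=4):
        """Run independent sub-basins in parallel, then aggregate them downstream."""
        with Pool(processes) as pool:
            hydrographs = pool.map(simulate_subbasin, subbasin_inputs)
        return np.sum(hydrographs, axis=0)   # simplistic routing: sum at the outlet

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        inputs = [(rng.exponential(2.0, size=365), k) for k in (5.0, 8.0, 12.0)]
        outlet_flow = simulate_basin(inputs)
    ```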

  14. Fog-computing concept usage as means to enhance information and control system reliability

    NASA Astrophysics Data System (ADS)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-05-01

    This paper focuses on the reliability issue of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog-computing is to shift computations to the fog layer of the network and thus to decrease the workload of the communication environment and data processing components. As for ICS, the workload can also be distributed among sensors, actuators, and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the "traditional" ICS architecture and for one using elements of the fog-computing concept. The paper contains some models, selected simulation results, and conclusions about the prospects of fog-computing as a means to enhance ICS reliability.

  15. X-ray tomographic image magnification process, system and apparatus therefor

    DOEpatents

    Kinney, J.H.; Bonse, U.K.; Johnson, Q.C.; Nichols, M.C.; Saroyan, R.A.; Massey, W.N.; Nusshardt, R.

    1993-09-14

    A computerized three-dimensional x-ray tomographic microscopy system is disclosed, comprising: (a) source means for providing a source of parallel x-ray beams, (b) staging means for staging and sequentially rotating a sample to be positioned in the path of the beams, (c) x-ray image magnifier means positioned in the path of the beams downstream from the sample, (d) detecting means for detecting the beams after being passed through and magnified by the image magnifier means, and (e) computing means for analyzing values received from the detecting means, and converting the values into three-dimensional representations. Also disclosed is a process for magnifying an x-ray image, and apparatus therefor. 25 figures.

  16. X-ray tomographic image magnification process, system and apparatus therefor

    DOEpatents

    Kinney, John H.; Bonse, Ulrich K.; Johnson, Quintin C.; Nichols, Monte C.; Saroyan, Ralph A.; Massey, Warren N.; Nusshardt, Rudolph

    1993-01-01

    A computerized three-dimensional x-ray tomographic microscopy system is disclosed, comprising: a) source means for providing a source of parallel x-ray beams, b) staging means for staging and sequentially rotating a sample to be positioned in the path of the beams, c) x-ray image magnifier means positioned in the path of the beams downstream from the sample, d) detecting means for detecting the beams after being passed through and magnified by the image magnifier means, and e) computing means for analyzing values received from the detecting means, and converting the values into three-dimensional representations. Also disclosed is a process for magnifying an x-ray image, and apparatus therefor.

  17. 5 CFR 850.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... graphical image of a handwritten signature, usually created using a special computer input device, such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means... folder across the Government. Electronic retirement and insurance processing system means the new...

  18. 5 CFR 850.103 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... graphical image of a handwritten signature, usually created using a special computer input device, such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means... folder across the Government. Electronic retirement and insurance processing system means the new...

  19. 5 CFR 850.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... graphical image of a handwritten signature, usually created using a special computer input device, such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means... folder across the Government. Electronic retirement and insurance processing system means the new...

  20. 32 CFR 236.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DEPARTMENT OF DEFENSE (DoD)-DEFENSE INDUSTRIAL BASE (DIB) VOLUNTARY CYBER SECURITY AND INFORMATION ASSURANCE... information. (e) Cyber incident means actions taken through the use of computer networks that result in an...) Cyber intrusion damage assessment means a managed, coordinated process to determine the effect on...

  1. Development of Integrated Programs for Aerospace-vehicle design (IPAD): Integrated information processing requirements

    NASA Technical Reports Server (NTRS)

    Southall, J. W.

    1979-01-01

    The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented. A data model is described and is based on the design process of a typical aerospace vehicle. General data management requirements are specified for data storage, retrieval, generation, communication, and maintenance. Information management requirements are specified for a two-component data model. In the general portion, data sets are managed as entities, and in the specific portion, data elements and the relationships between elements are managed by the system, allowing user access to individual elements for the purpose of query. Computer program management requirements are specified for support of a computer program library, control of computer programs, and installation of computer programs into IPAD.

  2. Computational methods to extract meaning from text and advance theories of human cognition.

    PubMed

    McNamara, Danielle S

    2011-01-01

    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA. Copyright © 2010 Cognitive Science Society, Inc.
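
    For readers unfamiliar with the mechanics of LSA, the core pipeline the abstract refers to (a weighted term-document matrix reduced by truncated SVD) can be sketched as follows; the corpus, weighting scheme, and number of dimensions are placeholders, not taken from the article.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "the cat sat on the mat",
        "a dog chased the cat",
        "stock markets fell sharply today",
        "investors sold shares as markets dropped",
    ]

    tfidf = TfidfVectorizer().fit_transform(corpus)       # term-document matrix
    lsa = TruncatedSVD(n_components=2, random_state=0)    # reduced "semantic" space
    doc_vectors = lsa.fit_transform(tfidf)

    # Documents about the same topic end up close together in the reduced space.
    print(cosine_similarity(doc_vectors))
    ```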

  3. Energy--What to Do until the Computer Comes.

    ERIC Educational Resources Information Center

    Johnston, Archie B.

    Drawing from Tallahassee Community College's (TCC's) experiences with energy conservation, this paper offers suggestions for reducing energy costs through computer-controlled systems and other means. After stating the energy problems caused by TCC's multi-zone heating and cooling system, the paper discusses the five-step process by which TCC…

  4. Proceedings of the Annual Conference on "The Role of the Computer in Education" (5th, Arlington Heights, Illinois, February 20-22, 1985).

    ERIC Educational Resources Information Center

    Micro-Ideas, Glenview, IL.

    The 46 papers in this proceedings summarize the work of academic and private groups which seek to provide a means of integrating the utilization of the computer into an established curriculum; descriptions of sample courses are included. The contents include: (1) Four Precollege Computer Curricula: A Symposium; (2) Data Processing Management…

  5. Process for computing geometric perturbations for probabilistic analysis

    DOEpatents

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
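
    A toy illustration of applying such perturbations; the patent's mean-value-coordinate computation of the displacement vectors is not reproduced, and the mesh, displacement field, and random scaling below are assumptions for this sketch.

    ```python
    import numpy as np

    def perturb_nodes(nodes, displacement_vectors, scale):
        """Apply one geometric perturbation: nominal nodes + scale * displacement field."""
        return nodes + scale * displacement_vectors

    rng = np.random.default_rng(0)
    nodes = rng.uniform(size=(100, 3))               # nominal mesh node coordinates (x, y, z)
    disp = rng.normal(scale=0.01, size=(100, 3))     # per-node displacement vectors

    # Sample perturbed geometries for a probabilistic (Monte Carlo) analysis.
    realizations = [perturb_nodes(nodes, disp, s) for s in rng.normal(size=50)]
    ```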

  6. 5 CFR 850.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) ELECTRONIC RETIREMENT PROCESSING General Provisions § 850.103 Definitions. In this part— Agency means an... graphical image of a handwritten signature usually created using a special computer input device (such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means...

  7. A study of the relationship between the performance and dependability of a fault-tolerant computer

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This thesis studies the relationship between performance and dependability by creating a tool (FTAPE) that integrates a high-stress workload generator with fault injection, and by using the tool to evaluate system performance under error conditions. The workloads are composed of processes formed from atomic components that represent CPU, memory, and I/O activity. The fault injector is software-implemented and is capable of injecting faults into any memory-addressable location, including special registers and caches. This tool has been used to study a Tandem Integrity S2 computer. Workloads with varying numbers of processes and varying compositions of CPU, memory, and I/O activity are first characterized in terms of performance. Then faults are injected into these workloads. The results show that as the number of concurrent processes increases, the mean fault latency initially increases due to increased contention for the CPU. However, for even higher numbers of processes (more than three processes), the mean latency decreases because long-latency faults are paged out before they can be activated.

  8. Representing idioms: syntactic and contextual effects on idiom processing.

    PubMed

    Holsinger, Edward

    2013-09-01

    Recent work on the processing of idiomatic expressions argues against the idea that idioms are simply big words. For example, hybrid models of idiom representation, originally investigated in the context of idiom production, propose a priority of literal computation and a principled relationship between the conceptual meaning of an idiom, its literal lemmas, and its syntactic structure. We examined the predictions of the hybrid representation hypothesis in the domain of idiom comprehension. We conducted two experiments to examine the role of syntactic, lexical, and contextual factors in the interpretation of idiomatic expressions. Experiment 1 examines the role of syntactic compatibility and lexical compatibility in the real-time processing of potentially idiomatic strings. Experiment 2 examines the role of contextual information in idiom processing and how context interacts with lexical information during processing. We find evidence that literal computation plays a causal role in the retrieval of idiomatic meaning and that contextual, lexical, and structural information influence the processing of idiomatic strings at early stages during processing, providing support for the hybrid model of idiom representation in the domain of idiom comprehension.

  9. The Impact of Internet Virtual Physics Laboratory Instruction on the Achievement in Physics, Science Process Skills and Computer Attitudes of 10th-Grade Students

    NASA Astrophysics Data System (ADS)

    Yang, Kun-Yuan; Heh, Jia-Sheng

    2007-10-01

    The purpose of this study was to investigate and compare the impact of Internet Virtual Physics Laboratory (IVPL) instruction with traditional laboratory instruction on the physics academic achievement, performance of science process skills, and computer attitudes of tenth-grade students. One hundred and fifty students from four classes at one private senior high school in Taoyuan County, Taiwan, R.O.C. were sampled and divided equally into an experimental group and a control group of 75 students each. The pre-test results indicated that the students' entry-level physics academic achievement, science process skills, and computer attitudes were equivalent for both groups. On the post-test, the experimental group achieved significantly higher mean scores in physics academic achievement and science process skills. There was no significant difference in computer attitudes between the groups. We concluded that the IVPL had the potential to help tenth graders improve their physics academic achievement and science process skills.

  10. Mean-field approaches to the totally asymmetric exclusion process with quenched disorder and large particles

    NASA Astrophysics Data System (ADS)

    Shaw, Leah B.; Sethna, James P.; Lee, Kelvin H.

    2004-08-01

    The process of protein synthesis in biological systems resembles a one-dimensional driven lattice gas in which the particles (ribosomes) have spatial extent, covering more than one lattice site. Realistic, nonuniform gene sequences lead to quenched disorder in the particle hopping rates. We study the totally asymmetric exclusion process with large particles and quenched disorder via several mean-field approaches and compare the mean-field results with Monte Carlo simulations. Mean-field equations obtained from the literature are found to be reasonably effective in describing this system. A numerical technique is developed for computing the particle current rapidly. The mean-field approach is extended to include two-point correlations between adjacent sites. The two-point results are found to match Monte Carlo simulations more closely.
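
    A bare-bones Monte Carlo sketch of the kind of model described (extended particles covering several sites, with quenched random hop rates). For simplicity this version uses a ring geometry rather than the open boundaries relevant to protein synthesis, and all numerical choices are illustrative.

    ```python
    import numpy as np

    def tasep_current(L=300, ell=3, n_particles=30, steps=300_000, seed=0):
        """Monte Carlo estimate of the current for a ring TASEP with particles of size `ell`
        and quenched random hop rates (returned as accepted hops per attempted move)."""
        rng = np.random.default_rng(seed)
        rates = rng.uniform(0.5, 1.5, size=L)            # quenched site disorder
        occupied = np.zeros(L, dtype=bool)
        heads = list(np.arange(n_particles) * (L // n_particles))   # evenly spaced, no overlap
        for h in heads:
            occupied[(h + np.arange(ell)) % L] = True    # each particle covers ell sites
        hops = 0
        for _ in range(steps):
            i = rng.integers(n_particles)                # pick a random particle
            h = heads[i]
            target = (h + ell) % L                       # site just ahead of the particle
            if not occupied[target] and rng.random() < rates[h]:
                occupied[target] = True                  # front advances...
                occupied[h] = False                      # ...tail site is freed
                heads[i] = (h + 1) % L
                hops += 1
        return hops / steps

    print(tasep_current())
    ```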

  11. Examining the architecture of cellular computing through a comparative study with a computer

    PubMed Central

    Wang, Degeng; Gribskov, Michael

    2005-01-01

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed. PMID:16849179

  12. Examining the architecture of cellular computing through a comparative study with a computer.

    PubMed

    Wang, Degeng; Gribskov, Michael

    2005-06-22

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software-hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's "hardware" equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the "bandwidth" of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.

  13. Identification and Description of Alternative Means of Accomplishing IMS Operational Features.

    ERIC Educational Resources Information Center

    Dave, Ashok

    The operational features of feasible alternative configurations for a computer-based instructional management system are identified. Potential alternative means and components of accomplishing these features are briefly described. Included are aspects of data collection, data input, data transmission, data reception, scanning and processing,…

  14. 10 CFR 431.92 - Definitions concerning commercial air conditioners and heat pumps.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... measurement. Commercial package air-conditioning and heating equipment means air-cooled, water-cooled... Conditioner means a basic model of commercial package air-conditioning and heating equipment (packaged or split) that is: Used in computer rooms, data processing rooms, or other information technology cooling...

  15. A computer-controlled scintiscanning system and associated computer graphic techniques for study of regional distribution of blood flow.

    NASA Technical Reports Server (NTRS)

    Coulam, C. M.; Dunnette, W. H.; Wood, E. H.

    1970-01-01

    Two methods whereby a digital computer may be used to regulate a scintiscanning process are discussed from the viewpoint of computer input-output software. The computer's function, in this case, is to govern the data acquisition and storage, and to display the results to the investigator in a meaningful manner, both during and subsequent to the scanning process. Several methods (such as three-dimensional maps, contour plots, and wall-reflection maps) have been developed by means of which the computer can graphically display the data on-line, for real-time monitoring purposes, during the scanning procedure and subsequently for detailed analysis of the data obtained. A computer-governed method for converting scintiscan data recorded over the dorsal or ventral surfaces of the thorax into fractions of pulmonary blood flow traversing the right and left lungs is presented.
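
    A minimal illustrative sketch (synthetic counts, not the original NASA scintiscanning software) of two of the display styles named in the abstract, a contour plot and a three-dimensional map of scan data, using numpy and matplotlib:

```python
# Conceptual sketch: display a simulated 2-D grid of scintillation counts as a
# contour map and as a three-dimensional surface, two of the graphic displays
# mentioned in the abstract. Data are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3-D projection)

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
counts = 500 * np.exp(-((x + 0.4) ** 2 + y ** 2) / 0.1)   # "left lung" hot spot
counts += 350 * np.exp(-((x - 0.4) ** 2 + y ** 2) / 0.1)  # "right lung" hot spot
counts = rng.poisson(counts)                              # counting statistics

fig = plt.figure(figsize=(9, 4))
ax1 = fig.add_subplot(1, 2, 1)
cs = ax1.contour(x, y, counts, levels=8)
ax1.clabel(cs, inline=True, fontsize=7)
ax1.set_title("Contour plot of scan counts")

ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(x, y, counts, cmap="viridis")
ax2.set_title("Three-dimensional map")
plt.tight_layout()
plt.show()
```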

  16. Technical Note: scuda: A software platform for cumulative dose assessment.

    PubMed

    Park, Seyoun; McNutt, Todd; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2016-10-01

    Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (scuda) that can be seamlessly integrated into the clinical workflow. scuda consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.
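
    The dose-accumulation step described above (warping a daily dose grid back to the planning CT with a deformation field and summing) can be sketched as follows; this is an illustration with an assumed displacement-field convention, not the scuda implementation:

```python
# Illustrative sketch only: accumulate a daily dose grid onto the planning-CT
# frame by warping it with a deformation field, using scipy.ndimage for the
# interpolation step.
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(cumulative, daily_dose, deformation):
    """Warp `daily_dose` into the planning frame and add it to `cumulative`.

    deformation: array of shape (3, nz, ny, nx) giving, for every planning-CT
    voxel, the (z, y, x) displacement (in voxels) to the corresponding point in
    the daily image. This convention is a hypothetical choice for the sketch.
    """
    nz, ny, nx = cumulative.shape
    grid = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
    coords = np.stack(grid) + deformation          # sample points in the daily frame
    warped = map_coordinates(daily_dose, coords, order=1, mode="nearest")
    return cumulative + warped

# Toy usage: a zero deformation simply adds the daily dose voxel by voxel.
shape = (8, 16, 16)
cum = np.zeros(shape)
daily = np.full(shape, 2.0)                        # e.g. 2 Gy per fraction
cum = accumulate_dose(cum, daily, np.zeros((3,) + shape))
print(cum.mean())                                  # -> 2.0
```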

  17. Technical Note: SCUDA: A software platform for cumulative dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seyoun; McNutt, Todd; Quon, Harry

Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.

  18. Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices

    PubMed Central

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2015-01-01

We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms offers fast convergence, low computational complexity per iteration, and guaranteed convergence at the same time. For this reason, other definitions of the geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have recently been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and satisfies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
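
    As a concrete reference point, the closed-form geometric mean of two SPD matrices mentioned in the abstract can be computed directly; the AJD-based approximation for larger sets is not reproduced here:

```python
# Sketch of the closed-form geometric mean of two SPD matrices:
# G(A, B) = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
import numpy as np
from scipy.linalg import sqrtm, inv

def spd_geometric_mean(A, B):
    A_half = sqrtm(A)
    A_half_inv = inv(A_half)
    middle = sqrtm(A_half_inv @ B @ A_half_inv)
    G = A_half @ middle @ A_half
    return np.real(G)  # discard tiny imaginary parts from sqrtm round-off

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 50))
Y = rng.standard_normal((5, 50))
A = X @ X.T / 50                      # two sample covariance (SPD) matrices
B = Y @ Y.T / 50
G = spd_geometric_mean(A, B)
# Basic invariance check: G(A, B) should equal G(B, A).
print(np.allclose(G, spd_geometric_mean(B, A), atol=1e-6))
```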

  19. Multiphase, multi-electrode Joule heat computations for glass melter and in situ vitrification simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowery, P.S.; Lessor, D.L.

Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.

  20. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from the digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with acrylamide content of potato chips or cookies. Or, porosity index as an important physical property of breadcrumb can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
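
    A minimal sketch (with an assumed file name and background threshold) of extracting the mean CIE a* value from a food image, the kind of colour feature the abstract correlates with acrylamide content:

```python
# Minimal sketch: compute the mean CIE a* value of a food image after
# converting RGB to CIELAB with scikit-image. File name and background
# threshold are assumptions for illustration.
import numpy as np
from skimage import io, color

def mean_cie_a(path, background_threshold=0.95):
    rgb = io.imread(path)[..., :3] / 255.0          # assume 8-bit RGB, drop alpha
    lab = color.rgb2lab(rgb)                        # L*, a*, b* channels
    # Hypothetical background removal: ignore near-white pixels.
    mask = rgb.mean(axis=-1) < background_threshold
    return lab[..., 1][mask].mean()

if __name__ == "__main__":
    print(f"mean a* = {mean_cie_a('potato_chip.png'):.2f}")
```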

  1. When the mean is not enough: Calculating fixation time distributions in birth-death processes.

    PubMed

    Ashcroft, Peter; Traulsen, Arne; Galla, Tobias

    2015-10-01

    Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
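
    An illustrative way to obtain fixation-time distributions for a simple one-step birth-death (Moran) process is direct stochastic simulation; the sketch below (not the spectral method of the paper) samples absorption times and contrasts the mean with the median:

```python
# Illustrative sketch: sample absorption times of a Moran birth-death process
# with two absorbing states (i = 0 and i = N) and compare the mean and median
# of the conditional fixation times.
import numpy as np

def sample_absorption_time(N=50, i0=1, r=1.1, rng=None):
    """Return (absorbing state, time) for a Moran process with mutant fitness r."""
    rng = rng or np.random.default_rng()
    i, t = i0, 0.0
    while 0 < i < N:
        birth = r * i * (N - i) / (r * i + N - i) / N   # T+ of the Moran process
        death = i * (N - i) / (r * i + N - i) / N       # T- of the Moran process
        total = birth + death
        t += rng.exponential(1.0 / total)               # waiting time (T+/T- used as rates)
        i += 1 if rng.random() < birth / total else -1
    return i, t

rng = np.random.default_rng(2)
times = [t for state, t in (sample_absorption_time(rng=rng) for _ in range(2000))
         if state == 50]                                # keep fixation events only
print(f"mean fixation time   = {np.mean(times):.1f}")
print(f"median fixation time = {np.median(times):.1f}")
```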

  2. A time to search: finding the meaning of variable activation energy.

    PubMed

    Vyazovkin, Sergey

    2016-07-28

    This review deals with the phenomenon of variable activation energy frequently observed when studying the kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has a meaning of a collective parameter linked to the activation energies of individual steps. It is demonstrated that by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.
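
    For readers who want the baseline calculation the review builds on, the sketch below fits synthetic rate constants to the Arrhenius law and recovers a (here constant) activation energy from the slope of ln k versus 1/T; curvature in such a plot is what signals a variable activation energy:

```python
# Worked sketch: with synthetic rate constants k(T) = A * exp(-Ea / (R T)),
# a linear fit of ln k against 1/T recovers the activation energy.
import numpy as np

R = 8.314            # J mol^-1 K^-1
A, Ea = 1e12, 90e3   # assumed pre-exponential factor (s^-1) and Ea (J/mol)

T = np.linspace(320.0, 400.0, 9)
k = A * np.exp(-Ea / (R * T))

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
print(f"recovered Ea   = {-slope * R / 1000:.1f} kJ/mol")   # ~90.0
print(f"recovered ln A = {intercept:.2f}")                  # ~ln(1e12) = 27.6
```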

  3. A method of computer modelling the lithium-ion batteries aging process based on the experimental characteristics

    NASA Astrophysics Data System (ADS)

    Czerepicki, A.; Koniak, M.

    2017-06-01

The paper presents a method of modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, which was built using battery operating characteristics obtained from experiment. This model was implemented in the form of a computer program using a database to store the battery characteristics. The battery aging process is a new, extended functionality of the model. The computer simulation algorithm uses real measurements of battery capacity as a function of the number of battery charge and discharge cycles. The simulation takes into account incomplete charge or discharge cycles, which are characteristic of electrically powered transport. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of selected means of transport.

  4. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  5. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

The paper deals with the insufficient productivity of existing computing means for large-image processing, which do not meet the modern requirements posed by resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchic transformation made it possible to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images allows real-time processing of dynamic images of various sizes.

  6. 26 CFR 1.197-2 - Amortization of goodwill and certain other intangibles.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., process, design, pattern, know-how, format, package design, computer software (as defined in paragraph (c... has the meaning given in section 1253(b)(1) and includes any agreement that provides one of the...-readable code) that is designed to cause a computer to perform a desired function or set of functions, and...

  7. 26 CFR 1.197-2 - Amortization of goodwill and certain other intangibles.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., process, design, pattern, know-how, format, package design, computer software (as defined in paragraph (c... has the meaning given in section 1253(b)(1) and includes any agreement that provides one of the...-readable code) that is designed to cause a computer to perform a desired function or set of functions, and...

  8. CMC Technologies for Teaching Foreign Languages: What's on the Horizon?

    ERIC Educational Resources Information Center

    Lafford, Peter A.; Lafford, Barbara A.

    2005-01-01

    Computer-mediated communication (CMC) technologies have begun to play an increasingly important role in the teaching of foreign/second (L2) languages. Its use in this context is supported by a growing body of CMC research that highlights the importance of the negotiation of meaning and computer-based interaction in the process of second language…

  9. Improved Processing Speed: Online Computer-Based Cognitive Training in Older Adults

    ERIC Educational Resources Information Center

    Simpson, Tamara; Camfield, David; Pipingas, Andrew; Macpherson, Helen; Stough, Con

    2012-01-01

    In an increasingly aging population, a number of adults are concerned about declines in their cognitive abilities. Online computer-based cognitive training programs have been proposed as an accessible means by which the elderly may improve their cognitive abilities; yet, more research is needed in order to assess the efficacy of these programs. In…

  10. Stimulus Value Signals in Ventromedial PFC Reflect the Integration of Attribute Value Signals Computed in Fusiform Gyrus and Posterior Superior Temporal Gyrus

    PubMed Central

    Lim, Seung-Lark; O'Doherty, John P.

    2013-01-01

    We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision. PMID:23678116

  11. Stimulus value signals in ventromedial PFC reflect the integration of attribute value signals computed in fusiform gyrus and posterior superior temporal gyrus.

    PubMed

    Lim, Seung-Lark; O'Doherty, John P; Rangel, Antonio

    2013-05-15

    We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision.

  12. Sparse approximation of currents for statistics on curves and surfaces.

    PubMed

    Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas

    2008-01-01

Computing, processing, and visualizing statistics on shapes such as curves or surfaces is a real challenge with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids both the feature-based approach and the point-correspondence method. This framework has proven powerful for registering brain surfaces or measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, because complexity increases as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.

  13. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
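
    To make the contrast concrete, the sketch below integrates plain (first-order) mean-field coverage equations for a toy two-species surface reaction, the level of description the paper improves upon with cluster mean-field and KMC; the rate constants are arbitrary assumptions:

```python
# Minimal mean-field sketch (not the paper's cluster mean-field or KMC models):
# site-averaged coverage equations for a toy A + B surface reaction, ignoring
# all spatial correlations, integrated with scipy.
import numpy as np
from scipy.integrate import solve_ivp

# Assumed rate constants (arbitrary units).
k_ads_A, k_ads_B, k_des_A, k_des_B, k_rxn = 1.0, 0.8, 0.2, 0.1, 2.0

def mean_field_rhs(t, theta):
    thA, thB = theta
    empty = 1.0 - thA - thB                    # fraction of empty sites
    dthA = k_ads_A * empty - k_des_A * thA - k_rxn * thA * thB
    dthB = k_ads_B * empty - k_des_B * thB - k_rxn * thA * thB
    return [dthA, dthB]

sol = solve_ivp(mean_field_rhs, (0.0, 20.0), [0.0, 0.0])
thA, thB = sol.y[:, -1]
print(f"steady-state coverages: theta_A={thA:.3f}, theta_B={thB:.3f}")
print(f"mean-field turnover frequency: {k_rxn * thA * thB:.3f}")
```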

  14. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
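
    TSPROC has its own scripting language, so the following is only a conceptual Python/pandas sketch of the kinds of operations the abstract lists (flow volumes, arithmetic transforms, annual statistics), not TSPROC syntax:

```python
# Conceptual sketch of TSPROC-style time-series operations using pandas and a
# synthetic daily streamflow record.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("2010-01-01", "2012-12-31", freq="D")
flow = pd.Series(np.exp(rng.normal(1.0, 0.6, len(idx))), index=idx, name="flow_cms")

log_flow = np.log10(flow)                     # basic arithmetic transformation
daily_volume = flow * 86400.0                 # m^3/s -> m^3 per day

annual = pd.DataFrame({
    "annual_volume_m3": daily_volume.groupby(daily_volume.index.year).sum(),
    "mean_flow_cms": flow.groupby(flow.index.year).mean(),
    "q10_low_flow_cms": flow.groupby(flow.index.year).quantile(0.10),  # simple low-flow index
})
print(annual.round(2))
```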

  15. Translational systems biology using an agent-based approach for dynamic knowledge representation: An evolutionary paradigm for biomedical research.

    PubMed

    An, Gary C

    2010-01-01

    The greatest challenge facing the biomedical research community is the effective translation of basic mechanistic knowledge into clinically effective therapeutics. This challenge is most evident in attempts to understand and modulate "systems" processes/disorders, such as sepsis, cancer, and wound healing. Formulating an investigatory strategy for these issues requires the recognition that these are dynamic processes. Representation of the dynamic behavior of biological systems can aid in the investigation of complex pathophysiological processes by augmenting existing discovery procedures by integrating disparate information sources and knowledge. This approach is termed Translational Systems Biology. Focusing on the development of computational models capturing the behavior of mechanistic hypotheses provides a tool that bridges gaps in the understanding of a disease process by visualizing "thought experiments" to fill those gaps. Agent-based modeling is a computational method particularly well suited to the translation of mechanistic knowledge into a computational framework. Utilizing agent-based models as a means of dynamic hypothesis representation will be a vital means of describing, communicating, and integrating community-wide knowledge. The transparent representation of hypotheses in this dynamic fashion can form the basis of "knowledge ecologies," where selection between competing hypotheses will apply an evolutionary paradigm to the development of community knowledge.

  16. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    PubMed

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. 4. Laryngoscope, 2016 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  17. Understanding Counterfactuality: A Review of Experimental Evidence for the Dual Meaning of Counterfactuals

    PubMed Central

    Nieuwland, Mante S.

    2016-01-01

    Abstract Cognitive and linguistic theories of counterfactual language comprehension assume that counterfactuals convey a dual meaning. Subjunctive‐counterfactual conditionals (e.g., ‘If Tom had studied hard, he would have passed the test’) express a supposition while implying the factual state of affairs (Tom has not studied hard and failed). The question of how counterfactual dual meaning plays out during language processing is currently gaining interest in psycholinguistics. Whereas numerous studies using offline measures of language processing consistently support counterfactual dual meaning, evidence coming from online studies is less conclusive. Here, we review the available studies that examine online counterfactual language comprehension through behavioural measurement (self‐paced reading times, eye‐tracking) and neuroimaging (electroencephalography, functional magnetic resonance imaging). While we argue that these studies do not offer direct evidence for the online computation of counterfactual dual meaning, they provide valuable information about the way counterfactual meaning unfolds in time and influences successive information processing. Further advances in research on counterfactual comprehension require more specific predictions about how counterfactual dual meaning impacts incremental sentence processing. PMID:27512408

  18. Portfolio Optimization by Means of Multiple Tandem Certainty-Uncertainty Searches: A Technical Description

    DTIC Science & Technology

    2013-03-15

The research described in this report was conducted as part of a series of previously...the office of a project sponsor at computer 11 and then sent through email from computer 11 to computer 12 over network 14. Sometimes, this file is...operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover

  19. Computational fluid dynamics modelling of hydraulics and sedimentation in process reactors during aeration tank settling.

    PubMed

    Jensen, M D; Ingildsen, P; Rasmussen, M R; Laursen, J

    2006-01-01

Aeration tank settling is a control method allowing settling in the process tank during high hydraulic load. The control method is patented. Aeration tank settling has been applied in several waste water treatment plants using the present design of the process tanks. Some process tank designs have proven to be more effective than others. To improve the design of less effective plants, computational fluid dynamics (CFD) modelling of hydraulics and sedimentation has been applied. This paper discusses the results at one particular plant experiencing problems with partial short-circuiting of the inlet and outlet, which caused a disruption of the sludge blanket at the outlet and thereby reduced the retention of sludge in the process tank. The model has allowed us to establish a clear picture of the problems arising at the plant during aeration tank settling. Secondly, several process tank design changes have been suggested and tested by means of computational fluid dynamics modelling. The most promising design changes have been found and reported.

  20. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.

  1. An Interactive Graphics Program for Investigating Digital Signal Processing.

    ERIC Educational Resources Information Center

    Miller, Billy K.; And Others

    1983-01-01

    Describes development of an interactive computer graphics program for use in teaching digital signal processing. The program allows students to interactively configure digital systems on a monitor display and observe their system's performance by means of digital plots on the system's outputs. A sample program run is included. (JN)
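
    In the same spirit as the tutorial program described above, a short scipy/matplotlib sketch can configure a digital system and plot its behaviour; the Butterworth filter and test signal here are arbitrary choices for illustration:

```python
# Small sketch: configure a digital system (a 4th-order Butterworth low-pass
# filter), then inspect it through plots of the frequency response and of a
# filtered signal.
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

fs = 1000.0                                   # sampling rate, Hz
b, a = signal.butter(4, 100.0, fs=fs)         # 100 Hz cutoff low-pass filter
w, h = signal.freqz(b, a, fs=fs)              # frequency response

t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
y = signal.lfilter(b, a, x)                   # system output

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(7, 6))
ax1.plot(w, 20 * np.log10(np.abs(h)))
ax1.set(title="Magnitude response", xlabel="Hz", ylabel="dB")
ax2.plot(t, x, label="input")
ax2.plot(t, y, label="output")
ax2.set(title="Time-domain behaviour", xlabel="s")
ax2.legend()
plt.tight_layout()
plt.show()
```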

  2. Fission properties of superheavy nuclei for r -process calculations

    NASA Astrophysics Data System (ADS)

    Giuliani, Samuel A.; Martínez-Pinedo, Gabriel; Robledo, Luis M.

    2018-03-01

    We computed a new set of static fission properties suited for r -process calculations. The potential energy surfaces and collective inertias of 3640 nuclei in the superheavy region are obtained from self-consistent mean-field calculations using the Barcelona-Catania-Paris-Madrid energy density functional. The fission path is computed as a function of the quadrupole moment by minimizing the potential energy and exploring octupole and hexadecapole deformations. The spontaneous fission lifetimes are evaluated employing different schemes for the collective inertias and vibrational energy corrections. This allows us to explore the sensitivity of the lifetimes to those quantities together with the collective ground-state energy along the superheavy landscape. We computed neutron-induced stellar reaction rates relevant for r -process nucleosynthesis using the Hauser-Feshbach statistical approach and study the impact of collective inertias. The competition between different reaction channels including neutron-induced rates, spontaneous fission, and α decay is discussed for typical r -process conditions.

  3. Evaluating the Informative Quality of Documents in SGML Format from Judgements by Means of Fuzzy Linguistic Techniques Based on Computing with Words.

    ERIC Educational Resources Information Center

    Herrera-Viedma, Enrique; Peis, Eduardo

    2003-01-01

    Presents a fuzzy evaluation method of SGML documents based on computing with words. Topics include filtering the amount of information available on the Web to assist users in their search processes; document type definitions; linguistic modeling; user-system interaction; and use with XML and other markup languages. (Author/LRW)

  4. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when pseudo recovery points are used were estimated.
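
    A toy Monte Carlo sketch in the same modelling style (exponentially distributed intervals) estimates the mean work lost to a rollback when recovery points and failures both occur at random; this illustrates the modelling approach only, not the paper's exact models:

```python
# Toy model, assumed parameters: recovery points are laid down at exponential
# intervals while the process runs; a failure arrives after an exponential
# time; the rollback distance is the work since the last recovery point.
import numpy as np

def mean_rollback_distance(rp_rate, failure_rate, n_trials=50_000, rng=None):
    rng = rng or np.random.default_rng(4)
    lost = np.empty(n_trials)
    for k in range(n_trials):
        t_fail = rng.exponential(1.0 / failure_rate)   # time of the failure
        t, last_rp = 0.0, 0.0                          # recovery point at t = 0
        while True:
            t += rng.exponential(1.0 / rp_rate)        # next recovery point
            if t >= t_fail:
                break
            last_rp = t
        lost[k] = t_fail - last_rp                     # rollback distance
    return lost.mean()

# For independent exponential streams the analytic value is 1/(rp_rate + failure_rate).
print(f"simulated ~ {mean_rollback_distance(1.0, 0.1):.3f}  (analytic ~ {1/1.1:.3f})")
```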

  5. Large-scale scour of the sea floor and the effect of natural armouring processes, land reclamation Maasvlakte 2, port of Rotterdam

    USGS Publications Warehouse

    Boer, S.; Elias, E.; Aarninkhof, S.; Roelvink, D.; Vellinga, T.

    2007-01-01

Morphological model computations based on uniform (non-graded) sediment revealed an unrealistically strong scour of the sea floor in the immediate vicinity to the west of Maasvlakte 2. By means of a state-of-the-art graded sediment transport model the effect of natural armouring and sorting of bed material on the scour process has been examined. Sensitivity computations confirm that the development of the scour hole is strongly reduced due to the incorporation of armouring processes, suggesting an approximately 30% decrease in terms of erosion area below the -20m depth contour. © 2007 ASCE.

  6. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.

  7. Exploiting multicore compute resources in the CMS experiment

    NASA Astrophysics Data System (ADS)

    Ramírez, J. E.; Pérez-Calero Yzquierdo, A.; Hernández, J. M.; CMS Collaboration

    2016-10-01

    CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in a compute node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and the multicore scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of multicore pilot agents which internally schedule and execute single and multicore payloads. Multicore scheduling and multithreaded processing are currently used in production for online event selection and prompt data reconstruction. More workflows are being adapted to run in multicore mode. This paper presents a review of the experience gained in the deployment and operation of the multicore scheduling and processing system, the current status and future plans.

  8. A computer-based tutorial structure for teaching and applying a complex process

    Treesearch

    Daniel L. Schmoldt; William G Bradshaw

    1991-01-01

    Economic accountability concerns for wildfire prevention planning have led to the development of an ignition management approach to fire problems. The Fire Loss Prevention Planning Process (FLPPP) systematizes fire problem analyses and concomitantly establishes a means for evaluating prescribed prevention programs. However, new users of the FLPPP have experienced...

  9. GEOS 3 data processing for the recovery of geoid undulations and gravity anomalies

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1979-01-01

    The paper discusses the analysis of GEOS 3 altimeter data for the determination of geoid heights and point and mean gravity anomalies. Methods are presented for determining the mean anomalies and mean undulations from the GEOS 3 altimeter data available by the end of September 1977 without having a complete set of precise orbits. The editing of the data is extensive to remove questionable data, although no filtering of the data is carried out. An adjustment process is carried out to eliminate orbit error and altimeter bias. Representative point anomaly values are computed to investigate anomaly behavior across the Bonin Trench and over the Patton seamounts.

  10. Corruption dynamics model

    NASA Astrophysics Data System (ADS)

    Malafeyev, O. A.; Nemnyugin, S. A.; Rylow, D.; Kolpak, E. P.; Awasthi, Achal

    2017-07-01

The corruption dynamics is analyzed by means of a lattice model which is similar to the three-dimensional Ising model. Agents placed at nodes of the corrupt network periodically choose to perform or not to perform the act of corruption, at gain or loss, while making decisions based on the process history. The gain value and its dynamics are defined by means of Markov stochastic process modelling, with parameters established in accordance with the influence of external and individual factors on the agent's gain. The model is formulated algorithmically and is studied by means of computer simulation. Numerical results are obtained which demonstrate the asymptotic behaviour of the corruption network under various conditions.
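
    A rough Ising-like sketch of this kind of lattice simulation (a two-dimensional lattice for brevity, with assumed coupling, external field, and noise parameters; not the authors' exact three-dimensional model):

```python
# Rough sketch: each agent periodically decides to act corruptly (+1) or
# honestly (-1), with a probability that grows with the number of corrupt
# neighbours and an assumed external "pressure" field h.
import numpy as np

rng = np.random.default_rng(5)
L, sweeps = 20, 200
J, h, temperature = 0.8, -0.2, 1.0          # assumed coupling, field, noise level
state = rng.choice([-1, 1], size=(L, L))    # +1 corrupt, -1 honest

for _ in range(sweeps * L * L):             # single-agent updates
    i, j = rng.integers(L, size=2)
    neighbours = (state[(i + 1) % L, j] + state[(i - 1) % L, j]
                  + state[i, (j + 1) % L] + state[i, (j - 1) % L])
    local_field = J * neighbours + h
    # Heat-bath rule: probability of choosing the corrupt state (+1).
    p_corrupt = 1.0 / (1.0 + np.exp(-2.0 * local_field / temperature))
    state[i, j] = 1 if rng.random() < p_corrupt else -1

print(f"fraction of corrupt agents after {sweeps} sweeps: {(state == 1).mean():.2f}")
```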

  11. CIMOSA process classification for business process mapping in non-manufacturing firms: A case study

    NASA Astrophysics Data System (ADS)

    Latiffianti, Effi; Siswanto, Nurhadi; Wiratno, Stefanus Eko; Saputra, Yudha Andrian

    2017-11-01

Business process mapping is one important means of enabling an enterprise to effectively manage the value chain. One widely used approach to classifying business processes for mapping purposes is Computer Integrated Manufacturing System Open Architecture (CIMOSA). CIMOSA was initially designed for enterprises based on Computer Integrated Manufacturing (CIM) systems. This paper aims to analyze the use of the CIMOSA process classification for business process mapping in firms that do not fall within the area of CIM. Three firms from different business areas that have used the CIMOSA process classification were observed: an airline, a marketing and trading firm for oil and gas products, and an industrial estate management firm. The results show that CIMOSA can be used in non-manufacturing firms with some adjustment. The adjustment includes the addition, reduction, or modification of some processes suggested by the CIMOSA process classification, as evidenced by the case studies.

  12. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-11-01

The finite element method has proven to be an invaluable tool for analysis and design of complex, high performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily usable by researchers at NASA Lewis Research Center.

  13. Historical Overview of Data Communication With Analysis of a Selective Repeat Protocol

    DTIC Science & Technology

    1991-03-01

optical fiber have created international and national communications nets. Personal computers have affected, if not where, certainly how we conduct...Telecommunications entails disciplines, means and methodologies to communicate over distances, in effect to transmit voice, video, facsimile, and computer data...having and will continue to have significant effect on the integration of data processing and the telecommunications industry. The high data transmission

  14. Workflow computing. Improving management and efficiency of pathology diagnostic services.

    PubMed

    Buffone, G J; Moreau, D; Beck, J R

    1996-04-01

    Traditionally, information technology in health care has helped practitioners to collect, store, and present information and also to add a degree of automation to simple tasks (instrument interfaces supporting result entry, for example). Thus commercially available information systems do little to support the need to model, execute, monitor, coordinate, and revise the various complex clinical processes required to support health-care delivery. Workflow computing, which is already implemented and improving the efficiency of operations in several nonmedical industries, can address the need to manage complex clinical processes. Workflow computing not only provides a means to define and manage the events, roles, and information integral to health-care delivery but also supports the explicit implementation of policy or rules appropriate to the process. This article explains how workflow computing may be applied to health-care and the inherent advantages of the technology, and it defines workflow system requirements for use in health-care delivery with special reference to diagnostic pathology.

  15. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

Users of ADABAS, a relational-like data base management system, and its data base programming language (NATURAL) are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy to use and flexible means for transferring logic data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.

  16. System of error detection in the manufacture of garments using artificial vision

    NASA Astrophysics Data System (ADS)

    Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.

    2017-12-01

A computer vision system is implemented to detect errors in the cutting stage within the manufacturing process of garments in the textile industry. It provides a solution to errors within the process that cannot easily be detected by an employee, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control is required for manufactured products, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage within the garment manufacturing process, in order to increase the productivity of textile processes by reducing costs.
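
    One simple way to automate the inspection described above is contour comparison against the pattern template; the sketch below uses OpenCV, with the file names and acceptance tolerance as assumptions:

```python
# Hedged sketch: compare the contour of a cut piece against its pattern
# template with OpenCV and flag large shape deviations. Assumes OpenCV 4.x
# and binary-friendly images on a light background.
import cv2

def largest_contour(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

template = largest_contour("pattern_template.png")   # hypothetical file names
piece = largest_contour("cut_piece.png")

# cv2.matchShapes returns 0 for identical shapes; larger values mean more deviation.
score = cv2.matchShapes(template, piece, cv2.CONTOURS_MATCH_I1, 0.0)
print("cutting error detected" if score > 0.05 else "piece within tolerance",
      f"(match score = {score:.4f})")
```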

  17. A Spatiotemporal Clustering Approach to Maritime Domain Awareness

    DTIC Science & Technology

    2013-09-01

Spatiotemporal clustering is the process of grouping...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dang, Liem X.; Vo, Quynh N.; Nilsson, Mikael

We report one of the first simulations using a classical rate theory approach to predict the mechanism of the exchange process between water and aqueous uranyl ions. Using our water and ion-water polarizable force fields and molecular dynamics techniques, we computed the potentials of mean force for the uranyl ion-water pair as a function of pressure at ambient temperature. Subsequently, these simulated potentials of mean force were used to calculate rate constants using transition rate theory; the time-dependent transmission coefficients were also examined using the reactive flux method and Grote-Hynes treatments of the dynamic response of the solvent. The computed activation volumes using transition rate theory and the corrected rate constants are positive; thus the mechanism of this particular water exchange is a dissociative process. We discuss our rate theory results and compare them with previous studies in which non-polarizable force fields were used. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.
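
    The rate-theory step described above can be illustrated numerically: an Eyring transition-state-theory rate is computed from a free-energy barrier (as would be read off a potential of mean force) and then scaled by a transmission coefficient; the barrier and κ values below are placeholders, not the paper's results:

```python
# Worked sketch of the transition-state-theory step:
# k_TST = (k_B T / h) * exp(-dG_barrier / (R T)), then scaled by kappa.
import numpy as np
from scipy.constants import k as k_B, h, R

T = 298.15                     # K
dG_barrier = 35.0e3            # assumed PMF barrier, J/mol
kappa = 0.5                    # assumed transmission coefficient (reactive flux)

k_tst = (k_B * T / h) * np.exp(-dG_barrier / (R * T))   # s^-1
k_corrected = kappa * k_tst

print(f"k_TST         = {k_tst:.3e} s^-1")
print(f"kappa * k_TST = {k_corrected:.3e} s^-1")
```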

  19. Effect and process evaluation of a kindergarten-based, family-involved intervention with a randomized cluster design on sedentary behaviour in 4- to 6- year old European preschool children: The ToyBox-study.

    PubMed

    Latomme, Julie; Cardon, Greet; De Bourdeaudhuij, Ilse; Iotova, Violeta; Koletzko, Berthold; Socha, Piotr; Moreno, Luis; Androutsos, Odysseas; Manios, Yannis; De Craemer, Marieke

    2017-01-01

The present study evaluated the effect and process of the ToyBox-intervention on proxy-reported sedentary behaviours in 4- to 6-year-old preschoolers from six European countries. In total, 2434 preschoolers' parents/primary caregivers (mean age: 4.7±0.4 years, 52.2% boys) filled out a questionnaire, assessing preschoolers' sedentary behaviours (TV/DVD/video viewing, computer/video games use and quiet play) on weekdays and weekend days. Multilevel repeated measures analyses were conducted to measure the intervention effects. Additionally, process evaluation data were included to better understand the intervention effects. Positive intervention effects were found for computer/video games use. In the total sample, the intervention group showed a smaller increase in computer/video games use on weekdays (β = -3.40, p = 0.06; intervention: +5.48 min/day, control: +8.89 min/day) and on weekend days (β = -5.97, p = 0.05; intervention: +9.46 min/day, control: +15.43 min/day) from baseline to follow-up, compared to the control group. Country-specific analyses showed similar effects in Belgium and Bulgaria, while no significant intervention effects were found in the other countries. Process evaluation data showed relatively low teachers' and low parents' process evaluation scores for the sedentary behaviour component of the intervention (mean: 15.6/24, range: 2.5-23.5 and mean: 8.7/17, range: 0-17, respectively). Higher parents' process evaluation scores were related to a larger intervention effect, but higher teachers' process evaluation scores were not. The ToyBox-intervention had a small, positive effect on European preschoolers' computer/video games use on both weekdays and weekend days, but not on TV/DVD/video viewing or quiet play. The lack of larger effects can possibly be due to the fact that parents were only passively involved in the intervention and to the fact that the intervention was too demanding for the teachers. Future interventions targeting preschoolers' behaviours should involve parents more actively in both the development and the implementation of the intervention and, when involving schools, less demanding activities for teachers should be developed. clinicaltrials.gov NCT02116296.

  20. Effect and process evaluation of a kindergarten-based, family-involved intervention with a randomized cluster design on sedentary behaviour in 4- to 6- year old European preschool children: The ToyBox-study

    PubMed Central

    Latomme, Julie; Cardon, Greet; De Bourdeaudhuij, Ilse; Iotova, Violeta; Koletzko, Berthold; Socha, Piotr; Moreno, Luis; Androutsos, Odysseas; Manios, Yannis; De Craemer, Marieke

    2017-01-01

    Background The present study evaluated the effect and process of the ToyBox-intervention on proxy-reported sedentary behaviours in 4- to 6-year-old preschoolers from six European countries. Methods In total, the parents/primary caregivers of 2434 preschoolers (mean age: 4.7±0.4 years, 52.2% boys) filled out a questionnaire assessing the preschoolers’ sedentary behaviours (TV/DVD/video viewing, computer/video games use and quiet play) on weekdays and weekend days. Multilevel repeated measures analyses were conducted to measure the intervention effects. Additionally, process evaluation data were included to better understand the intervention effects. Results Positive intervention effects were found for computer/video games use. In the total sample, the intervention group showed a smaller increase in computer/video games use on weekdays (ß = -3.40, p = 0.06; intervention: +5.48 min/day, control: +8.89 min/day) and on weekend days (ß = -5.97, p = 0.05; intervention: +9.46 min/day, control: +15.43 min/day) from baseline to follow-up, compared to the control group. Country-specific analyses showed similar effects in Belgium and Bulgaria, while no significant intervention effects were found in the other countries. Process evaluation data showed relatively low teachers’ and low parents’ process evaluation scores for the sedentary behaviour component of the intervention (mean: 15.6/24, range: 2.5–23.5 and mean: 8.7/17, range: 0–17, respectively). Higher parents’ process evaluation scores were related to a larger intervention effect, but higher teachers’ process evaluation scores were not. Conclusions The ToyBox-intervention had a small, positive effect on European preschoolers’ computer/video games use on both weekdays and weekend days, but not on TV/DVD/video viewing or quiet play. The lack of larger effects may be due to parents being only passively involved in the intervention and to the intervention being too demanding for the teachers. Future interventions targeting preschoolers' behaviours should involve parents more actively in both the development and the implementation of the intervention and, when involving schools, less demanding activities for teachers should be developed. Trial registration clinicaltrials.gov NCT02116296 PMID:28380053

  1. The possibilities of improvement in the sensitivity of cancer fluorescence diagnostics by computer image processing

    NASA Astrophysics Data System (ADS)

    Ledwon, Aleksandra; Bieda, Robert; Kawczyk-Krupka, Aleksandra; Polanski, Andrzej; Wojciechowski, Konrad; Latos, Wojciech; Sieron-Stoltny, Karolina; Sieron, Aleksander

    2008-02-01

    Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to a specific wavelength of light. The change in fluorescence between normal tissue and tissue progressing to cancer makes it possible to see early cancers and precancerous lesions that are often missed under white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions, and during endoscopy, cystoscopy and bronchoscopy using Xillix ONCOLIFE. Methods: The image function f(x,y): R^2 -> R^3 was transformed from the original RGB color space to a space in which a vector of 46 values is assigned to every point with given xy-coordinates, f(x,y): R^2 -> R^46. By means of a Fisher discriminant, the attribute vector of each analyzed image point was reduced with respect to two classes defined as pathologic areas (foreground) and healthy areas (background). The four highest Fisher coefficients, giving the greatest separation between points of pathologic (foreground) and healthy (background) areas, were chosen. In this way a new function f(x,y): R^2 -> R^4 was created in which the point (x,y) corresponds to the vector (Y, H, a*, c II). In the second step, a classifier was constructed using Gaussian mixtures and expectation-maximisation. This classifier estimates the probability that a selected pixel of the analyzed image is a pathologically changed point (foreground) or a healthy one (background). The obtained map of the probability distribution was presented by means of pseudocolors. Results: Image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspected areas examined by means of fluorescence diagnostics.
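
    The two analysis steps described above (Fisher-criterion feature reduction followed by Gaussian-mixture/EM classification of pixels) can be sketched as follows; the feature vectors are synthetic stand-ins, not ONCOLIFE image data, and scikit-learn's GaussianMixture is used as a generic EM implementation.

```python
# Hedged sketch of the two steps described above: rank per-pixel features with
# a Fisher-type criterion, then classify pixels with a Gaussian-mixture /
# expectation-maximisation model.  Data here are synthetic, not ONCOLIFE images.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n = 2000
# Synthetic 6-feature pixel vectors for "healthy" (0) and "pathologic" (1) areas
healthy    = rng.normal(loc=[0, 0, 0, 0, 0, 0],       scale=1.0, size=(n, 6))
pathologic = rng.normal(loc=[2, 1.5, 0.1, 0, 1.0, 0], scale=1.0, size=(n, 6))
X = np.vstack([healthy, pathologic])
y = np.array([0] * n + [1] * n)

# Fisher criterion per feature: (mu0 - mu1)^2 / (var0 + var1)
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
fisher = (mu0 - mu1) ** 2 / (v0 + v1)
top4 = np.argsort(fisher)[::-1][:4]          # keep the four best features
Xr = X[:, top4]

# Two-component Gaussian mixture fitted by EM; posterior probabilities act as
# the per-pixel "probability of pathology" map described in the abstract.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(Xr)
posterior = gmm.predict_proba(Xr)
print("selected features:", top4, "\nfirst posteriors:\n", posterior[:3].round(3))
```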

  2. Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing

    PubMed Central

    van der Velde, Frank

    2016-01-01

    In situ concept-based computing is based on the notion that conceptual representations in the human brain are “in situ.” In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concept representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired “blackboards.” The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power-limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed BABI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power-limited and in situ concept computing. PMID:27242504

  3. Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing.

    PubMed

    van der Velde, Frank

    2016-01-01

    In situ concept-based computing is based on the notion that conceptual representations in the human brain are "in situ." In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concept representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired "blackboards." The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power-limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed BABI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power-limited and in situ concept computing.

  4. Characterization of rhenium compounds obtained by electrochemical synthesis after aging process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vargas-Uscategui, Alejandro, E-mail: avargasuscat@ing.uchile.cl; Mosquera, Edgar; López-Encarnación, Juan M.

    2014-12-15

    The molecular nature of the aged rhenium compound obtained by means of electrodeposition from an alkaline aqueous electrolyte was determined. Chemical, structural and vibrational experimental characterization of the aged Re compound showed agreement with quantum computations, thereby allowing the unambiguous identification of the Re compound as H(ReO4)H2O. - Graphical abstract: Rhenium oxides were electrodeposited on a copper surface and, after environmental aging, the H(ReO4)H2O compound was formed. The characterization of the synthesized material was made through the comparison of experimental evidence with quantum mechanical computations carried out by means of density functional theory (DFT). - Highlights: • Aged rhenium compound obtained by means of electrodeposition was studied. • The study was made by combining experimental and DFT-computational information. • The aged electrodeposited material is consistent with the H(ReO4)H2O compound.

  5. Representational geometry: integrating cognition, computation, and the brain

    PubMed Central

    Kriegeskorte, Nikolaus; Kievit, Rogier A.

    2013-01-01

    The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. PMID:23876494
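
    A minimal sketch of the representational-distance-matrix (RDM) analysis described above: correlation-distance RDMs are computed for two synthetic response patterns and compared with a rank correlation. The data and sizes are arbitrary placeholders.

```python
# Minimal sketch of representational-distance-matrix (RDM) analysis: compute
# correlation-distance RDMs for two response patterns (e.g., a model and a
# brain region) and compare them with a rank correlation.  Data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_units = 12, 100
brain = rng.normal(size=(n_conditions, n_units))
model = brain @ rng.normal(size=(n_units, n_units)) * 0.1 + rng.normal(
    size=(n_conditions, n_units))

# One RDM per representation: pairwise correlation distance between conditions
rdm_brain = pdist(brain, metric="correlation")
rdm_model = pdist(model, metric="correlation")

# Second-order comparison: how similar are the two representational geometries?
rho, p = spearmanr(rdm_brain, rdm_model)
print(f"RDM rank correlation: rho={rho:.3f}, p={p:.3g}")
```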

  6. Medical education and information and communication technology.

    PubMed

    Houshyari, Asefeh Badiey; Bahadorani, Mahnaz; Tootoonchi, Mina; Gardiner, John Jacob Zucker; Peña, Roberto A; Adibi, Peyman

    2012-01-01

    Information and communication technology (ICT) has brought many changes in medical education and practice in the last couple of decades. Teaching and learning medicine in particular has undergone profound changes due to computer technologies, and medical schools around the world have invested heavily either in new computer technologies or in the process of adapting to this technological revolution. In order to catch up with the rest of the world, developing countries need to research their options in adapting to new computer technologies. This descriptive survey study was designed to assess medical students' computer and Internet skills and their attitude toward ICT. Research findings showed that the mean score of self-perceived computer knowledge for male students in general was greater than for female students. Also, students who had participated in various prior computer workshops, had access to computer, Internet, and e-mail, and frequently checked their e-mail had a higher mean self-perceived knowledge and skill score. Finally, students with a positive attitude toward ICT scored their computer knowledge higher than those who had no opinion. The results confirm that medical schools, particularly in developing countries, need to bring about fundamental changes, such as modifying the curriculum to integrate ICT into medical education, creating the essential infrastructure for ICT use in medical education and practice, and providing structured computer training for faculty and students.

  7. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means (FCM) clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization; the parameter λ adjusts the weight of the pixel's local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time while segmenting images corrupted by different types of noise.
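
    For reference, a plain fuzzy C-means loop (membership update and center update) is sketched below on a synthetic noisy image; it omits the paper's λ-weighted local-information term and multi-objective model and is only a baseline illustration.

```python
# Minimal fuzzy C-means sketch (standard FCM, without the paper's lambda-weighted
# local-information term) applied to pixel intensities of a synthetic noisy image.
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """x: (N, d) data; returns cluster centers and membership matrix (N, c)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Synthetic two-region image corrupted by Gaussian noise
rng = np.random.default_rng(1)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += rng.normal(0, 0.2, img.shape)

centers, u = fuzzy_c_means(img.reshape(-1, 1), c=2)
labels = u.argmax(axis=1).reshape(img.shape)
print("centers:", centers.ravel().round(2))
```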

  8. Experimentally validated computational modeling of organic binder burnout from green ceramic compacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewsuk, K.G.; Cochran, R.J.; Blackwell, B.F.

    The properties and performance of a ceramic component are determined by a combination of the materials from which it was fabricated and how it was processed. Most ceramic components are manufactured by dry pressing a powder/binder system in which the organic binder provides formability and green compact strength. A key step in this manufacturing process is the removal of the binder from the powder compact after pressing. The organic binder is typically removed by a thermal decomposition process in which heating rate, temperature, and time are the key process parameters. Empirical approaches are generally used to design the burnout time-temperature cycle, often resulting in excessive processing times and energy usage, and higher overall manufacturing costs. Ideally, binder burnout should be completed as quickly as possible without damaging the compact, while using a minimum of energy. Process and computational modeling offer one means to achieve this end. The objective of this study is to develop an experimentally validated computer model that can be used to better understand, control, and optimize binder burnout from green ceramic compacts.
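
    One ingredient such a burnout model might contain is a thermal-decomposition kinetics sub-model; the sketch below integrates a first-order Arrhenius rate under a linear heating ramp with illustrative, assumed kinetic parameters (not values from this study).

```python
# Hedged sketch of the kind of sub-model a burnout simulation might use: a
# first-order Arrhenius decomposition of the binder under a linear heating
# ramp, integrated with SciPy.  Kinetic parameters below are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314            # J/(mol K)
A = 1.0e12           # 1/s, assumed pre-exponential factor
Ea = 150e3           # J/mol, assumed activation energy
beta = 5.0 / 60.0    # heating rate: 5 K/min expressed in K/s
T0 = 300.0           # K, starting temperature

def dalpha_dt(t, y):
    """y[0] = remaining binder fraction; temperature follows a linear ramp."""
    T = T0 + beta * t
    return [-A * np.exp(-Ea / (R * T)) * y[0]]

sol = solve_ivp(dalpha_dt, (0.0, 6 * 3600.0), [1.0], max_step=10.0)
T_end_of_burnout = T0 + beta * sol.t[np.argmax(sol.y[0] < 0.01)]
print("binder 99%% removed near %.0f K" % T_end_of_burnout)
```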

  9. SedCT: MATLAB™ tools for standardized and quantitative processing of sediment core computed tomography (CT) data collected using a medical CT scanner

    NASA Astrophysics Data System (ADS)

    Reilly, B. T.; Stoner, J. S.; Wiest, J.

    2017-08-01

    Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
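
    The core idea of a robust down-core HU profile with random pixel sampling can be sketched as follows; the volume here is synthetic, whereas a real workflow would first read and stitch the DICOM slices (for example with pydicom) and convert them to Hounsfield units, as SedCT does.

```python
# Hedged sketch of the core idea behind a down-core HU profile: for each
# down-core slice, randomly sample pixels from a central region of interest
# and take a robust average.  The volume here is synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic CT volume: (depth, rows, cols) in HU, with a denser layer at depth
volume = rng.normal(1100, 40, size=(500, 64, 64))
volume[200:260] += 300                      # a dense (e.g., sandy) interval

n_samples = 500
rows, cols = volume.shape[1:]
r0, r1, c0, c1 = rows // 4, 3 * rows // 4, cols // 4, 3 * cols // 4  # central ROI

profile = np.empty(volume.shape[0])
for z in range(volume.shape[0]):
    rr = rng.integers(r0, r1, n_samples)
    cc = rng.integers(c0, c1, n_samples)
    profile[z] = np.median(volume[z, rr, cc])   # median is robust to cracks/voids

print("HU profile: min %.0f, max %.0f" % (profile.min(), profile.max()))
```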

  10. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.
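
    A hedged, much-simplified analogue of this estimation problem: for a scalar linear system with both process and measurement noise, the parameter can be estimated by maximizing the innovation-form likelihood produced by a Kalman filter. This is a generic textbook construction, not the NASA program itself.

```python
# Illustrative sketch: maximum-likelihood estimation of a parameter of a scalar
# linear system driven by both process and measurement noise, using the
# Kalman-filter innovation form of the likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
a_true, q, r, n = 0.9, 0.04, 0.25, 400      # dynamics, noise variances, samples

# Simulate x[k+1] = a x[k] + w,  z[k] = x[k] + v
x = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k - 1] + rng.normal(0, np.sqrt(q))
z = x + rng.normal(0, np.sqrt(r), n)

def neg_log_likelihood(a):
    """Sum of innovation log-likelihoods from a scalar Kalman filter."""
    xhat, p, nll = 0.0, 1.0, 0.0
    for k in range(n):
        # time update
        xhat, p = a * xhat, a * a * p + q
        # innovation and its variance
        nu, s = z[k] - xhat, p + r
        nll += 0.5 * (np.log(2 * np.pi * s) + nu * nu / s)
        # measurement update
        kgain = p / s
        xhat, p = xhat + kgain * nu, (1 - kgain) * p
    return nll

res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 0.999), method="bounded")
print("estimated a = %.3f (true %.3f)" % (res.x, a_true))
```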

  11. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software that provides automatic and unified monitoring of the computations. A complex computational task can be performed over different HPC systems. It requires output data synchronization between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also a challenge in itself: it requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  12. DEVELOPMENT OF A CHEMICAL PROCESS MODELING ENVIRONMENT BASED ON CAPE-OPEN INTERFACE STANDARDS AND THE MICROSOFT .NET FRAMEWORK

    EPA Science Inventory

    Chemical process simulation has long been used as a design tool in the development of chemical plants, and has long been considered a means to evaluate different design options. With the advent of large scale computer networks and interface models for program components, it is po...

  13. Capital Budgeting Guidelines: How to Decide Whether to Fund a New Dorm or an Upgraded Computer Lab.

    ERIC Educational Resources Information Center

    Swiger, John; Klaus, Allen

    1996-01-01

    A process for college and university decision making and budgeting for capital outlays that focuses on evaluating the qualitative and quantitative benefits of each proposed project is described and illustrated. The process provides a means to solicit suggestions from those involved and provide detailed information for cost-benefit analysis. (MSE)

  14. Study of sensor spectral responses and data processing algorithms and architectures for onboard feature identification

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that: Narrow spectral responses are advantageous; signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those conditions during which reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
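
    The two decision rules compared above can be sketched as follows on synthetic spectral signatures: a minimum mean-square-distance (MSD) classifier applied to normalized signals and a Gaussian maximum-likelihood (MLH) classifier on raw signals, with a test-time gain change standing in for altered imaging conditions. The class means, noise levels and gain factor are all invented for illustration.

```python
# Hedged sketch contrasting the two classifiers discussed above on synthetic
# spectral signatures: minimum mean-square-distance (MSD) to class means of
# normalized signals versus Gaussian maximum-likelihood (MLH) classification.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 200, 200
means = {"vegetation": np.array([0.05, 0.08, 0.05, 0.45, 0.30, 0.20]),
         "bare land":  np.array([0.15, 0.20, 0.25, 0.30, 0.35, 0.38]),
         "water":      np.array([0.08, 0.06, 0.04, 0.02, 0.01, 0.01])}

def sample(mu, n, gain=1.0):
    return gain * (mu + rng.normal(0, 0.02, size=(n, len(mu))))

train = {c: sample(mu, n_train) for c, mu in means.items()}
# Test data "acquired" under changed illumination (gain differs from training)
test = {c: sample(mu, n_test, gain=0.5) for c, mu in means.items()}

normalize = lambda X: X / X.sum(axis=1, keepdims=True)
msd_means = {c: normalize(X).mean(0) for c, X in train.items()}
mlh_stats = {c: (X.mean(0), np.cov(X.T)) for c, X in train.items()}
classes = list(means)

def msd_classify(x):
    xn = x / x.sum()
    return min(classes, key=lambda c: np.sum((xn - msd_means[c]) ** 2))

def mlh_classify(x):
    def nll(c):
        mu, cov = mlh_stats[c]
        d = x - mu
        return 0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))
    return min(classes, key=nll)

for name, clf in (("MSD (normalized)", msd_classify), ("MLH", mlh_classify)):
    correct = sum(clf(x) == c for c, X in test.items() for x in X)
    print(f"{name}: {correct / (len(classes) * n_test):.2%} correct")
```
    Because normalization cancels the gain factor, the MSD rule is unaffected by the simulated change in imaging conditions, while the MLH rule, trained on the original signal levels, degrades.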

  15. Development of alternative data analysis techniques for improving the accuracy and specificity of natural resource inventories made with digital remote sensing data

    NASA Technical Reports Server (NTRS)

    Lillesand, T. M.; Meisner, D. E. (Principal Investigator)

    1980-01-01

    An investigation was conducted into ways to improve the involvement of state and local user personnel in the digital image analysis process by isolating those elements of the analysis process which require extensive involvement by field personnel and providing means for performing those activities apart from a computer facility. In this way, the analysis procedure can be converted from a centralized activity focused on a computer facility to a distributed activity in which users can interact with the data at the field office level or in the field itself. General image processing software was developed on the University of Minnesota computer system (Control Data Cyber models 172 and 74). The use of color hardcopy image data as a primary medium in supervised training procedures was investigated, and digital display equipment and a coordinate digitizer were procured.

  16. Real-time parallel processing of grammatical structure in the fronto-striatal system: a recurrent network simulation study using reservoir computing.

    PubMed

    Hinaut, Xavier; Dominey, Peter Ford

    2013-01-01

    Sentence processing takes place in real-time. Previous words in the sentence can influence the processing of the current word in the timescale of hundreds of milliseconds. Recent neurophysiological studies in humans suggest that the fronto-striatal system (frontal cortex, and striatum--the major input locus of the basal ganglia) plays a crucial role in this process. The current research provides a possible explanation of how certain aspects of this real-time processing can occur, based on the dynamics of recurrent cortical networks, and plasticity in the cortico-striatal system. We simulate prefrontal area BA47 as a recurrent network that receives on-line input about word categories during sentence processing, with plastic connections between cortex and striatum. We exploit the homology between the cortico-striatal system and reservoir computing, where recurrent frontal cortical networks are the reservoir, and plastic cortico-striatal synapses are the readout. The system is trained on sentence-meaning pairs, where meaning is coded as activation in the striatum corresponding to the roles that different nouns and verbs play in the sentences. The model learns an extended set of grammatical constructions, and demonstrates the ability to generalize to novel constructions. It demonstrates how early in the sentence, a parallel set of predictions are made concerning the meaning, which are then confirmed or updated as the processing of the input sentence proceeds. It demonstrates how on-line responses to words are influenced by previous words in the sentence, and by previous sentences in the discourse, providing new insight into the neurophysiology of the P600 ERP scalp response to grammatical complexity. This demonstrates that a recurrent neural network can decode grammatical structure from sentences in real-time in order to generate a predictive representation of the meaning of the sentences. This can provide insight into the underlying mechanisms of human cortico-striatal function in sentence processing.
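
    A generic echo state network captures the reservoir-plus-readout idea exploited above: a fixed random recurrent network provides temporal context and only a linear readout is trained (here by ridge regression) on a toy next-symbol task. This is not the BA47/striatum model itself; all sizes and the toy grammar are arbitrary.

```python
# Minimal reservoir-computing sketch: a fixed random recurrent "reservoir" plus
# a linear readout trained by ridge regression, applied to a toy sequence task
# (predicting the next symbol's category).  A generic echo state network, not
# the cortico-striatal model described above.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out, T = 3, 200, 3, 2000

# Random input and reservoir weights; spectral radius scaled below 1 for stability
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Toy grammar: symbol categories cycle, with occasional random jumps
seq = np.zeros(T, dtype=int)
for t in range(1, T):
    seq[t] = (seq[t - 1] + 1) % n_in if rng.random() < 0.9 else rng.integers(n_in)
U = np.eye(n_in)[seq]                       # one-hot inputs
Y = np.eye(n_out)[np.roll(seq, -1)]         # target: next symbol's category

# Run the reservoir and collect its states
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ U[t] + W @ x)
    states[t] = x

# Ridge-regression readout (standing in for the plastic readout connections)
lam = 1e-2
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ Y)
pred = (states @ W_out).argmax(1)
print("next-symbol accuracy: %.2f" % (pred[:-1] == seq[1:]).mean())
```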

  17. Real-Time Parallel Processing of Grammatical Structure in the Fronto-Striatal System: A Recurrent Network Simulation Study Using Reservoir Computing

    PubMed Central

    Hinaut, Xavier; Dominey, Peter Ford

    2013-01-01

    Sentence processing takes place in real-time. Previous words in the sentence can influence the processing of the current word in the timescale of hundreds of milliseconds. Recent neurophysiological studies in humans suggest that the fronto-striatal system (frontal cortex, and striatum – the major input locus of the basal ganglia) plays a crucial role in this process. The current research provides a possible explanation of how certain aspects of this real-time processing can occur, based on the dynamics of recurrent cortical networks, and plasticity in the cortico-striatal system. We simulate prefrontal area BA47 as a recurrent network that receives on-line input about word categories during sentence processing, with plastic connections between cortex and striatum. We exploit the homology between the cortico-striatal system and reservoir computing, where recurrent frontal cortical networks are the reservoir, and plastic cortico-striatal synapses are the readout. The system is trained on sentence-meaning pairs, where meaning is coded as activation in the striatum corresponding to the roles that different nouns and verbs play in the sentences. The model learns an extended set of grammatical constructions, and demonstrates the ability to generalize to novel constructions. It demonstrates how early in the sentence, a parallel set of predictions are made concerning the meaning, which are then confirmed or updated as the processing of the input sentence proceeds. It demonstrates how on-line responses to words are influenced by previous words in the sentence, and by previous sentences in the discourse, providing new insight into the neurophysiology of the P600 ERP scalp response to grammatical complexity. This demonstrates that a recurrent neural network can decode grammatical structure from sentences in real-time in order to generate a predictive representation of the meaning of the sentences. This can provide insight into the underlying mechanisms of human cortico-striatal function in sentence processing. PMID:23383296

  18. Fit of pressed crowns fabricated from two CAD-CAM wax pattern process plans: A comparative in vitro study.

    PubMed

    Shamseddine, Loubna; Mortada, Rola; Rifai, Khaldoun; Chidiac, Jose Johann

    2017-07-01

    Subtractive and additive computer-aided design and computer-aided manufacturing (CAD-CAM) wax pattern processing are 2 methods of fabricating a pressed ceramic crown. Whether a subtractive milled wax pattern or a pattern from the micro-stereolithography additive process produces lithium disilicate crowns with better marginal and internal fit is unclear. Ten silicone impressions were made for a prepared canine tooth. Each die received 2 lithium disilicate (IPS e.max) copings, 1 from milled wax blocks and 1 from additive wax. The replica technique was used to measure the fit by scanning electron microscopy at ×80 magnification. Collected data were analyzed using the paired Student t test for the marginal and internal fit. For the occlusal fit, the difference in scores did not follow a normal distribution, and the Wilcoxon signed rank test was used (α=.05). The mean marginal, axial, and occlusal fit showed no significant differences when the 2 CAD-CAM manufacturing processes were compared (P>.05). For the marginal fit, the mean (±SD) values were 105.1 μm ±39.6 with the milled process and 126.2 μm ±25.2 for the additive process. The mean values were 98.1 μm ±26.1 for the axial fit in the milled process and 106.8 μm ±21.2 in the additive process. For the occlusal fit, median values (interquartile interval) were 199.0 μm (141.5 to 269.9) for subtractive manufacturing and 257.2 μm (171.6 to 266.0) for micro-SLA manufacturing. No significant difference was found between the fit of the 2 techniques. The mean values of axial and occlusal median values were 10 and 5 to 6 times greater than machine's nominal values. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
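
    The statistical comparisons described above (paired t test for the normally distributed fit differences, Wilcoxon signed-rank test for the occlusal scores) can be reproduced in outline with SciPy; the values below are synthetic placeholders, not the study's measurements.

```python
# Hedged sketch of the statistics described above: a paired t-test for the
# marginal/axial gaps and a Wilcoxon signed-rank test for the occlusal gaps.
# Values below are synthetic, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10                                        # ten dies, two copings each
milled_marginal   = rng.normal(105, 40, n)    # microns
additive_marginal = rng.normal(126, 25, n)
milled_occlusal   = rng.normal(200, 50, n)
additive_occlusal = rng.normal(255, 55, n)

t, p_t = stats.ttest_rel(milled_marginal, additive_marginal)
w, p_w = stats.wilcoxon(milled_occlusal, additive_occlusal)
print(f"paired t-test: t={t:.2f}, p={p_t:.3f}")
print(f"Wilcoxon signed-rank: W={w:.1f}, p={p_w:.3f}")
```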

  19. Embedded Process Modeling, Analogy-Based Option Generation and Analytical Graphic Interaction for Enhanced User-Computer Interaction: An Interactive Storyboard of Next Generation User-Computer Interface Technology. Phase 1

    DTIC Science & Technology

    1988-03-01

    structure of the interface is a mapping from the physical world [for example, the use of icons, which have inherent meaning to users but represent...design alternatives. Mechanisms for linking the user to the computer include physical devices (keyboards), actions taken with the devices (keystrokes... [The remainder of this record is unrecoverable OCR residue; the only legible element is a figure label, "Fig. 9. INTACVAL".]

  20. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    PubMed

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT∕CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT∕CT reconstruction algorithm. Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10(-7). Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT∕CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
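
    The map/reduce decomposition described above can be illustrated with a toy, pure-Python analogue in which each map task "backprojects" a subset of projections into a partial volume and a reduce step sums the partial volumes; the filtering and backprojection here are trivial placeholders, not the FDK algorithm or the Hadoop implementation.

```python
# Toy illustration of the map/reduce decomposition described above (pure
# Python, not Hadoop): each "map" filters and backprojects a subset of
# projections into a partial volume, and "reduce" sums the partial volumes.
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
n_proj, vol_shape = 360, (64, 64, 64)
projections = [rng.random((64, 64)) for _ in range(n_proj)]

def map_backproject(proj_subset):
    """Map task: filter + backproject a subset of projections (placeholders)."""
    partial = np.zeros(vol_shape)
    for p in proj_subset:
        filtered = p - p.mean()                   # stand-in for ramp filtering
        partial += filtered[None, :, :] / n_proj  # stand-in for backprojection
    return partial

def reduce_volumes(a, b):
    """Reduce task: aggregate partial backprojections into the whole volume."""
    return a + b

subsets = [projections[i::8] for i in range(8)]   # 8 map tasks
volume = reduce(reduce_volumes, (map_backproject(s) for s in subsets))
print("reconstructed volume shape:", volume.shape, "mean:", volume.mean().round(6))
```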

  1. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment

    PubMed Central

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-01-01

    Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10−7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment. PMID:22149842

  2. Computer Simulation of Replaceable Many Sider Plates (RMSP) with Enhanced Chip-Breaking Characteristics

    NASA Astrophysics Data System (ADS)

    Korchuganova, M.; Syrbakov, A.; Chernysheva, T.; Ivanov, G.; Gnedasch, E.

    2016-08-01

    Out of all common chip curling methods, a special tool face form has become the most widespread; it is produced either by grinding or by profile pressing in the production process of RMSP. Currently, over 15 large tool manufacturers produce tools using instrument materials of over 500 brands. To this we must add a large variety of tool face geometries whose purpose includes controlling the form and dimensions of the chip. Taking into account the many processed materials, the specific tasks of the process planner and the requirements on the quality of manufactured products, all this makes it significantly harder to choose a proper tool that can perform the processing in the most effective way. Over recent years, the nomenclature of RMSP for lathe tools with mechanical mounting has been considerably broadened by means of diversification of their faces.

  3. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the image from the RGB color space to HSI, and then select proper initial clustering centers and the cluster number by applying a mean-variance approach and rough set theory, followed by clustering calculation, in such a way as to automatically segment color components rapidly and extract target objects from the background accurately, which provides a reliable basis for the identification, analysis, follow-up calculation and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the computation amount and enhance the precision and accuracy of clustering.
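
    A rough sketch of the pipeline (color-space conversion followed by k-means clustering of pixels) on a synthetic crop image is given below; HSV is used as a readily available stand-in for HSI, and k-means++ initialization stands in for the paper's mean-variance/rough-set center selection.

```python
# Hedged sketch: convert a synthetic "crop" image from RGB to HSV (stand-in
# for HSI) and cluster the pixels with k-means to separate crop from soil.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w = 80, 80
img = np.zeros((h, w, 3))
img[:] = (0.45, 0.35, 0.25)                     # brownish "soil" background
img[20:60, 20:60] = (0.15, 0.55, 0.10)          # green "crop" region
img = np.clip(img + rng.normal(0, 0.03, img.shape), 0, 1)

hsv = rgb_to_hsv(img)                           # RGB -> HSV conversion
pixels = hsv.reshape(-1, 3)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(h, w)
print("cluster sizes:", np.bincount(labels.ravel()))
```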

  4. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Dat

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present day missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences in deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.

  5. Activating the Meaning of a Word Facilitates the Integration of Orthography: Evidence from Spelling Exercises in Beginning Spellers

    ERIC Educational Resources Information Center

    Hilte, Maartje; Reitsma, Pieter

    2011-01-01

    The present study examines the effect of activating the connection between meaning and phonology in spelling exercises in second-grade spellers (n=41; 8 years and 3 months). In computer-based exercises in a within-subject design, semantic and neutral descriptions were contrasted and provided either before the process of spelling or in feedback.…

  6. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  7. Phase-locked-loop interferometry applied to aspheric testing with a computer-stored compensator.

    PubMed

    Servin, M; Malacara, D; Rodriguez-Vera, R

    1994-05-01

    A recently developed technique for continuous-phase determination of interferograms with a digital phase-locked loop (PLL) is applied to the null testing of aspheres. Although this PLL demodulating scheme is also a synchronous or direct interferometric technique, the separate unwrapping process is not explicitly required. The unwrapping and the phase-detection processes are achieved simultaneously within the PLL. The proposed method uses a computer-generated holographic compensator. The holographic compensator does not need to be printed out by any means; it is calculated and used from the computer. This computer-stored compensator is used as the reference signal to phase demodulate a sample interferogram obtained from the asphere being tested. Consequently the demodulated phase contains information about the wave-front departures from the ideal computer-stored aspheric interferogram. Wave-front differences of ~ 1 λ are handled easily by the proposed PLL scheme. The maximum recorded frequency in the template's interferogram as well as in the sampled interferogram are assumed to be below the Nyquist frequency.

  8. Choosing order of operations to accelerate strip structure analysis in parameter range

    NASA Astrophysics Data System (ADS)

    Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.

    2018-05-01

    The paper considers the issue of using iterative methods to solve the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. Based on the analysis of 4 strip structures, the authors show that an additional acceleration (up to 2.21 times) of the iterative process can be obtained when solving the linear systems repeatedly, by choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the computer-aided design of various strip structures. The choice of the order of operations used to accelerate the process is quite simple and universal, and could be applied not only to strip structure analysis but also to a wide range of computational problems.
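
    The general idea, building a preconditioner once and reusing it while repeatedly solving slowly varying systems, can be sketched with SciPy's sparse solvers; this is a generic illustration on a synthetic matrix, not the authors' method-of-moments code or their specific ordering of operations.

```python
# Hedged sketch: when solving a sequence of slowly varying linear systems, an
# incomplete-LU preconditioner built once can be reused across the sequence to
# accelerate GMRES.  The matrix below is synthetic, not a method-of-moments system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
rng = np.random.default_rng(0)
# Base sparse system (diagonally dominant so GMRES behaves well)
A0 = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Build the preconditioner once, from the first matrix in the sequence
ilu = spla.spilu(A0)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

for step in range(5):
    # Slightly perturbed system, as when sweeping a geometric/physical parameter
    A = (A0 + sp.diags(0.01 * step * rng.random(n), 0, format="csc")).tocsc()
    b = rng.random(n)
    x, info = spla.gmres(A, b, M=M)
    print(f"step {step}: GMRES converged" if info == 0 else f"step {step}: info={info}")
```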

  9. Physician Utilization of a Hospital Information System: A Computer Simulation Model

    PubMed Central

    Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.

    1988-01-01

    The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.

  10. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data become ubiquitous, the technological and scientific challenge is to maximize the exploitation of this huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. This experiment confirms the big advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large-scale DInSAR analyses. The presented Cloud Computing P-SBAS processing chain can be a precious tool for developing operational services, available to the EO scientific community, related to hazard monitoring and risk prevention and mitigation.

  11. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key ‘points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These ‘points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  12. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.

  13. Automated land-use mapping from spacecraft data. [Oakland County, Michigan

    NASA Technical Reports Server (NTRS)

    Chase, P. E. (Principal Investigator); Rogers, R. H.; Reed, L. E.

    1974-01-01

    The author has identified the following significant results. In response to the need for a faster, more economical means of producing land use maps, this study evaluated the suitability of using ERTS-1 computer compatible tape (CCT) data as a basis for automatic mapping. Significant findings are: (1) automatic classification accuracy greater than 90% is achieved on categories of deep and shallow water, tended grass, rangeland, extractive (bare earth), urban, forest land, and nonforested wet lands; (2) computer-generated printouts by target class provide a quantitative measure of land use; and (3) the generation of map overlays showing land use from ERTS-1 CCTs offers a significant breakthrough in the rate at which land use maps are generated. Rather than uncorrected classified imagery or computer line printer outputs, the processing results in geometrically-corrected computer-driven pen drawing of land categories, drawn on a transparent material at a scale specified by the operator. These map overlays are economically produced and provide an efficient means of rapidly updating maps showing land use.

  14. Representational geometry: integrating cognition, computation, and the brain.

    PubMed

    Kriegeskorte, Nikolaus; Kievit, Rogier A

    2013-08-01

    The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Scalable Multiprocessor for High-Speed Computing in Space

    NASA Technical Reports Server (NTRS)

    Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard

    2004-01-01

    A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard realtime applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at "hundreds" of pulses per second, each pulse "requiring" millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.

  16. Defining and Enforcing Hardware Security Requirements

    DTIC Science & Technology

    2011-12-01

    They too have no global knowledge of what is going on, nor any meaning to attach to any bit, whether storage or gating . . . it is we who attach...This option is prohibitively expensive with the current trends in the global distribution of the steps in IC design and fabrication. The second option

  17. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables.

  18. Prioritizing parts from cutting bills when gang-ripping first

    Treesearch

    R. Edward Thomas

    1996-01-01

    Computer optimization of gang-rip-first processing is a difficult problem when working with specific cutting bills. Interactions among board grade and size, arbor setup, and part sizes and quantities greatly complicate the decision making process. Cutting the wrong parts at any moment will mean that more board footage will be required to meet the bill. Using the ROugh...

  19. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    Simulation plays an ever-increasing role in the sensitivity analysis and uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, direct approaches based on the computer code alone are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
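    As a rough illustration of the kriging setup described in this abstract (and not the authors' nested-polynomial parametrization, which is not reproduced here), the following NumPy sketch fits a Gaussian process with an explicit polynomial trend; the squared-exponential kernel, its hyperparameters, and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x1, x2, length=0.3, var=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def poly_basis(x, degree=3):
    # Polynomial trend basis H(x) = [1, x, x^2, ...].
    return np.vander(x, degree + 1, increasing=True)

def gpr_with_trend(x_train, y_train, x_test, degree=3, noise=1e-6):
    """Universal-kriging-style predictor: polynomial mean + zero-mean GP residual."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_inv = np.linalg.inv(K)
    H = poly_basis(x_train, degree)
    # Generalised least-squares estimate of the trend coefficients.
    beta = np.linalg.solve(H.T @ K_inv @ H, H.T @ K_inv @ y_train)
    resid = y_train - H @ beta
    Ks = rbf_kernel(x_test, x_train)
    return poly_basis(x_test, degree) @ beta + Ks @ K_inv @ resid

# Toy usage: approximate a strongly nonlinear function from few code evaluations.
rng = np.random.default_rng(0)
x_tr = np.sort(rng.uniform(0.0, 1.0, 8))
y_tr = np.sin(8 * x_tr) + x_tr ** 2
x_te = np.linspace(0.0, 1.0, 50)
print(gpr_with_trend(x_tr, y_tr, x_te)[:5])
```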

  20. Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.

    PubMed

    Skoneczny, Szymon

    2015-01-01

    The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and in the mathematical modelling of bioreactors. The process considered here, which follows double-substrate kinetics with substrate inhibition and proceeds in a biofilm, has not previously been modelled by means of cellular automata. Each process in the proposed model, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms, and biofilm detachment, is simulated in a discrete manner. It was shown that for a flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however, considerable focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using the OpenMP Application Programming Interface for the C++ programming language. Simulations of biofilm growth were performed on three high-performance computers, and the speed-up coefficients of the computer programs were compared. Both algorithms enabled a significant reduction of computation time, which is important, inter alia, in the modelling and simulation of bioreactor dynamics.
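    For readers unfamiliar with the cellular-automaton approach, the sketch below shows one discrete update step (substrate diffusion followed by substrate-inhibited uptake and growth) as a serial NumPy routine. The kinetic parameters, grid, boundary handling, and the omission of decay, detachment, and substrate replenishment are simplifying assumptions; the paper's model and its OpenMP/C++ parallel implementation are considerably richer.

```python
import numpy as np

def ca_step(biomass, substrate, D=0.2, mu_max=0.5, Ks=0.3, Ki=2.0, yield_=0.4):
    """One discrete update: substrate diffusion, substrate-inhibited uptake, growth."""
    # Explicit finite-difference diffusion on the 2-D grid (periodic edges for brevity).
    lap = (np.roll(substrate, 1, 0) + np.roll(substrate, -1, 0) +
           np.roll(substrate, 1, 1) + np.roll(substrate, -1, 1) - 4 * substrate)
    substrate = substrate + D * lap
    # Haldane-type (substrate-inhibited) specific growth rate.
    mu = mu_max * substrate / (Ks + substrate + substrate ** 2 / Ki)
    uptake = mu * biomass / yield_
    substrate = np.clip(substrate - uptake, 0.0, None)
    biomass = biomass + mu * biomass
    return biomass, substrate

# Toy grid: biofilm seeded on the bottom row, substrate initially uniform in the bulk.
bio = np.zeros((20, 20)); bio[-1, :] = 0.1
sub = np.ones((20, 20))
for _ in range(100):
    bio, sub = ca_step(bio, sub)
print(bio.sum(), sub.mean())
```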

  1. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  2. A computational model of the human visual cortex

    NASA Astrophysics Data System (ADS)

    Albus, James S.

    2008-04-01

    The brain is first and foremost a control system that is capable of building an internal representation of the external world, and using this representation to make decisions, set goals and priorities, formulate plans, and control behavior with intent to achieve its goals. The computational model proposed here assumes that this internal representation resides in arrays of cortical columns. More specifically, it models each cortical hypercolumn together with its underlying thalamic nuclei as a Fundamental Computational Unit (FCU) consisting of a frame-like data structure (containing attributes and pointers) plus the computational processes and mechanisms required to maintain it. In sensory-processing areas of the brain, FCUs enable segmentation, grouping, and classification. Pointers stored in FCU frames link pixels and signals to objects and events in situations and episodes that are overlaid with meaning and emotional values. In behavior-generating areas of the brain, FCUs make decisions, set goals and priorities, generate plans, and control behavior. Pointers are used to define rules, grammars, procedures, plans, and behaviors. It is suggested that it may be possible to reverse engineer the human brain at the FCU level of fidelity using next-generation massively parallel computer hardware and software. Key Words: computational modeling, human cortex, brain modeling, reverse engineering the brain, image processing, perception, segmentation, knowledge representation

  3. Bubble nucleation in simple and molecular liquids via the largest spherical cavity method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, Miguel A., E-mail: m.gonzalez12@imperial.ac.uk; Department of Chemistry, Imperial College London, London SW7 2AZ; Abascal, José L. F.

    2015-04-21

    In this work, we propose a methodology to compute bubble nucleation free energy barriers using trajectories generated via molecular dynamics simulations. We follow the bubble nucleation process by means of a local order parameter, defined by the volume of the largest spherical cavity (LSC) formed in the nucleating trajectories. This order parameter simplifies considerably the monitoring of the nucleation events, as compared with previous approaches, which require ad hoc criteria to classify the atoms and molecules as liquid or vapor. The combination of the LSC and the mean first passage time technique can then be used to obtain the free energy curves. Upon computation of the cavity distribution function, the nucleation rate and free-energy barrier can then be obtained. We test our method against recent computations of bubble nucleation in simple liquids and water at negative pressures. We obtain free-energy barriers in good agreement with the previous works. The LSC method provides a versatile and computationally efficient route to estimate the volume of critical bubbles and the nucleation rate, and to compute bubble nucleation free energies in both simple and molecular liquids.

  4. Fuzzy-C-Means Clustering Based Segmentation and CNN-Classification for Accurate Segmentation of Lung Nodules

    PubMed

    K, Jalal Deen; R, Ganesan; A, Merline

    2017-07-27

    Objective: Accurate segmentation of abnormal and healthy lungs is crucial for reliable computer-aided disease diagnostics. Methods: For this purpose, a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional method, the required regions of interest (ROI) are identified using a Markov–Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM- and CNN-based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment various kinds of complex multimodal medical images precisely. Conclusion: To obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation. A classification process based on a Convolutional Neural Network (CNN) classifier is then carried out to distinguish normal tissue from abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database.
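    As a hedged sketch of the Fuzzy C-Means stage mentioned above (the CNN classification step and the actual CT pipeline are not reproduced), the standard alternating centroid/membership updates look roughly as follows; the cluster count, fuzzifier m, and toy intensity data are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        ratio = d[:, :, None] / d[:, None, :]
        U = 1.0 / (ratio ** (2.0 / (m - 1))).sum(axis=2)
    return U, centroids

# Usage on toy pixel intensities standing in for a CT slice.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
U, centers = fuzzy_c_means(img.reshape(-1, 1), c=2)
labels = U.argmax(axis=1)                          # defuzzify for a hard segmentation
print(centers.ravel(), np.bincount(labels))
```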

  5. Fuzzy-C-Means Clustering Based Segmentation and CNN-Classification for Accurate Segmentation of Lung Nodules

    PubMed Central

    K, Jalal Deen; R, Ganesan; A, Merline

    2017-01-01

    Objective: Accurate segmentation of abnormal and healthy lungs is crucial for reliable computer-aided disease diagnostics. Methods: For this purpose, a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional method, the required regions of interest (ROI) are identified using a Markov–Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM- and CNN-based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment various kinds of complex multimodal medical images precisely. Conclusion: To obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation. A classification process based on a Convolutional Neural Network (CNN) classifier is then carried out to distinguish normal tissue from abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database. PMID:28749127

  6. Use of a Computer-Mediated Delphi Process to Validate a Mass Casualty Conceptual Model

    PubMed Central

    CULLEY, JOAN M.

    2012-01-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters. PMID:21076283

  7. Use of a computer-mediated Delphi process to validate a mass casualty conceptual model.

    PubMed

    Culley, Joan M

    2011-05-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters.

  8. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms are replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  9. A real-time spike sorting method based on the embedded GPU.

    PubMed

    Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng

    2017-07-01

    Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphics processing units (GPUs), with their high parallel computing capability, might provide an alternative solution to the increasing real-time computational demands of spike sorting. This study reports a method of real-time spike sorting through the compute unified device architecture (CUDA), implemented on an embedded GPU (NVIDIA Jetson Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each processing step, the method was further optimized within the thread and memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC, while achieving the same accuracy. The high-performance computing features of the embedded GPU demonstrated in our study suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
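    A minimal offline sketch of the PCA-plus-K-means stage is given below, using scikit-learn on synthetic spike snippets rather than the authors' CUDA implementation on the TK1; the snippet length, number of components, and cluster count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy stand-in for detected spike waveforms: two templates plus noise (400 x 48).
rng = np.random.default_rng(1)
templates = np.stack([np.sin(np.linspace(0, np.pi, 48)), -np.hanning(48)])
spikes = np.concatenate([t + 0.1 * rng.standard_normal((200, 48)) for t in templates])

# Project each waveform onto its first principal components, then cluster.
features = PCA(n_components=3).fit_transform(spikes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))            # roughly 200 spikes assigned to each unit
```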

  10. Efficient fuzzy C-means architecture for image segmentation.

    PubMed

    Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen

    2011-01-01

    This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with a spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroids are merged into a single updating process to avoid the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process to accelerate the computation. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and a low misclassification rate.

  11. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  12. Big data mining analysis method based on cloud computing

    NASA Astrophysics Data System (ADS)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the super-large scale, discrete, and non-/semi-structured character of big data has gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association-rule mining algorithm based on the MapReduce parallel processing architecture, and verifies it experimentally. The parallel association-rule mining algorithm running on a cloud computing platform can greatly improve the execution speed of data mining.
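    To make the MapReduce formulation concrete, here is a minimal in-process Python emulation of the map, shuffle, and reduce phases for the support-counting step of association-rule mining; the function names, toy baskets, and minimum-support value are illustrative, and a real deployment would run on a Hadoop-style cluster as the paper describes.

```python
from itertools import combinations
from collections import defaultdict

def map_phase(transaction):
    # Emit (candidate 2-itemset, 1) for each item pair in a transaction.
    for pair in combinations(sorted(set(transaction)), 2):
        yield pair, 1

def reduce_phase(key, values):
    # Sum the partial counts for one candidate itemset.
    return key, sum(values)

def run_job(transactions, min_support=2):
    groups = defaultdict(list)
    for t in transactions:                      # map + shuffle (group by key)
        for k, v in map_phase(t):
            groups[k].append(v)
    counts = dict(reduce_phase(k, v) for k, v in groups.items())
    return {k: c for k, c in counts.items() if c >= min_support}

baskets = [["milk", "bread", "beer"], ["milk", "bread"], ["bread", "beer"]]
print(run_job(baskets))                         # frequent pairs and their support
```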

  13. Effect of roll compaction on granule size distribution of microcrystalline cellulose–mannitol mixtures: computational intelligence modeling and parametric analysis

    PubMed Central

    Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander

    2017-01-01

    Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD. PMID:28176905
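    As a hedged sketch of the model-comparison protocol (sevenfold cross-validation scored with RMSE and R2), the snippet below uses scikit-learn on synthetic stand-in data; the feature semantics, hyperparameters, and data are assumptions, not the study's roll-compaction dataset or its tuned models.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stand-in: process settings / material properties -> one GSD fraction.
rng = np.random.default_rng(0)
X = rng.random((120, 4))            # e.g. compaction force, gap, true density, moisture
y = 0.6 * X[:, 2] + 0.3 * X[:, 0] + 0.05 * rng.standard_normal(120)

cv = KFold(n_splits=7, shuffle=True, random_state=0)   # sevenfold cross-validation
models = [("MLR", LinearRegression()),
          ("RF", RandomForestRegressor(n_estimators=200, random_state=0)),
          ("kNN", KNeighborsRegressor(n_neighbors=5))]
for name, model in models:
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    rmse = -cross_val_score(model, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: R2={r2:.2f}  RMSE={rmse:.3f}")
```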

  14. Effect of roll compaction on granule size distribution of microcrystalline cellulose-mannitol mixtures: computational intelligence modeling and parametric analysis.

    PubMed

    Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander

    2017-01-01

    Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD.

  15. Synchrotron-based X-ray computed tomography during compression loading of cellular materials

    DOE PAGES

    Cordes, Nikolaus L.; Henderson, Kevin; Stannard, Tyler; ...

    2015-04-29

    Three-dimensional X-ray computed tomography (CT) of in situ dynamic processes provides internal snapshot images as a function of time. Tomograms are mathematically reconstructed from a series of radiographs taken in rapid succession as the specimen is rotated in small angular increments. In addition to spatial resolution, temporal resolution is important; it indicates how close together in time two distinct tomograms can be acquired. Tomograms taken in rapid succession allow detailed analyses of internal processes that cannot be obtained by other means. This article describes the state of the art for such measurements acquired using synchrotron radiation as the X-ray source.

  16. PC-assisted translation of photogrammetric papers

    NASA Astrophysics Data System (ADS)

    Güthner, Karlheinz; Peipe, Jürgen

    A PC-based system for machine translation of photogrammetric papers from English into German and vice versa is described. The computer-assisted translation process is not intended to create a perfect interpretation of a text but to produce a rough rendering of the content of a paper. Starting with the original text, a continuous data flow into the translated version is effected by means of hardware (scanner, personal computer, printer) and software (OCR, translation, word processing, DTP). An essential component of the system is a photogrammetric microdictionary which is currently being established. It is based on several sources, including e.g. the ISPRS Multilingual Dictionary.

  17. Detection of motile micro-organisms in biological samples by means of a fully automated image processing system

    NASA Astrophysics Data System (ADS)

    Alanis, Elvio; Romero, Graciela; Alvarez, Liliana; Martinez, Carlos C.; Hoyos, Daniel; Basombrio, Miguel A.

    2001-08-01

    A fully automated image processing system for the detection of motile microorganisms in biological samples is presented. The system is specifically calibrated for determining the concentration of Trypanosoma cruzi parasites in blood samples of mice infected with Chagas disease. The method can be adapted for use with other biological samples. A thin layer of blood infected by T. cruzi parasites is examined in a common microscope, in which images of the field of view are taken by a CCD camera and temporarily stored in the computer memory. In a typical field, a few motile parasites are observable surrounded by red blood cells. The parasites have low contrast and are therefore difficult to detect visually, but their great motility betrays their presence through the movement of the neighboring red cells. Several consecutive images of the same field are taken, decorrelated with each other where parasites are present, and digitally processed in order to measure the number of parasites present in the field. Several fields are sequentially processed in the same fashion, displacing the sample by means of stepper motors driven by the computer. A direct advantage of this system is that its results are more reliable and the process is less time-consuming than the current subjective evaluations made visually by technicians.
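    A minimal NumPy sketch of the underlying idea follows: consecutive images of the same field change only where motile parasites disturb the surrounding red cells, so thresholded frame differences flag candidate regions. The threshold, the synthetic frame stack, and the crude pixel count are illustrative assumptions, not the calibrated system described above.

```python
import numpy as np

def motion_mask(frames, thresh=0.1):
    """Flag pixels whose intensity changes between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))   # frame-to-frame change
    return diffs.max(axis=0) > thresh                       # any motion over the sequence

def motile_activity(mask):
    # Crude measure: number of moving pixels; a real system would label connected
    # components and apply size/shape filters before counting parasites.
    return int(mask.sum())

# Toy stack of 5 frames of one microscope field with a small drifting blob.
frames = np.zeros((5, 64, 64))
for t in range(5):
    frames[t, 30:34, 20 + 3 * t: 24 + 3 * t] = 1.0          # "parasite" moving right
print(motile_activity(motion_mask(frames)))
```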

  18. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.

    PubMed

    Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by a GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity and analyzes the upper and lower bounds of different-to-same mis-affiliation; fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity; smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. These contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm, and the experimental results verified the theoretical conclusions.

  19. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts from the electrocardiogram (ECG) because of their low computational cost. However, they possess high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. With field-programmable gate arrays, pipelined architectures can be used to enhance system performance. The pipelined architecture can improve the operating efficiency of the adaptive filter and reduce power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
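    For orientation, the sketch below is a fixed-step LMS noise canceller in NumPy; the paper's variable step-size delayed LMS and its FPGA pipelining are not reproduced, and the filter order, step size, and synthetic signals are assumptions.

```python
import numpy as np

def lms_filter(desired, reference, order=8, mu=0.01):
    """Standard LMS adaptive noise canceller.

    desired   : noisy ECG (clean signal plus correlated interference)
    reference : interference reference from a second input
    Returns the error signal e[n], which approximates the clean ECG.
    """
    w = np.zeros(order)
    out = np.zeros(len(desired))
    for n in range(order, len(desired)):
        x = reference[n - order:n][::-1]        # most recent samples first
        y = w @ x                               # filter output (interference estimate)
        e = desired[n] - y                      # error = cleaned sample
        w = w + mu * e * x                      # LMS weight update
        out[n] = e
    return out

# Toy usage: 50 Hz interference corrupting a synthetic "ECG".
t = np.arange(0, 2, 1 / 500.0)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 3
noisy = ecg + 0.5 * np.sin(2 * np.pi * 50 * t)
cleaned = lms_filter(noisy, np.sin(2 * np.pi * 50 * t + 0.3))
print(np.mean((cleaned[200:] - ecg[200:]) ** 2))   # residual MSE after convergence
```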

  20. Unveiling the Atomic-Level Determinants of Acylase-Ligand Complexes: An Experimental and Computational Study.

    PubMed

    Mollica, Luca; Conti, Gianluca; Pollegioni, Loredano; Cavalli, Andrea; Rosini, Elena

    2015-10-26

    The industrial production of higher-generation semisynthetic cephalosporins starts from 7-aminocephalosporanic acid (7-ACA), which is obtained by deacylation of the naturally occurring antibiotic cephalosporin C (CephC). The enzymatic process in which CephC is directly converted into 7-ACA by a cephalosporin C acylase has attracted industrial interest because of the prospects of simplifying the process and reducing costs. We recently enhanced the catalytic efficiency on CephC of a glutaryl acylase from Pseudomonas N176 (named VAC) by a protein engineering approach and solved the crystal structures of wild-type VAC and the H57βS-H70βS VAC double variant. In the present work, experimental measurements on several CephC derivatives and six VAC variants were carried out, and the binding of ligands into the VAC active site was investigated at an atomistic level by means of molecular docking and molecular dynamics simulations and analyzed on the basis of the molecular geometry of encounter complex formation and protein-ligand potential of mean force profiles. The observed significant correlation between the experimental data and estimated binding energies highlights the predictive power of our computational method to identify the ligand binding mode. The present experimental-computational study is well-suited both to provide deep insight into the reaction mechanism of cephalosporin C acylase and to improve the efficiency of the corresponding industrial process.

  1. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.

  2. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  3. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  4. Brain-computer interface signal processing at the Wadsworth Center: mu and sensorimotor beta rhythms.

    PubMed

    McFarland, Dennis J; Krusienski, Dean J; Wolpaw, Jonathan R

    2006-01-01

    The Wadsworth brain-computer interface (BCI), based on mu and beta sensorimotor rhythms, uses one- and two-dimensional cursor movement tasks and relies on user training. This is a real-time closed-loop system. Signal processing consists of channel selection, spatial filtering, and spectral analysis. Feature translation uses a regression approach and normalization. Adaptation occurs at several points in this process on the basis of different criteria and methods. It can use either feedforward (e.g., estimating the signal mean for normalization) or feedback control (e.g., estimating feature weights for the prediction equation). We view this process as the interaction between a dynamic user and a dynamic system that coadapt over time. Understanding the dynamics of this interaction and optimizing its performance represent a major challenge for BCI research.

  5. Photogrammetry on glaciers: Old and new knowledge

    NASA Astrophysics Data System (ADS)

    Pfeffer, W. T.; Welty, E.; O'Neel, S.

    2014-12-01

    In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in image-processing power of computers, and very innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery, but have no efficient means to extract the data they desire from them. In many cases these are single-image time series where the processing limitation lies in the paucity of methods to obtain 3-dimension object space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated but no automated means is in hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results, and sometimes buried in decades-old pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers. Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, some further effort to share knowledge and methods would be a great benefit for the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.

  6. Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case

    NASA Astrophysics Data System (ADS)

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.

    2010-06-01

    Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
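    A hedged Monte Carlo sketch in the spirit of the paper: simulate a normal compound Poisson process and evaluate the stochastic integral of X dX with the left-point (Ito) and midpoint (Stratonovich) prescriptions; the jump rate, horizon, and sample count are illustrative assumptions.

```python
import numpy as np

def compound_poisson_path(rate=5.0, T=1.0, rng=None):
    """Normal compound Poisson process: Poisson(rate*T) jumps with N(0,1) sizes."""
    rng = rng if rng is not None else np.random.default_rng()
    jumps = rng.standard_normal(rng.poisson(rate * T))
    return np.cumsum(jumps)                     # process value after each jump (X_0 = 0)

def ito_and_stratonovich(path):
    """Estimate the integral of X dX along one path with both prescriptions."""
    x = np.concatenate(([0.0], path))
    dx = np.diff(x)
    ito = np.sum(x[:-1] * dx)                   # left-point evaluation
    strat = np.sum(0.5 * (x[:-1] + x[1:]) * dx) # midpoint evaluation
    return ito, strat

rng = np.random.default_rng(42)
est = np.array([ito_and_stratonovich(compound_poisson_path(rng=rng))
                for _ in range(20000)])
print("Ito mean:", est[:, 0].mean(), " Stratonovich mean:", est[:, 1].mean())
# The midpoint sum telescopes to X_T^2 / 2 exactly; the Ito sum differs from it
# by half the sum of squared jumps, which is why the two estimates disagree.
```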

  7. Numerical Simulation of Cast Distortion in Gas Turbine Engine Components

    NASA Astrophysics Data System (ADS)

    Inozemtsev, A. A.; Dubrovskaya, A. S.; Dongauser, K. A.; Trufanov, N. A.

    2015-06-01

    In this paper the process of manufacturing multiple airfoil vanes through investment casting is considered. A mathematical model of the full contact problem is built to determine the stress-strain state in a cast during solidification. The study is carried out in a viscoelastoplastic formulation. Numerical simulation of the explored process is implemented with the ProCAST software package. The results of the simulation are compared with the real production process. By means of computer analysis, the technological process parameters are optimized in order to eliminate the defect of cast wall thickness variation.

  8. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between Otsu's thresholds calculated in two successive iterations is less than a preset threshold. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
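    A hedged NumPy sketch of the iterative triclass procedure, following the description in the abstract and using a plain histogram-based Otsu threshold inside the loop; the bin count, stopping tolerance, and toy image are assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu: pick the threshold that maximises between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:i] * centers[:i]).sum() / w0
        m1 = (w[i:] * centers[i:]).sum() / w1
        var_b = w0 * w1 * (m0 - m1) ** 2
        if var_b > best_var:
            best_var, best_t = var_b, centers[i]
    return best_t

def iterative_triclass(image, eps=1e-3, max_iter=50):
    fg = np.zeros(image.shape, dtype=bool)      # accumulated foreground
    tbd = np.ones(image.shape, dtype=bool)      # current "to-be-determined" region
    t_prev = None
    for _ in range(max_iter):
        vals = image[tbd]
        t = otsu_threshold(vals)
        mu_low, mu_high = vals[vals <= t].mean(), vals[vals > t].mean()
        fg |= tbd & (image > mu_high)                        # clearly foreground
        tbd = tbd & (image > mu_low) & (image <= mu_high)    # smaller TBD region
        if not tbd.any() or (t_prev is not None and abs(t - t_prev) < eps):
            break
        t_prev = t
    return fg

img = np.random.default_rng(0).normal(0.3, 0.1, (64, 64))
img[20:30, 20:30] += 0.5                        # a weak bright object
print(iterative_triclass(img).sum())            # pixels classified as foreground
```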

  9. Advanced non-contrasted computed tomography post-processing by CT-Calculometry (CT-CM) outperforms established predictors for the outcome of shock wave lithotripsy.

    PubMed

    Langenauer, J; Betschart, P; Hechelhammer, L; Güsewell, S; Schmid, H P; Engeler, D S; Abt, D; Zumstein, V

    2018-05-29

    The objective was to evaluate the predictive value of advanced non-contrasted computed tomography (NCCT) post-processing using novel CT-calculometry (CT-CM) parameters compared to established predictors of success of shock wave lithotripsy (SWL) for urinary calculi. NCCT post-processing was retrospectively performed in 312 patients suffering from upper tract urinary calculi who were treated by SWL. Established predictors such as skin-to-stone distance, body mass index, stone diameter or mean stone attenuation values were assessed. Precise stone size and shape metrics, 3-D greyscale measurements, and homogeneity parameters such as skewness and kurtosis were analysed using CT-CM. Predictive values for SWL outcome were analysed using logistic regression and receiver operating characteristics (ROC) statistics. The overall success rate (stone disintegration and no re-intervention needed) of SWL was 59% (184 patients). CT-CM metrics largely outperformed established predictors. According to ROC analyses, stone volume and surface area performed better than the established stone diameter, the mean 3-D attenuation value was a stronger predictor than the established mean attenuation value, and the parameters skewness and kurtosis performed better than the recently emerged variation coefficient of stone density. Moreover, prediction of SWL outcome with an 80% probability of being correct would be possible in a clearly higher number of patients (up to fivefold) using CT-CM-derived parameters. Advanced NCCT post-processing by CT-CM provides novel parameters that seem to outperform established predictors of SWL response. Implementation of these parameters into clinical routine might reduce SWL failure rates.

  10. Analysis and Synthesis of Pseudo-Periodic 1/f-Like Noise by Means of Wavelets with Applications to Digital Audio

    NASA Astrophysics Data System (ADS)

    Polotti, Pietro; Evangelista, Gianpaolo

    2001-12-01

    Voiced musical sounds have nonzero energy in sidebands of the frequency partials. Our work is based on the assumption, often experimentally verified, that the energy distribution of the sidebands is shaped as powers of the inverse of the distance from the closest partial. The power spectrum of these pseudo-periodic processes is modeled by means of a superposition of modulated 1/f components, that is, by a pseudo-periodic 1/f-like process. Due to the fundamental self-similar character of the wavelet transform, 1/f processes can be fruitfully analyzed and synthesized by means of wavelets. We obtain a set of very loosely correlated coefficients at each scale level that can be well approximated by white noise in the synthesis process. Our computational scheme is based on an orthogonal multiband filter bank and a dyadic wavelet transform per channel. The channels are tuned to the left and right sidebands of the harmonics so that the sidebands are mutually independent. The structure computes the expansion coefficients of a new orthogonal and complete set of harmonic-band wavelets. The main point of our scheme is that we need only two parameters per harmonic in order to model the stochastic fluctuations of sounds from a pure periodic behavior.

  11. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be spent on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
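    To make the computational burden concrete, the sketch below reproduces the kind of brute-force baseline the paper accelerates: a crosswind line source is decomposed into many point sources and their ground-level Gaussian-plume contributions are summed numerically. The dispersion-coefficient power laws, wind speed, and geometry are illustrative assumptions; the paper's analytical hypergeometric solutions are not reproduced here.

```python
import numpy as np

def plume_point(q, x, y, u=5.0):
    """Ground-level concentration from a ground-level point source (Gaussian plume
    with full ground reflection); sigma_y, sigma_z as simple power laws of x."""
    x = np.maximum(x, 1.0)                       # avoid the singularity at the source
    sy, sz = 0.08 * x ** 0.9, 0.06 * x ** 0.85   # illustrative dispersion coefficients
    return q / (np.pi * u * sy * sz) * np.exp(-y ** 2 / (2 * sy ** 2))

def plume_line_source(q_per_m, length, receptor, n=500):
    """Crosswind line source decomposed into n point sources and summed."""
    ys = np.linspace(-length / 2, length / 2, n)
    q = q_per_m * length / n                     # emission assigned to each point
    rx, ry = receptor
    return sum(plume_point(q, rx, ry - y0) for y0 in ys)

# Receptor 500 m downwind of the centre of a 1 km crosswind line source.
print(plume_line_source(q_per_m=0.01, length=1000.0, receptor=(500.0, 0.0)))
```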

  12. Hypermatrix scheme for finite element systems on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Voigt, S. J.

    1975-01-01

    A study is made of the adaptation of the hypermatrix (block matrix) scheme for solving large systems of finite element equations to the CDC STAR-100 computer. Discussion is focused on the organization of the hypermatrix computation using Cholesky decomposition and the mode of storage of the different submatrices to take advantage of the STAR pipeline (streaming) capability. Consideration is also given to the associated data handling problems and the means of balancing the I/O and CPU times in the solution process. Numerical examples are presented showing the anticipated gain in CPU speed over the CDC 6600 to be obtained by using the proposed algorithms on the STAR computer.

  13. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concerns today lack analyzable structures and almost always involve high level of difficulties and complexities in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexities remains to tax the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes makes the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.

  14. Artificial Intelligence and CALL.

    ERIC Educational Resources Information Center

    Underwood, John H.

    The potential application of artificial intelligence (AI) to computer-assisted language learning (CALL) is explored. Two areas of AI that hold particular interest to those who deal with language meaning--knowledge representation and expert systems, and natural-language processing--are described and examples of each are presented. AI contribution…

  15. Feature Statistics Modulate the Activation of Meaning During Spoken Word Processing.

    PubMed

    Devereux, Barry J; Taylor, Kirsten I; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K

    2016-03-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co-occurrence (correlational strength)--determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation. Copyright © 2015 The Authors. Cognitive Science published by Cognitive Science Society, Inc.

  16. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    PubMed

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.

  17. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm

    PubMed Central

    Baig, Fahd; Little, Max A.

    2016-01-01

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525

  18. High-performance parallel computing in the classroom using the public goods game as an example

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2017-07-01

    The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
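
    To give a flavour of the kind of simulation discussed above, the following is a minimal CPU-only sketch of Monte Carlo dynamics for the spatial public goods game with the Fermi imitation rule on a square lattice. The synergy factor r, the noise K, the lattice size and the number of Monte Carlo steps are illustrative choices, and nothing here reproduces the GPU implementation or the phase-transition analysis reported in the paper.

      # Spatial public goods game on an L x L lattice with periodic boundaries.
      # 1 = cooperator, 0 = defector; strategies are updated with the Fermi rule.
      import numpy as np

      L, r, K = 50, 3.8, 0.5                       # lattice size, synergy factor, noise
      rng = np.random.default_rng(1)
      S = rng.integers(0, 2, size=(L, L))

      def group(i, j):
          """The five members of the group centred on site (i, j)."""
          return [(i, j), ((i + 1) % L, j), ((i - 1) % L, j),
                  (i, (j + 1) % L), (i, (j - 1) % L)]

      def payoff(i, j):
          """Total payoff of (i, j): its share from the five groups it belongs to."""
          total = 0.0
          for centre in group(i, j):
              members = group(*centre)
              pot = r * sum(S[m] for m in members)     # cooperators contribute 1 each
              total += pot / len(members) - S[i, j]    # share minus own contribution
          return total

      for _ in range(20 * L * L):                      # elementary Monte Carlo steps
          i, j = rng.integers(0, L, size=2)
          ni, nj = group(i, j)[1 + rng.integers(0, 4)] # a random nearest neighbour
          if S[i, j] != S[ni, nj]:
              dp = payoff(i, j) - payoff(ni, nj)
              if rng.random() < 1.0 / (1.0 + np.exp(dp / K)):   # Fermi imitation rule
                  S[i, j] = S[ni, nj]

      print("cooperator fraction:", S.mean())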

  19. Signal design study for shuttle/TDRSS Ku-band uplink

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The adequacy of the signal design approach chosen for the TDRSS/orbiter uplink was evaluated. Critical functions and/or components associated with the baseline design were identified, and design alternatives were developed for those areas considered high risk. A detailed set of RF and signal processing performance specifications for the orbiter hardware associated with the TDRSS/orbiter Ku-band uplink was analyzed. The performance of a detailed design of the PN despreader, the PSK carrier synchronization loop, and the symbol synchronizer is identified. The performance of the downlink signal was studied by means of computer simulation to obtain a realistic determination of bit-error-rate degradations. The three-channel PM downlink signal was detailed by means of analysis and computer simulation.

  20. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performance of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.

  1. Implementing vertex dynamics models of cell populations in biology within a consistent computational framework.

    PubMed

    Fletcher, Alexander G; Osborne, James M; Maini, Philip K; Gavaghan, David J

    2013-11-01

    The dynamic behaviour of epithelial cell sheets plays a central role during development, growth, disease and wound healing. These processes occur as a result of cell adhesion, migration, division, differentiation and death, and involve multiple processes acting at the cellular and molecular level. Computational models offer a useful means by which to investigate and test hypotheses about these processes, and have played a key role in the study of cell-cell interactions. However, the necessarily complex nature of such models means that it is difficult to make accurate comparison between different models, since it is often impossible to distinguish between differences in behaviour that are due to the underlying model assumptions, and those due to differences in the in silico implementation of the model. In this work, an approach is described for the implementation of vertex dynamics models, a discrete approach that represents each cell by a polygon (or polyhedron) whose vertices may move in response to forces. The implementation is undertaken in a consistent manner within a single open source computational framework, Chaste, which comprises fully tested, industrial-grade software that has been developed using an agile approach. This framework allows one to easily change assumptions regarding force generation and cell rearrangement processes within these models. The versatility and generality of this framework is illustrated using a number of biological examples. In each case we provide full details of all technical aspects of our model implementations, and in some cases provide extensions to make the models more generally applicable. Copyright © 2013 Elsevier Ltd. All rights reserved.
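
    As a concrete, if highly simplified, illustration of what a vertex dynamics model computes, the sketch below relaxes a single polygonal cell under an overdamped force law derived from a commonly used vertex-model energy with area elasticity and perimeter contractility. The energy terms, parameter values and time step are illustrative assumptions and do not correspond to the particular force laws implemented in Chaste.

      # A minimal overdamped vertex-dynamics update for one polygonal cell.
      # Energy: 0.5*K_AREA*(A - A0)^2 + 0.5*GAMMA*P^2 (illustrative choice).
      import numpy as np

      K_AREA, A0, GAMMA, DT, DRAG = 1.0, np.pi, 0.01, 0.01, 1.0

      def polygon_area(v):
          x, y = v[:, 0], v[:, 1]
          return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

      def perimeter(v):
          return np.sum(np.linalg.norm(v - np.roll(v, -1, axis=0), axis=1))

      def energy(v):
          return 0.5 * K_AREA * (polygon_area(v) - A0) ** 2 + 0.5 * GAMMA * perimeter(v) ** 2

      def forces(v, eps=1e-6):
          """Numerical gradient of the energy: F = -dE/dr for every vertex."""
          f = np.zeros_like(v)
          for i in range(v.shape[0]):
              for d in range(2):
                  vp, vm = v.copy(), v.copy()
                  vp[i, d] += eps
                  vm[i, d] -= eps
                  f[i, d] = -(energy(vp) - energy(vm)) / (2 * eps)
          return f

      # start from an oversized regular hexagon and let the forces relax it
      theta = np.linspace(0, 2 * np.pi, 6, endpoint=False)
      verts = np.c_[np.cos(theta), np.sin(theta)] * 1.4
      for step in range(2000):
          verts += DT * forces(verts) / DRAG         # overdamped: drag * dr/dt = F
      print("relaxed area:", polygon_area(verts), "target area:", A0)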

  2. Syllabus Computer in Astronomy

    NASA Astrophysics Data System (ADS)

    Hojaev, Alisher S.

    2015-08-01

    One of the most important and topical subjects and training courses in the curricula for undergraduate students at the National University of Uzbekistan is ‘Computer Methods in Astronomy’. It covers two semesters and includes both lecture and practice classes. Based on long-term experience, we prepared a tutorial for students which contains a description of modern computer applications in astronomy. Briefly, the main directions of computer application in the field of astronomy are as follows: 1) automating the process of observation, data acquisition and processing; 2) creating and storing databases (the results of observations, experiments and theoretical calculations), their generalization, classification and cataloging, and working with large databases; 3) solving theoretical problems (physical and mathematical modeling of astronomical objects and phenomena, derivation of model parameters to obtain a solution of the corresponding equations, numerical simulations) and creating the appropriate software; 4) utilization in the educational process (e-textbooks, presentations, virtual labs, remote education, testing), amateur astronomy and popularization of the science; 5) use as a means of communication and data transfer, research result presentation and dissemination (web journals), and the creation of a virtual information system (local and global computer networks). During the classes special attention is paid to practical training and to the individual, independent work of students.
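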

  3. A strand graph semantics for DNA-based computation

    PubMed Central

    Petersen, Rasmus L.; Lakin, Matthew R.; Phillips, Andrew

    2015-01-01

    DNA nanotechnology is a promising approach for engineering computation at the nanoscale, with potential applications in biofabrication and intelligent nanomedicine. DNA strand displacement is a general strategy for implementing a broad range of nanoscale computations, including any computation that can be expressed as a chemical reaction network. Modelling and analysis of DNA strand displacement systems is an important part of the design process, prior to experimental realisation. As experimental techniques improve, it is important for modelling languages to keep pace with the complexity of structures that can be realised experimentally. In this paper we present a process calculus for modelling DNA strand displacement computations involving rich secondary structures, including DNA branches and loops. We prove that our calculus is also sufficiently expressive to model previous work on non-branching structures, and propose a mapping from our calculus to a canonical strand graph representation, in which vertices represent DNA strands, ordered sites represent domains, and edges between sites represent bonds between domains. We define interactions between strands by means of strand graph rewriting, and prove the correspondence between the process calculus and strand graph behaviours. Finally, we propose a mapping from strand graphs to an efficient implementation, which we use to perform modelling and simulation of DNA strand displacement systems with rich secondary structure. PMID:27293306
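
    The strand-graph picture can be illustrated with a toy data structure: strands as ordered lists of domain names and bonds as edges between (strand, site) pairs. The single 'bind' rule below, which pairs any free domain with a free complementary (starred) domain, is purely illustrative and is not the rewrite system or the process calculus defined in the paper.

      # A toy rendering of the strand-graph idea: vertices are strands (ordered
      # lists of domain names), and bonds are edges between (strand, site) pairs.
      strands = {0: ["t", "x", "y"], 1: ["t*", "x*"]}   # "*" marks the complement
      bonds = set()                                      # {((s1, i1), (s2, i2)), ...}

      def is_free(site):
          return all(site not in bond for bond in bonds)

      def complementary(a, b):
          return a == b + "*" or b == a + "*"

      def bind_first_match():
          """Apply one 'bind' rewrite: bond the first free complementary pair found."""
          for s1, doms1 in strands.items():
              for i1, d1 in enumerate(doms1):
                  for s2, doms2 in strands.items():
                      for i2, d2 in enumerate(doms2):
                          if (s1, i1) < (s2, i2) and is_free((s1, i1)) \
                                  and is_free((s2, i2)) and complementary(d1, d2):
                              bonds.add(((s1, i1), (s2, i2)))
                              return True
          return False

      while bind_first_match():
          pass
      print(bonds)   # e.g. {((0, 0), (1, 0)), ((0, 1), (1, 1))}: t-t* and x-x* bound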

  4. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The assumed error model is a Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients from each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
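
    A rough sketch of this codebook construction is given below: Laplacian-distributed vectors are drawn from uniform random numbers and then conditioned by weighting their DCT coefficients. The weight matrix W here is a simple low-pass placeholder rather than the HVS-optimal weights used by the authors, and the parameter lam is used as the Laplacian scale standing in for the value estimated from the input image.

      # Illustrative MVQ-style codebook: Laplacian noise vectors, perceptually
      # conditioned by filtering their DCT coefficients with a weight matrix.
      import numpy as np
      from scipy.fft import dctn, idctn

      def laplacian_noise(lam, size, rng):
          """Inverse-CDF sampling of a zero-mean Laplacian with scale lam."""
          u = rng.random(size) - 0.5
          return -lam * np.sign(u) * np.log(1.0 - 2.0 * np.abs(u))

      def build_codebook(lam, n_vectors=256, block=8, seed=0):
          rng = np.random.default_rng(seed)
          # placeholder perceptual weights: emphasise low spatial frequencies
          u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
          W = 1.0 / (1.0 + u + v)
          codebook = []
          for _ in range(n_vectors):
              vec = laplacian_noise(lam, (block, block), rng)
              coeffs = dctn(vec, norm="ortho") * W          # filter DCT coefficients
              codebook.append(idctn(coeffs, norm="ortho"))  # back to the pixel domain
          return np.stack(codebook)

      cb = build_codebook(lam=4.0)
      print(cb.shape)   # (256, 8, 8)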

  5. Photogrammetry of the Viking-Lander imagery.

    USGS Publications Warehouse

    Wu, S.S.C.; Schafer, F.J.

    1982-01-01

    We have solved the problem of photogrammetric mapping from the Viking Lander photography in two ways: 1) by converting the azimuth and elevation scanning imagery to the equivalent of a frame picture by means of computerized rectification; and 2) by interfacing a high-speed, general-purpose computer to the AS-11A analytical plotter so that all computations of corrections can be performed in real time during the process of model orientation and map compilation. Examples are presented of photographs and maps of Earth and Mars. -from Authors

  6. European Workshop Industrial Computer Science Systems approach to design for safety

    NASA Technical Reports Server (NTRS)

    Zalewski, Janusz

    1992-01-01

    This paper presents guidelines on designing systems for safety, developed by the Technical Committee 7 on Reliability and Safety of the European Workshop on Industrial Computer Systems. The focus is on complementing the traditional development process by adding the following four steps: (1) overall safety analysis; (2) analysis of the functional specifications; (3) designing for safety; (4) validation of design. Quantitative assessment of safety is possible by means of a modular questionnaire covering various aspects of the major stages of system development.

  7. Process Defects in Composites.

    DTIC Science & Technology

    1995-01-30

    mean velocity, U, a high kinematic viscosity, ν, and a small diameter of the fibers, D, lead to a very small Reynolds number Re = UD/ν << 1, where ... (partial credit to ARO). 9. D. Krajcinovic and S. Mastilovic, "Damage Evolution and Failure Modes", in: Proc. of the Int. Conf. on Computational... "Computer Simulation of a Model for Irreversible Gelation", Journal of Physics A, Vol. 16, pp. 1221-1239. Kuksenko, V. S. and Tamuzs, V. P., 1981

  8. A review of combined experimental and computational procedures for assessing biopolymer structure-process-property relationships.

    PubMed

    Gronau, Greta; Krishnaji, Sreevidhya T; Kinahan, Michelle E; Giesa, Tristan; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J

    2012-11-01

    Tailored biomaterials with tunable functional properties are desirable for many applications ranging from drug delivery to regenerative medicine. To improve the predictability of biopolymer materials functionality, multiple design parameters need to be considered, along with appropriate models. In this article we review the state of the art of synthesis and processing related to the design of biopolymers, with an emphasis on the integration of bottom-up computational modeling in the design process. We consider three prominent examples of well-studied biopolymer materials - elastin, silk, and collagen - and assess their hierarchical structure, intriguing functional properties and categorize existing approaches to study these materials. We find that an integrated design approach in which both experiments and computational modeling are used has rarely been applied for these materials due to difficulties in relating insights gained on different length- and time-scales. In this context, multiscale engineering offers a powerful means to accelerate the biomaterials design process for the development of tailored materials that suit the needs posed by the various applications. The combined use of experimental and computational tools has a very broad applicability not only in the field of biopolymers, but can be exploited to tailor the properties of other polymers and composite materials in general. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Garment Counting in a Textile Warehouse by Means of a Laser Imaging System

    PubMed Central

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-01-01

    Textile logistic warehouses are highly automated mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low cost and small size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistors sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested on two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760

  10. Garment counting in a textile warehouse by means of a laser imaging system.

    PubMed

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-04-29

    Textile logistic warehouses are highly automated mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low cost and small size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistors sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested on two logistic warehouses with a mean error in the estimated number of hangers of 0.13%.
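
    The counting step can be reduced, for illustration, to thresholding a one-dimensional intensity profile and counting the runs of above-threshold samples as hanger hooks. The synthetic profile, the threshold and the helper name count_hooks below are made-up assumptions; the actual system processes a two-dimensional laser image with computer vision techniques.

      # Count hooks as rising edges of a thresholded 1D sensor profile.
      import numpy as np

      def count_hooks(profile, threshold):
          above = profile > threshold
          # a hook starts wherever the signal rises above the threshold
          return np.count_nonzero(above[1:] & ~above[:-1]) + int(above[0])

      rng = np.random.default_rng(42)
      profile = rng.normal(0.1, 0.02, 1000)          # background readings
      for centre in range(50, 1000, 100):            # ten simulated hooks
          profile[centre - 3:centre + 3] += 0.8
      print(count_hooks(profile, threshold=0.5))     # -> 10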

  11. Numerical image manipulation and display in solar astronomy

    NASA Technical Reports Server (NTRS)

    Levine, R. H.; Flagg, J. C.

    1977-01-01

    The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Some of the processing modes are illustrated by some image sequences from solar observations.

  12. 14 CFR 1240.102 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Contributions Board. (d) Commercial quality refers to computer software that is not in an experimental or beta..., engineering or scientific concept, idea, design, process, or product. (h) Innovator means any person listed as..., machine, manufacture, design, or composition of matter, or any new and useful improvement thereof, or any...

  13. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    PubMed

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.

  14. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance

    PubMed Central

    Poplová, Michaela; Sovka, Pavel

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. PMID:29216207
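
    The Fano factor diagnostic mentioned in both versions of this abstract can be sketched in a few lines: it is the variance-to-mean ratio of photon counts aggregated into windows, equal to 1 for an ideal stationary Poisson process. This is only the analysis step, not the authors' pre-processing method, and the rates and window sizes below are illustrative.

      # Windowed Fano factor: variance/mean of counts summed over windows.
      import numpy as np

      def fano_factor(counts, window):
          n = (len(counts) // window) * window
          summed = counts[:n].reshape(-1, window).sum(axis=1)
          return summed.var(ddof=1) / summed.mean()

      rng = np.random.default_rng(3)
      stationary = rng.poisson(5.0, size=10_000)
      trended = rng.poisson(np.linspace(2.0, 8.0, 10_000))   # nonstationary trend
      for w in (10, 100, 1000):
          print(w, fano_factor(stationary, w), fano_factor(trended, w))
      # the trend inflates the Fano factor at large windows even though each
      # sample is individually Poisson -- the obstacle the pre-processing targets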

  15. Computational thinking in life science education.

    PubMed

    Rubinstein, Amir; Chor, Benny

    2014-11-01

    We join the increasing call to take computational education of life science students a step further, beyond teaching mere programming and employing existing software tools. We describe a new course, focusing on enriching the curriculum of life science students with abstract, algorithmic, and logical thinking, and exposing them to the computational "culture." The design, structure, and content of our course are influenced by recent efforts in this area, collaborations with life scientists, and our own instructional experience. Specifically, we suggest that an effective course of this nature should: (1) devote time to explicitly reflect upon computational thinking processes, resisting the temptation to drift to purely practical instruction, (2) focus on discrete notions, rather than on continuous ones, and (3) have basic programming as a prerequisite, so students need not be preoccupied with elementary programming issues. We strongly recommend that the mere use of existing bioinformatics tools and packages should not replace hands-on programming. Yet, we suggest that programming will mostly serve as a means to practice computational thinking processes. This paper deals with the challenges and considerations of such computational education for life science students. It also describes a concrete implementation of the course and encourages its use by others.

  16. An almost general theory of mean size perception.

    PubMed

    Allik, Jüri; Toom, Mai; Raidvee, Aire; Averin, Kristiina; Kreegipuu, Kairi

    2013-05-03

    A general explanation for the observer's ability to judge the mean size of simple geometrical figures, such as circles, was advanced. Results indicated that, contrary to what would be predicted by statistical averaging, the precision of mean size perception decreases with the number of judged elements. Since mean size discrimination was insensitive to how total size differences were distributed among individual elements, this suggests that the observer has a limited cognitive access to the size of individual elements pooled together in a compulsory manner before size information reaches awareness. Confirming the associative law of addition means, observers are indeed sensitive to the mean, not the sizes of individual elements. All existing data can be explained by an almost general theory, namely, the Noise and Selection (N&S) Theory, formulated in exact quantitative terms, implementing two familiar psychophysical principles: the size of an element cannot be measured with absolute accuracy and only a limited number of elements can be taken into account in the computation of the average size. It was concluded that the computation of ensemble characteristics is not necessarily a tool for surpassing the capacity limitations of perceptual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
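
    A toy simulation of the two psychophysical principles invoked above (noisy encoding of individual sizes and a limited number of elements entering the average) is sketched below. The noise level and the capacity limit are illustrative values, not the parameters of the authors' fitted Noise and Selection model.

      # Judged mean = average of a noisy, capacity-limited sample of the elements.
      import numpy as np

      def judged_mean(sizes, noise_sd=0.1, capacity=4, rng=None):
          rng = rng or np.random.default_rng()
          sample = rng.choice(sizes, size=min(capacity, len(sizes)), replace=False)
          return np.mean(sample + rng.normal(0.0, noise_sd, size=sample.shape))

      rng = np.random.default_rng(7)
      for n_elements in (2, 4, 8, 16):
          sizes = rng.uniform(0.8, 1.2, size=n_elements)
          errors = [judged_mean(sizes, rng=rng) - sizes.mean() for _ in range(5000)]
          print(n_elements, "elements -> judgement SD", round(float(np.std(errors)), 4))
      # with a fixed capacity the judgement SD does not shrink as 1/sqrt(N),
      # mirroring the reported failure of pure statistical averaging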

  17. Unexpected formation of 2,1-benzisothiazol-3-ones from oxathiolano ketenimines: a rare tandem process.

    PubMed

    Alajarin, Mateo; Bonillo, Baltasar; Sanchez-Andrada, Pilar; Vidal, Angel; Bautista, Delia

    2009-03-19

    A rare one-pot reaction, a tandem [1,5]-H shift/1,5 electrocyclization/[3 + 2] cycloreversion process, leading from N-[2-(1,3-oxathiolan-2-yl)]phenyl ketenimines to 1-(beta-styryl)-2,1-benzisothiazol-3-ones and ethylene, is disclosed and mechanistically unraveled by means of a computational DFT study. The two latter stages of the tandem process are calculated to occur in a single mechanistic step via a transition structure of pseudopericyclic characteristics.

  18. Stimulus Sensitivity of a Spiking Neural Network Model

    NASA Astrophysics Data System (ADS)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet almost critical for a range of biologically relevant parameters.
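
    For orientation, the sketch below simulates a basic univariate linear Hawkes process (constant baseline mu and exponential kernel alpha*exp(-beta*t)) by Ogata's thinning algorithm. The age-dependent, high-dimensional networks analysed in the paper generalise this elementary model, and all parameter values here are illustrative.

      # Univariate linear Hawkes process simulated by Ogata's thinning algorithm.
      import math
      import random

      def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
          random.seed(seed)
          events, t = [], 0.0
          while t < t_max:
              # upper bound on the intensity (it only decays between events)
              lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
              t += random.expovariate(lam_bar)            # candidate next event
              if t >= t_max:
                  break
              lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
              if random.random() <= lam_t / lam_bar:      # accept with prob lam/lam_bar
                  events.append(t)
          return events

      spikes = simulate_hawkes(mu=1.0, alpha=0.8, beta=2.0, t_max=100.0)
      print(len(spikes), "events; stationary mean rate approx.",
            round(1.0 / (1.0 - 0.8 / 2.0), 2))            # mu / (1 - alpha/beta)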

  19. Rapid Generation of Conceptual and Preliminary Design Aerodynamic Data by a Computer Aided Process

    DTIC Science & Technology

    2000-06-01

    methodologies, often blended with sensible ’guess-estimated’ values, and peculiar requirements such as flexibility and robustness ... from the ’raw’ aerodynamic data is a process which certainly requires appropriate blending interpolation between the given data and generally yields ... component patches are described by defining the evolution of a conic curve between two opposite boundary curves by means of blending functions

  20. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    inverse transform process. 2. BACKGROUND The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared

  1. A review of computer aided interpretation technology for the evaluation of radiographs of aluminum welds

    NASA Technical Reports Server (NTRS)

    Lloyd, J. F., Sr.

    1987-01-01

    Industrial radiography is a well established, reliable means of providing nondestructive structural integrity information. The majority of industrial radiographs are interpreted by trained human eyes using transmitted light and various visual aids. Hundreds of miles of radiographic information are evaluated, documented and archived annually. In many instances, there are serious considerations in terms of interpreter fatigue, subjectivity and limited archival space. Quite often it is difficult to quickly retrieve radiographic information for further analysis or investigation. Methods of improving the quality and efficiency of the radiographic process are being explored, developed and incorporated whenever feasible. High resolution cameras, digital image processing, and mass digital data storage offer interesting possibilities for improving the industrial radiographic process. A review is presented of computer aided radiographic interpretation technology in terms of how it could be used to enhance the radiographic interpretation process in evaluating radiographs of aluminum welds.

  2. On Writing and Reading Artistic Computational Ecosystems.

    PubMed

    Antunes, Rui Filipe; Leymarie, Frederic Fol; Latham, William

    2015-01-01

    We study the use of the generative systems known as computational ecosystems to convey artistic and narrative aims. These are virtual worlds running on computers, composed of agents that trade units of energy and emulate cycles of life and behaviors adapted from biological life forms. In this article we propose a conceptual framework in order to understand these systems, which are involved in processes of authorship and interpretation that this investigation analyzes in order to identify critical instruments for artistic exploration. We formulate a model of narrative that we call system stories (after Mitchell Whitelaw), characterized by the dynamic network of material and conceptual processes that define these artefacts. They account for narrative constellations with multiple agencies from which meaning and messages emerge. Finally, we present three case studies to explore the potential of this model within an artistic and generative domain, arguing that this understanding expands and enriches the palette of the language of these systems.

  3. A framework for the computer-aided planning and optimisation of manufacturing processes for components with functional graded properties

    NASA Astrophysics Data System (ADS)

    Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.

    2014-05-01

    In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules - the "Component Description", the "Expert System" for the synthesis of several process chains and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model by a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling, and forming processes. The dependencies between the component and the applied manufacturing processes as well as between the processes themselves need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as the representative value. Finally, the process chain which is capable of manufacturing a functionally graded component in an optimal way with regard to the property distributions of the component description is presented by means of a dedicated specification technique.

  4. MapReduce SVM Game

    DOE PAGES

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; ...

    2015-08-10

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches, and requires less stringent data passing constrained problem decompositions. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel, alternative new perspective to addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
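
    The partition-train-recombine pattern described here can be schematised as follows: a 'map' step fits an independent model on each data partition and a 'reduce' step combines the per-partition models by majority vote. The nearest-centroid learner below stands in for the authors' SVM Game classifier purely for illustration, and the data and partition count are made up.

      # Schematic map/reduce decomposition: train per partition, combine by voting.
      import numpy as np

      def map_train(partition):
          X, y = partition
          # nearest-centroid "model": one mean vector per class
          return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

      def reduce_predict(models, x):
          votes = []
          for model in models:                      # each mapper's model votes
              classes = list(model)
              dists = [np.linalg.norm(x - model[c]) for c in classes]
              votes.append(classes[int(np.argmin(dists))])
          return max(set(votes), key=votes.count)   # majority vote

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
      y = np.repeat([0, 1], 200)
      order = rng.permutation(len(y))
      partitions = [(X[idx], y[idx]) for idx in np.array_split(order, 4)]
      models = [map_train(p) for p in partitions]   # independent "map" tasks
      print(reduce_predict(models, np.array([3.5, 3.5])))   # -> 1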

  5. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms

    PubMed Central

    He, Li; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions. PMID:29123546

  6. MapReduce SVM Game

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches, and requires less stringent data passing constrained problem decompositions. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel, alternative new perspective to addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.

  7. Relationship of Selected Abilities to Problem Solving Performance.

    ERIC Educational Resources Information Center

    Harmel, Sarah Jane

    This study investigated five ability tests related to the water-jug problem. Previous analyses identified two processes used during solution: means-ends analysis and memory of visited states. Subjects were 240 undergraduate psychology students. A real-time computer system presented the problem and recorded responses. Ability tests were paper and…

  8. A computerized system for portrayal of landscape alterations

    Treesearch

    A. E. Stevenson; J. A. Conley; J. B. Carey

    1979-01-01

    The growing public awareness of and participation in the visual resource decision process has stimulated interest to find improved means of accurately and realistically displaying proposed alterations. The traditional artist renderings often lack the accuracy and objectivity needed for critical decisions. One approach, using computer graphics, led to the MOSAIC system...

  9. Standards for Teleprocessing; New Approaches for New Needs.

    ERIC Educational Resources Information Center

    Istvan, Edwin J.

    The rapidly expanding use of teleprocessing, which is taken to mean automated data processing (ADP) which makes direct use of data transmission via switched or long distance non-switched telecommunications facilities, has highlighted the urgent need for the development of standards for data communications and the computer-communications interface.…

  10. The Intellectual Assembly Line is Already Here

    ERIC Educational Resources Information Center

    Vanderburg, Willem H.

    2004-01-01

    The universal attempt to link computers by means of business process reengineering, enterprise integration, and the management of technology is creating large systems that structure and control the flows of information within institutions. Human work associated with these systems must be reorganized in the image of these technologies. The…

  11. English Complex Verb Constructions: Identification and Inference

    ERIC Educational Resources Information Center

    Tu, Yuancheng

    2012-01-01

    The fundamental problem faced by automatic text understanding in Natural Language Processing (NLP) is to identify semantically related pieces of text and integrate them together to compute the meaning of the whole text. However, the principle of compositionality runs into trouble very quickly when real language is examined with its frequent…

  12. Learning Vocabulary via Computer-Assisted Scaffolding for Text Processing

    ERIC Educational Resources Information Center

    Li, Jia

    2010-01-01

    A substantial amount of literature regarding first language (L1) acquisition has shown that reading for meaning significantly contributes to vocabulary expansion and strongly relates to overall academic success. Research in the English as a Second Language (ESL) context, however, has presented mixed results, in particular for recent immigrant…

  13. Photophysical and photochemical insights into the photodegradation of sulfapyridine in water: A joint experimental and theoretical study.

    PubMed

    Zhang, Heming; Wei, Xiaoxuan; Song, Xuedan; Shah, Shaheen; Chen, Jingwen; Liu, Jianhui; Hao, Ce; Chen, Zhongfang

    2018-01-01

    For organic pollutants, photodegradation, as a major abiotic elimination process of great importance to the environmental fate and risk, involves rather complicated physical and chemical processes of excited molecules. Herein, we systematically studied the photophysical and photochemical processes of a widely used antibiotic, namely sulfapyridine. By means of density functional theory (DFT) computations, we examined the rate constants and the competition of both photophysical and photochemical processes, elucidated the photochemical reaction mechanism, calculated the reaction quantum yield (Φ) based on both photophysical and photochemical processes, and subsequently estimated the photodegradation rate constant. We further conducted photolysis experiments to measure the photodegradation rate constant of sulfapyridine. Our computations showed that sulfapyridine in the lowest excited singlet state (S1) mainly undergoes internal conversion to its ground state, and only with difficulty transfers to the lowest excited triplet state (T1) via intersystem crossing (ISC) or emits fluorescence. In the T1 state, compared with phosphorescence emission and ISC, a chemical reaction is much easier to initiate. Encouragingly, the theoretically predicted photodegradation rate constant is close to the experimentally observed value, indicating that quantum chemistry computation is powerful enough to study photodegradation involving ultra-fast photophysical and photochemical processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    NASA Astrophysics Data System (ADS)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also allows estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain which allows the unsupervised processing of large SAR data volumes, from the raw data (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives, which have been acquired over a large area of Southern California (US) that extends for about 90,000 km2. Such an input dataset has been processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which makes it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting the DInSAR analyses to be extended to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.
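
    The decomposition into vertical and East-West components mentioned above reduces, per pixel, to a small least-squares problem: each line-of-sight (LOS) measurement is the projection of the displacement vector onto that geometry's LOS unit vector (near-polar SAR orbits are only weakly sensitive to north-south motion, which is why that component is usually neglected). The unit-vector values below are illustrative numbers, not the actual ENVISAT ascending/descending geometry.

      # Recover (east, up) displacement from ascending + descending LOS measurements.
      import numpy as np

      # rows: (east, up) components of the LOS unit vector for each geometry
      A = np.array([[-0.38, 0.92],    # ascending  (illustrative)
                    [ 0.38, 0.92]])   # descending (illustrative)

      d_true = np.array([0.010, -0.004])   # 1 cm eastward, 4 mm subsidence
      d_los = A @ d_true                   # what each geometry would measure

      east, up = np.linalg.lstsq(A, d_los, rcond=None)[0]
      print(f"recovered east = {east:.4f} m, up = {up:.4f} m")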

  15. CSM research: Methods and application studies

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    1989-01-01

    Computational mechanics is that discipline of applied science and engineering devoted to the study of physical phenomena by means of computational methods based on mathematical modeling and simulation, utilizing digital computers. The discipline combines theoretical and applied mechanics, approximation theory, numerical analysis, and computer science. Computational mechanics has had a major impact on engineering analysis and design. When applied to structural mechanics, the discipline is referred to herein as computational structural mechanics. Complex structures being considered by NASA for the 1990's include composite primary aircraft structures and the space station. These structures will be much more difficult to analyze than today's structures and necessitate a major upgrade in computerized structural analysis technology. NASA has initiated a research activity in structural analysis called Computational Structural Mechanics (CSM). The broad objective of the CSM activity is to develop advanced structural analysis technology that will exploit modern and emerging computers, such as those with vector and/or parallel processing capabilities. Here, the current research directions for the Methods and Application Studies Team of the Langley CSM activity are described.

  16. Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)

    NASA Technical Reports Server (NTRS)

    Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV

    1988-01-01

    The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.

  17. SAR processing in the cloud for oil detection in the Arctic

    NASA Astrophysics Data System (ADS)

    Garron, J.; Stoner, C.; Meyer, F. J.

    2016-12-01

    A new world of opportunity is being thawed from the ice of the Arctic, driven by decreased persistent Arctic sea-ice cover and increases in shipping, tourism, and natural resource development. Tools that can automatically monitor key sea ice characteristics and potential oil spills are essential for safe passage in these changing waters. Synthetic aperture radar (SAR) data can be used to discriminate sea ice types and oil on the ocean surface and also for feature tracking. Additionally, SAR can image the earth through the night and most weather conditions. SAR data is volumetrically large and requires significant computing power to manipulate. Algorithms designed to identify key environmental features, like oil spills, in SAR imagery require secondary processing and are computationally intensive, which can functionally limit their application in a real-time setting. Cloud processing is designed to manage big data and big data processing jobs by means of small cycles of off-site computations, eliminating up-front hardware costs. Pairing SAR data with cloud processing has allowed us to create and solidify a processing pipeline for SAR data products in the cloud to compare operational algorithms' efficiency and effectiveness when run using an Alaska Satellite Facility (ASF) defined Amazon Machine Image (AMI). The products created from this secondary processing were compared to determine which algorithm was most accurate in Arctic feature identification, and what operational conditions were required to produce the results on the ASF-defined AMI. Results will be used to inform a series of recommendations to oil-spill response data managers and SAR users interested in expanding their analytical computing power.

  18. Gate sequence for continuous variable one-way quantum computation

    PubMed Central

    Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi

    2013-01-01

    Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented by sufficiently long single-mode and two-mode gate sequences. However, continuous variable gate sequences have not been realized so far due to an absence of cluster states larger than four submodes. Here we present the first continuous variable gate sequence consisting of a single-mode squeezing gate and a two-mode controlled-phase gate based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.

  19. Eye/Brain/Task Testbed And Software

    NASA Technical Reports Server (NTRS)

    Janiszewski, Thomas; Mainland, Nora; Roden, Joseph C.; Rothenheber, Edward H.; Ryan, Arthur M.; Stokes, James M.

    1994-01-01

    Eye/brain/task (EBT) testbed records electroencephalograms, movements of eyes, and structures of tasks to provide comprehensive data on neurophysiological experiments. Intended to serve continuing effort to develop means for interactions between human brain waves and computers. Software library associated with testbed provides capabilities to recall collected data, to process data on movements of eyes, to correlate eye-movement data with electroencephalographic data, and to present data graphically. Cognitive processes investigated in ways not previously possible.

  20. The absorption of energetic electrons by molecular hydrogen gas

    NASA Technical Reports Server (NTRS)

    Cravens, T. E.; Victor, G. A.; Dalgarno, A.

    1975-01-01

    The processes by which energetic electrons lose energy in a weakly ionized gas of molecular hydrogen are analyzed, and calculations are carried out taking into account the discrete nature of the excitation processes. The excitation, ionization, and heating efficiencies are computed for electrons with energies up to 100 eV absorbed in a gas with fractional ionizations up to 0.01, and the mean energy per pair of neutral hydrogen atoms is calculated.

  1. Availability and mean time between failures of redundant systems with random maintenance of subsystems

    NASA Technical Reports Server (NTRS)

    Schneeweiss, W.

    1977-01-01

    It is shown how the availability and MTBF (Mean Time Between Failures) of a redundant system with subsystems maintained at the points of so-called stationary renewal processes can be determined from the distributions of the intervals between maintenance actions and of the failure-free operating intervals of the subsystems. The results make it possible, for example, to determine the frequency and duration of hidden failure states in computers which are incidentally corrected during the repair of observed failures.
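
    For orientation, the familiar steady-state special case of the quantities discussed above can be written as follows, with MTTF the mean up time and MTTR the mean time to repair; the paper's renewal-theoretic treatment generalises this to random maintenance of the subsystems.

      % steady-state (limiting) availability in its textbook form
      A_{\infty} \;=\; \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}},
      \qquad
      1 - A_{\infty} \;=\; \frac{\mathrm{MTTR}}{\mathrm{MTTF} + \mathrm{MTTR}}.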

  2. Fine grained event processing on HPCs with the ATLAS Yoda system

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre

    2015-12-01

    High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
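
    A bare-bones version of the master-client, event-level work assignment described above can be written with mpi4py as below; the real Yoda system adds PanDA/Event Service integration, output streaming and fault handling on top of this pattern. The event count, the messages and the stand-in 'processing' are all illustrative.

      # Minimal MPI master-client event dispatcher (run e.g. with mpirun -n 4 python ...).
      from mpi4py import MPI

      comm, rank = MPI.COMM_WORLD, MPI.COMM_WORLD.Get_rank()
      N_EVENTS, STOP = 1000, -1

      if rank == 0:                                   # master: hand out event indices
          status = MPI.Status()
          next_event, active = 0, comm.Get_size() - 1
          while active > 0:
              comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
              worker = status.Get_source()
              if next_event < N_EVENTS:
                  comm.send(next_event, dest=worker)  # assign one event at a time
                  next_event += 1
              else:
                  comm.send(STOP, dest=worker)        # no work left: release worker
                  active -= 1
      else:                                           # client: pull, process, repeat
          while True:
              comm.send(None, dest=0)                 # "ready for work"
              event = comm.recv(source=0)
              if event == STOP:
                  break
              _ = sum(range(event % 100))             # stand-in for event processing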

  3. [Computer-assisted analysis of the results of training in internal medicine].

    PubMed

    Vrbová, H; Spunda, M

    1991-06-01

    Analysis of the results of teaching clinical disciplines has, in the long run, an impact on the standard and value of medical care. It requires the processing of quantitative and qualitative data, so the selection of the indicators to be followed and of the procedures used to process them is of fundamental importance. The investigation presented here is an example of how computer techniques can be used to process the results of an effectiveness analysis in teaching internal medicine. As an indicator of effectiveness the authors selected the percentage of students who had an opportunity, during the given period of their studies, to observe a certain pathological condition; data were collected by means of a questionnaire survey. The system makes it possible to differentiate the students' experience (whether the student examined the patient himself or whether the patient was only demonstrated) and the place of observation (at the university teaching hospital or at a regional non-teaching hospital attachment). It also permits sub-groups of respondents to be formed, combined as desired, and compared. The described computer programme support comprises the primary processing of the questionnaire survey output: the questionnaires are transformed and stored, by groups of respondents, in data files of a suitable format (programme SDFORM), and the results are processed and presented as an output listing or interactively on the display (programme SDRESULT). Using these programmes, the authors processed the results of a survey made among students during and after completion of their studies for a series of 70 recommended pathological conditions. As an example, the authors compare the results of observations of 20 selected pathological conditions important for diagnosis and therapy in primary care in the final stage of the medical course in 1981 and 1985.

  4. The design of an m-Health monitoring system based on a cloud computing platform

    NASA Astrophysics Data System (ADS)

    Xu, Boyi; Xu, Lida; Cai, Hongming; Jiang, Lihong; Luo, Yang; Gu, Yizhi

    2017-01-01

    Compared to traditional medical services provided within hospitals, m-Health monitoring systems (MHMSs) face more challenges in personalised health data processing. To achieve personalised and high-quality health monitoring by means of new technologies, such as mobile network and cloud computing, in this paper, a framework of an m-Health monitoring system based on a cloud computing platform (Cloud-MHMS) is designed to implement pervasive health monitoring. Furthermore, the modules of the framework, which are Cloud Storage and Multiple Tenants Access Control Layer, Healthcare Data Annotation Layer, and Healthcare Data Analysis Layer, are discussed. In the data storage layer, a multiple tenant access method is designed to protect patient privacy. In the data annotation layer, linked open data are adopted to augment health data interoperability semantically. In the data analysis layer, the process mining algorithm and similarity calculating method are implemented to support personalised treatment plan selection. These three modules cooperate to implement the core functions in the process of health monitoring, which are data storage, data processing, and data analysis. Finally, we study the application of our architecture in the monitoring of antimicrobial drug usage to demonstrate the usability of our method in personal healthcare analysis.

  5. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.

  6. Exploring the dynamics of collective cognition using a computational model of cognitive dissonance

    NASA Astrophysics Data System (ADS)

    Smart, Paul R.; Sycara, Katia; Richardson, Darren P.

    2013-05-01

    The socially-distributed nature of cognitive processing in a variety of organizational settings means that there is increasing scientific interest in the factors that affect collective cognition. In military coalitions, for example, there is a need to understand how factors such as communication network topology, trust, cultural differences and the potential for miscommunication affect the ability of distributed teams to generate high quality plans, to formulate effective decisions and to develop shared situation awareness. The current paper presents a computational model and associated simulation capability for performing in silico experimental analyses of collective sensemaking. This model can be used in combination with the results of human experimental studies in order to improve our understanding of the factors that influence collective sensemaking processes.

  7. An Engineering Solution for Solving Mesh Size Effects in the Simulation of Delamination with Cohesive Zone Models

    NASA Technical Reports Server (NTRS)

    Turon, A.; Davila, C. G.; Camanho, P. P.; Costa, J.

    2007-01-01

    This paper presents a methodology to determine the parameters to be used in the constitutive equations of Cohesive Zone Models employed in the simulation of delamination in composite materials by means of decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is also proposed. The procedure ensures that the energy dissipated by the fracture process is computed correctly. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used for the simulation of fracture processes.
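
    The relations sketched below follow the commonly cited cohesive-zone sizing estimates (penalty stiffness proportional to E3/t, cohesive zone length l_cz = M·E·Gc/(tau0)^2, and a reduced interface strength when too few elements span the zone); the constants and material values are illustrative assumptions, not a transcription of the paper's equations.

```python
# Hedged sketch of the kind of closed-form estimates used when sizing cohesive zone models.
# The relations (K ~ alpha*E3/t, l_cz = M*E*Gc/tau0**2, and reducing tau0 so that at least
# Nd elements span the cohesive zone) follow common practice in the delamination literature;
# constants and material values below are illustrative assumptions.
import math

E3    = 10.0e9      # transverse Young's modulus [Pa]
Gc    = 0.3e3       # fracture toughness [J/m^2]  (0.3 N/mm)
tau0  = 30.0e6      # nominal interface strength [Pa]
t     = 0.125e-3    # adjacent sub-laminate thickness [m]
alpha = 50.0        # stiffness parameter (commonly recommended >= 50)
M     = 0.88        # cohesive zone length parameter (model dependent)
l_e   = 1.5e-3      # decohesion element length [m] (deliberately coarse)
Nd    = 3           # desired number of elements in the cohesive zone

K = alpha * E3 / t                         # interface penalty stiffness [Pa/m]
l_cz = M * E3 * Gc / tau0**2               # estimated cohesive zone length [m]
Ne = l_cz / l_e                            # elements currently spanning the zone

# If the mesh is too coarse, lower the interface strength so Nd elements fit in the zone.
tau_adj = tau0 if Ne >= Nd else math.sqrt(M * E3 * Gc / (Nd * l_e))

print(f"K = {K:.3e} Pa/m, l_cz = {l_cz*1e3:.2f} mm, Ne = {Ne:.2f}, tau_adj = {tau_adj/1e6:.1f} MPa")
```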

  8. Infrared image construction with computer-generated reflection holograms. [using carbon dioxide laser

    NASA Technical Reports Server (NTRS)

    Angus, J. C.; Coffield, F. E.; Edwards, R. V.; Mann, J. A., Jr.; Rugh, R. W.; Gallagher, N. C.

    1977-01-01

    Computer-generated reflection holograms hold substantial promise as a means of carrying out complex machining, marking, scribing, welding, soldering, heat treating, and similar processing operations simultaneously and without moving the work piece or laser beam. In the study described, a photographically reduced transparency of a 64 x 64 element Lohmann hologram was used to make a mask which, in turn, was used (with conventional photoresist techniques) to produce a holographic reflector. Images from a commercial CO2 laser (150W TEM(00)) and the holographic reflector are illustrated and discussed.

  9. Integration of Openstack cloud resources in BES III computing cluster

    NASA Astrophysics Data System (ADS)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for data processing in high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. In order to make the system simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.
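
    The scheduling policy of vpmanager is not detailed in the abstract; the sketch below merely illustrates queue-driven scaling of a virtual machine pool, with the jobs-per-VM ratio and capacity cap as assumed parameters.

```python
# Hedged sketch of queue-driven virtual machine scheduling in the spirit of a virtual
# cluster manager: the real vpmanager policy is not described in detail here, so this
# simply scales the VM pool with the number of queued jobs, capped by cloud capacity.
def plan_vm_pool(queued_jobs, running_vms, jobs_per_vm=4, max_vms=100):
    """Return how many VMs to start (positive) or retire (negative)."""
    wanted = min(max_vms, -(-queued_jobs // jobs_per_vm))  # ceil division
    return wanted - running_vms

# Example: 37 queued jobs, 5 VMs currently running -> start 5 more (ceil(37/4) = 10).
print(plan_vm_pool(queued_jobs=37, running_vms=5))   # 5
print(plan_vm_pool(queued_jobs=0, running_vms=5))    # -5 (retire idle VMs)
```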

  10. Real-time computer-generated hologram by means of liquid-crystal television spatial light modulator

    NASA Technical Reports Server (NTRS)

    Mok, Fai; Psaltis, Demetri; Diep, Joseph; Liu, Hua-Kuang

    1986-01-01

    The usefulness of an inexpensive liquid-crystal television (LCTV) as a spatial light modulator for coherent-optical processing in the writing and reconstruction of a single computer-generated hologram has been demonstrated. The thickness nonuniformities of the LCTV screen were examined in a Mach-Zehnder interferometer, and the phase distortions were successfully removed using a technique in which the LCTV screen was submerged in a liquid gate filled with an index-matching nonconductive mineral oil with refractive index of about 1.45.

  11. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    Probabilistic Boolean logic assigns each gate a probability p of correctness, where 0.5 < p < 1, and a probability (1 − p) of error. Errors could be caused by noise, radio frequency (RF) interference, or crosstalk. The report also notes that the integrated circuit utilized in the Apollo Guidance Computer is the three-input NOR gate.
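
    A small illustration of the probabilistic-gate model described above, assuming each gate independently returns the correct output with probability p and the flipped output with probability 1 − p; the full-adder structure and the Monte Carlo error estimate are illustrative, not taken from the report.

```python
# Hedged sketch of probabilistic Boolean logic as described above: every gate yields the
# correct output with probability p (0.5 < p < 1) and the flipped output with probability
# 1 - p. A one-bit full adder is simulated to estimate how gate-level errors propagate.
import random

def pgate(value, p):
    """Return the correct Boolean value with probability p, its complement otherwise."""
    return value if random.random() < p else 1 - value

def pfull_adder(a, b, cin, p):
    s1   = pgate(a ^ b, p)          # probabilistic XOR
    sum_ = pgate(s1 ^ cin, p)
    c1   = pgate(a & b, p)          # probabilistic AND
    c2   = pgate(s1 & cin, p)
    cout = pgate(c1 | c2, p)        # probabilistic OR
    return sum_, cout

def error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        a, b, cin = (random.randint(0, 1) for _ in range(3))
        exact = ((a + b + cin) & 1, (a + b + cin) >> 1)   # exact sum and carry
        if pfull_adder(a, b, cin, p) != exact:
            errors += 1
    return errors / trials

for p in (1.0, 0.99, 0.95):
    print(f"p = {p}: adder error rate ~ {error_rate(p):.3f}")
```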

  12. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm Using Probabilistic Boolean Logic Applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    Probabilistic Boolean logic assigns each gate a probability p of correctness, where 0.5 < p < 1, and a probability (1 − p) of error. Errors could be caused by noise, radio frequency (RF) interference, or crosstalk. The report also notes that the integrated circuit utilized in the Apollo Guidance Computer is the three-input NOR gate.

  13. Computer-aided system for interactive psychomotor testing

    NASA Astrophysics Data System (ADS)

    Selivanova, Karina G.; Ignashchuk, Olena V.; Koval, Leonid G.; Kilivnik, Volodymyr S.; Zlepko, Alexandra S.; Sawicki, Daniel; Kalizhanova, Aliya; Zhanpeisova, Aizhan; Smailova, Saule

    2017-08-01

    Nowadays, research on psychomotor actions occupies a special place in education, sports, medicine, psychology, etc. The development of a computer system for psychomotor testing could help solve many operational problems in psychoneurology and psychophysiology and also determine individual characteristics of fine motor skills. This is a particularly relevant issue when it comes to children, students, and athletes, for the definition of personal and professional features. The article presents the dynamics of developing psychomotor skills and the application of computer-aided means in the training process. The results of testing indicated their significant impact on the development of psychomotor skills.

  14. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  15. Software for Acoustic Rendering

    NASA Technical Reports Server (NTRS)

    Miller, Joel D.

    2003-01-01

    SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without the need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.
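
    A minimal sketch (not SLAB itself) of one spatialization signal path of the kind described: a propagation delay followed by a finite-impulse-response filter whose tap count is one of the adjustable fidelity parameters. All coefficients and delays are made up for illustration.

```python
# Minimal sketch (not SLAB itself) of one spatialization signal path of the kind the text
# describes: a propagation delay followed by a finite-impulse-response (FIR) filter whose
# tap count is one of the adjustable fidelity parameters. Coefficients are illustrative.
import numpy as np

def render_path(source, delay_samples, fir_taps):
    """Delay a mono source and convolve it with an FIR filter (one source-to-ear path)."""
    delayed = np.concatenate([np.zeros(delay_samples), source])
    return np.convolve(delayed, fir_taps)

fs = 44_100
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440 * t)              # 1 s, 440 Hz test tone

direct     = render_path(source, delay_samples=44,  fir_taps=np.array([0.8, 0.1, 0.05]))
reflection = render_path(source, delay_samples=530, fir_taps=np.array([0.3, 0.15, 0.05]))

# Sum the direct path and one reflection into a single output channel.
n = max(len(direct), len(reflection))
mix = np.pad(direct, (0, n - len(direct))) + np.pad(reflection, (0, n - len(reflection)))
print(mix.shape)
```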

  16. A simulation of remote sensor systems and data processing algorithms for spectral feature classification

    NASA Technical Reports Server (NTRS)

    Arduini, R. F.; Aherron, R. M.; Samms, R. W.

    1984-01-01

    A computational model of the deterministic and stochastic processes involved in multispectral remote sensing was designed to evaluate the performance of sensor systems and data processing algorithms for spectral feature classification. Accuracy in distinguishing between categories of surfaces or between specific types is developed as a means to compare sensor systems and data processing algorithms. The model allows studies to be made of the effects of variability of the atmosphere and of surface reflectance, as well as the effects of channel selection and sensor noise. Examples of these effects are shown.

  17. Computational science: shifting the focus from tools to models

    PubMed Central

    Hinsen, Konrad

    2014-01-01

    Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728

  18. Analyzing the Effect of Consultation Training on the Development of Consultation Competence

    ERIC Educational Resources Information Center

    Newell, Markeda L.; Newell, Terrance

    2018-01-01

    The purpose of this study was to examine the effectiveness of one consultation course on the development of pre-service school psychologists' consultation knowledge, confidence, and skills. Computer-simulation was used as a means to replicate the school environment and capture consultants' engagement throughout the consultation process without…

  19. DNA as information.

    PubMed

    Wills, Peter R

    2016-03-13

    This article reviews contributions to this theme issue covering the topic 'DNA as information' in relation to the structure of DNA, the measure of its information content, the role and meaning of information in biology and the origin of genetic coding as a transition from uninformed to meaningful computational processes in physical systems. © 2016 The Author(s).

  20. Developing Argumentation Skills in Mathematics through Computer-Supported Collaborative Learning: The Role of Transactivity

    ERIC Educational Resources Information Center

    Vogel, Freydis; Kollar, Ingo; Ufer, Stefan; Reichersdorfer, Elisabeth; Reiss, Kristina; Fischer, Frank

    2016-01-01

    Collaboration scripts and heuristic worked examples are effective means to scaffold university freshmen's mathematical argumentation skills. Yet, which collaborative learning processes are responsible for these effects has remained unclear. Learners presumably will gain the most out of collaboration if the collaborators refer to each other's…

  1. Students as Simulation Designers and Developers--Using Computer Simulations for Teaching Boundary Layer Processes.

    ERIC Educational Resources Information Center

    Johnson, Tristan E.; Clayson, Carol Anne

    As technology developments seek to improve learning, researchers, developers, and educators seek to understand how technological properties impact performance. This paper delineates how a traditional science course is enhanced through the use of simulation projects directed by the students themselves as a means to increase their level of knowledge…

  2. Human-Machine Cooperation in Large-Scale Multimedia Retrieval: A Survey

    ERIC Educational Resources Information Center

    Shirahama, Kimiaki; Grzegorzek, Marcin; Indurkhya, Bipin

    2015-01-01

    "Large-Scale Multimedia Retrieval" (LSMR) is the task to fast analyze a large amount of multimedia data like images or videos and accurately find the ones relevant to a certain semantic meaning. Although LSMR has been investigated for more than two decades in the fields of multimedia processing and computer vision, a more…

  3. Radiation Transport in Random Media With Large Fluctuations

    NASA Astrophysics Data System (ADS)

    Olson, Aaron; Prinja, Anil; Franke, Brian

    2017-09-01

    Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space and generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
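
    A sketch of the cross-section generation step, assuming a one-dimensional grid, an exponential covariance, and a numerical Karhunen-Loève decomposition via eigendecomposition of the covariance matrix; matching the target lognormal covariance to the underlying Gaussian covariance, and the transport step itself, are omitted.

```python
# Hedged sketch of the cross-section generation step described above: build a Gaussian
# process with exponential covariance on a 1-D grid, truncate its Karhunen-Loeve (KL)
# expansion, and exponentiate to obtain lognormal realizations. Mapping the lognormal
# covariance back to the underlying Gaussian covariance is omitted for brevity.
import numpy as np

n, L, corr_len, sigma = 200, 10.0, 1.0, 0.5
x = np.linspace(0.0, L, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # exponential covariance

vals, vecs = np.linalg.eigh(C)                 # KL modes = eigenpairs of the covariance
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
k = np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95) + 1   # keep ~95% of the variance

rng = np.random.default_rng(0)
xi = rng.standard_normal(k)                    # independent standard normal coefficients
g = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)     # one Gaussian realization on the grid
mu_log = 0.0
sigma_t = np.exp(mu_log + g)                   # lognormal total cross section realization

print(f"{k} KL modes retained; cross section range: {sigma_t.min():.2f} - {sigma_t.max():.2f}")
```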

  4. Tracking radar advanced signal processing and computing for Kwajalein Atoll (KA) application

    NASA Astrophysics Data System (ADS)

    Cottrill, Stanley D.

    1992-11-01

    Two means are examined whereby the operations of KMR during mission execution may be improved through the introduction of advanced signal processing techniques. In the first approach, the addition of real time coherent signal processing technology to the FPQ-19 radar is considered. In the second approach, the incorporation of the MMW radar, with its very fine range precision, to the MMS system is considered. The former appears very attractive and a Phase 2 SBIR has been proposed. The latter does not appear promising enough to warrant further development.

  5. Design and implementation of a UNIX based distributed computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler, including shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.

  6. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
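
    A hedged illustration of the stochastic-computing primitive underlying such pulse-mode networks: values in [0, 1] are encoded as pseudorandom pulse streams, multiplication reduces to a bitwise AND, and the result is recovered as the average pulse occurrence rate. Stream lengths and values are illustrative.

```python
# Hedged illustration of the stochastic-computing primitive underlying pulse-mode networks:
# values in [0, 1] are encoded as pseudorandom pulse (bit) streams, multiplication reduces
# to a bitwise AND, and the result is recovered as the average pulse occurrence rate.
import random

def encode(p, n_bits, rng):
    """Encode a probability p as a Bernoulli pulse stream of n_bits bits."""
    return [1 if rng.random() < p else 0 for _ in range(n_bits)]

def stochastic_multiply(p, q, n_bits=4096, seed=1):
    rng = random.Random(seed)
    a, b = encode(p, n_bits, rng), encode(q, n_bits, rng)
    return sum(x & y for x, y in zip(a, b)) / n_bits   # pulse rate estimates p*q

weight, activation = 0.7, 0.4
est = stochastic_multiply(weight, activation)
print(f"exact = {weight*activation:.3f}, stochastic estimate = {est:.3f}")
# Longer pulse streams reduce the variance of the estimate (roughly as 1/n_bits).
```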

  7. Theory, Modeling, Software and Hardware Development for Analytical and Computational Materials Science

    NASA Technical Reports Server (NTRS)

    Young, Gerald W.; Clemons, Curtis B.

    2004-01-01

    The focus of this Cooperative Agreement between the Computational Materials Laboratory (CML) of the Processing Science and Technology Branch of the NASA Glenn Research Center (GRC) and the Department of Theoretical and Applied Mathematics at The University of Akron was in the areas of system development of the CML workstation environment, modeling of microgravity and earth-based material processing systems, and joint activities in laboratory projects. These efforts complement each other as the majority of the modeling work involves numerical computations to support laboratory investigations. Coordination and interaction between the modelers, system analysts, and laboratory personnel are essential toward providing the most effective simulations and communication of the simulation results. To these ends, The University of Akron personnel involved in the agreement worked at the Applied Mathematics Research Laboratory (AMRL) in the Department of Theoretical and Applied Mathematics while maintaining a close relationship with the personnel of the Computational Materials Laboratory at GRC. Network communication between both sites has been established. A summary of the projects we undertook during the time period 9/1/03 - 6/30/04 is included.

  8. An algorithm of discovering signatures from DNA databases on a computer cluster.

    PubMed

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique and not similar to any other sequence in a database; they can be used as the basis for identifying different species. Although several signature discovery algorithms have been proposed in the past, they require the entire database to be loaded into memory, which restricts the amount of data they can handle and makes them unable to process very large databases. They also use sequential models and have slow discovery speeds, so their efficiency can be improved. In this research, we introduce a divide-and-conquer strategy into signature discovery and propose a parallel signature discovery algorithm that runs on a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the existing algorithms' inability to process large databases and uses a parallel computing mechanism to improve the efficiency of signature discovery. Even when run with only the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, that the existing algorithms were previously unable to handle. The proposed algorithm is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large-scale database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
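
    A sketch of the divide-and-conquer idea under simplifying assumptions: the database is processed chunk by chunk so it never has to sit in memory at once, and k-mers occurring in exactly one sequence are kept as signature candidates. The published DDCSD algorithm additionally handles mismatch tolerance and cluster-level parallelism, which are not shown.

```python
# Hedged sketch of the divide-and-conquer idea: count candidate k-mers chunk by chunk so
# the whole database never has to sit in memory, then keep k-mers seen in exactly one
# sequence. The published DDCSD algorithm adds mismatch-tolerant checks and distributes
# chunks over a computer cluster, which this illustration omits.
from collections import Counter

def chunked(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def discover_signatures(sequences, k=8, chunk_size=1000):
    counts = Counter()
    for chunk in chunked(sequences, chunk_size):        # process the database piecewise
        for seq in chunk:
            kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
            counts.update(kmers)                         # count per-sequence occurrences
    return [kmer for kmer, c in counts.items() if c == 1]

db = ["ACGTACGTGGCTAGCTA", "TTGACCGTAAGGCTAGC", "ACGTACGTGGCTAGCTA"[::-1]]
print(discover_signatures(db, k=6)[:5])
```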

  9. Laser Doppler velocimeter system for turbine stator cascade studies and analysis of statistical biasing errors

    NASA Technical Reports Server (NTRS)

    Seasholtz, R. G.

    1977-01-01

    A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self-contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.

  10. Telescience - Optimizing aerospace science return through geographically distributed operations

    NASA Technical Reports Server (NTRS)

    Rasmussen, Daryl N.; Mian, Arshad M.

    1990-01-01

    The paper examines the objectives and requirements of teleoperations, defined as the means and process for scientists, NASA operations personnel, and astronauts to conduct payload operations as if these were colocated. This process is described in terms of Space Station era platforms. Some of the enabling technologies are discussed, including open architecture workstations, distributed computing, transaction management, expert systems, and high-speed networks. Recent testbedding experiments are surveyed to highlight some of the human factors requirements.

  11. Computer program documentation: Raw-to-processed SINDA program (RTOPHS) user's guide

    NASA Technical Reports Server (NTRS)

    Damico, S. J.

    1980-01-01

    Use of the Raw-to-Processed SINDA (System Improved Numerical Differencing Analyzer) Program, RTOPHS, which provides a means of making the temperature prediction data on the binary HSTFLO and HISTRY units generated by SINDA available to engineers in an easy-to-use format, is discussed. The program accomplishes this by reading the HISTRY unit and, according to user input instructions, extracting the desired times and temperature prediction data and writing them to a word-addressable drum file.

  12. Maximally Permissive Composition of Actors in Ptolemy II

    DTIC Science & Technology

    2013-03-20

    into our physical world by means of sensors and actuators. This global network of Cyber-Physical Systems (i.e., integrations of computation with physical processes [Lee, 2008]) is often referred to as the "Internet of Things" (IoT). This term was coined by Kevin Ashton [Ashton, 2009] in 1999 to ... processing capabilities. A newly emerging outermost peripheral layer of the Cloud that is key to the full realization of the IoT is identified as "The

  13. Study on the Application of the Combination of TMD Simulation and Umbrella Sampling in PMF Calculation for Molecular Conformational Transitions

    PubMed Central

    Wang, Qing; Xue, Tuo; Song, Chunnian; Wang, Yan; Chen, Guangju

    2016-01-01

    Free energy calculations of the potential of mean force (PMF) based on the combination of targeted molecular dynamics (TMD) simulations and umbrella samplings as a function of physical coordinates have been applied to explore the detailed pathways and the corresponding free energy profiles for the conformational transition processes of the butane molecule and the 35-residue villin headpiece subdomain (HP35). The accurate PMF profiles for describing the dihedral rotation of butane under both coordinates of dihedral rotation and root mean square deviation (RMSD) variation were obtained based on the different umbrella samplings from the same TMD simulations. The initial structures for the umbrella samplings can be conveniently selected from the TMD trajectories. For the application of this computational method in the unfolding process of the HP35 protein, the PMF calculation along with the coordinate of the radius of gyration (Rg) presents the gradual increase of free energies by about 1 kcal/mol with the energy fluctuations. The feature of conformational transition for the unfolding process of the HP35 protein shows that the spherical structure extends and the middle α-helix unfolds firstly, followed by the unfolding of other α-helices. The computational method for the PMF calculations based on the combination of TMD simulations and umbrella samplings provided a valuable strategy in investigating detailed conformational transition pathways for other allosteric processes. PMID:27171075
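
    A sketch of the unbiasing step behind a single umbrella-sampling window, assuming a harmonic restraint U(x) = ½k(x − x0)² and synthetic "biased" samples in place of real MD output; combining windows (e.g., by WHAM) and the TMD-generated starting structures are not shown, and all numerical values are illustrative.

```python
# Hedged sketch of the unbiasing step behind umbrella sampling: for a single window with a
# harmonic restraint U(x) = 0.5*k*(x - x0)^2, the unbiased PMF along the coordinate is
# F(x) = -kT*ln p_biased(x) - U(x) + const. Stitching many windows together (e.g. WHAM)
# and the TMD-generated starting structures are not shown.
import numpy as np

kT = 0.593                      # kcal/mol at ~298 K
k_spring, x0 = 0.02, 60.0       # restraint constant [kcal/mol/deg^2] and centre [deg]

rng = np.random.default_rng(0)
samples = rng.normal(loc=62.0, scale=4.0, size=50_000)   # stand-in for biased MD samples

hist, edges = np.histogram(samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0

bias = 0.5 * k_spring * (centers - x0) ** 2
pmf = -kT * np.log(hist[mask]) - bias[mask]
pmf -= pmf.min()                                          # set the minimum to zero

for x, f in list(zip(centers[mask], pmf))[::15]:
    print(f"x = {x:6.2f} deg, F(x) = {f:7.2f} kcal/mol")
```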

  14. Interval versions of statistical techniques with applications to environmental analysis, bioinformatics, and privacy in statistical databases

    NASA Astrophysics Data System (ADS)

    Kreinovich, Vladik; Longpre, Luc; Starks, Scott A.; Xiang, Gang; Beck, Jan; Kandathi, Raj; Nayak, Asis; Ferson, Scott; Hajagos, Janos

    2007-02-01

    In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t)--e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit (DL). We must, therefore, modify the existing statistical algorithms to process such interval data. Such a modification is also necessary to process data from statistical databases, where, in order to maintain privacy, we only keep interval ranges instead of the actual numeric data (e.g., a salary range instead of the actual salary). Most resulting computational problems are NP-hard--which means, crudely speaking, that in general, no computationally efficient algorithm can solve all particular cases of the corresponding problem. In this paper, we overview practical situations in which computationally efficient algorithms exist: e.g., situations when measurements are very accurate, or when all the measurements are done with one (or few) instruments. As a case study, we consider a practical problem from bioinformatics: to discover the genetic difference between the cancer cells and the healthy cells, we must process the measurement results and find the concentrations c and h of a given gene in cancer and in healthy cells. This is a particular case of a general situation in which, to estimate states or parameters which are not directly accessible by measurements, we must solve a system of equations in which coefficients are only known with interval uncertainty. We show that in general, this problem is NP-hard, and we describe new efficient algorithms for solving this problem in practically important situations.
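
    The easy special case is the sample mean: under interval data its exact range runs from the mean of the lower endpoints to the mean of the upper endpoints, as sketched below with nondetects replaced by [0, DL]. The analogous problem for the variance is NP-hard in general, which is what motivates the special-case algorithms discussed above. The data values are illustrative.

```python
# Hedged illustration of processing interval data: each nondetect is replaced by the
# interval [0, DL], and the sample mean is bounded by the means of the endpoints. (The
# analogous bounds for the variance are NP-hard in general, which is the point of the
# efficient special-case algorithms discussed in the abstract.)
DL = 0.5                                   # detection limit
readings = [1.2, 0.0, 2.3, 0.0, 0.8]       # 0.0 marks "below detection limit"

intervals = [(x, x) if x > 0 else (0.0, DL) for x in readings]
lo = sum(a for a, _ in intervals) / len(intervals)
hi = sum(b for _, b in intervals) / len(intervals)

print(f"mean pollution level is in [{lo:.3f}, {hi:.3f}]")   # [0.860, 1.060]
```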

  15. LES FOR SIMULATING THE GAS EXCHANGE PROCESS IN A SPARK IGNITION ENGINE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M; yang, xiaofeng; kuo, tang-wei

    2015-01-01

    The gas exchange process is known to be a significant source of cyclic variability in Internal Combustion Engines (ICE). Traditionally, Large Eddy Simulations (LES) are expected to capture these cycle-to-cycle variations. This paper reports a numerical effort to establish best practices for capturing cyclic variability with LES tools in a Transparent Combustion Chamber (TCC) spark ignition engine. The main intention is to examine the sensitivity of cycle-averaged mean and Root Mean Square (RMS) flow fields and Proper Orthogonal Decomposition (POD) modes to different computational hardware, adaptive mesh refinement (AMR), and LES sub-grid scale (SGS) models, since these aspects have received little attention in the past couple of decades. This study also examines the effect of near-wall resolution on the predicted wall shear stresses. LES is pursued with the commercially available CONVERGE code. Two different SGS models are tested: a one-equation eddy viscosity model and a dynamic structure model. The results seem to indicate that both mean and RMS fields without any SGS model are not much different from those with LES models, either the one-equation eddy viscosity or the dynamic structure model. Computational hardware results in subtle quantitative differences, especially in RMS distributions. The influence of AMR on both mean and RMS fields is negligible. The predicted shear stresses near the liner walls are also found to be relatively insensitive to near-wall resolution except in the valve curtain region.

  16. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.

  17. Spin wave Feynman diagram vertex computation package

    NASA Astrophysics Data System (ADS)

    Price, Alexander; Javernick, Philip; Datta, Trinanjan

    Spin wave theory is a well-established theoretical technique that can correctly predict the physical behavior of ordered magnetic states. However, computing the effects of an interacting spin wave theory incorporating magnons involves a laborious by-hand derivation of Feynman diagram vertices. The process is tedious and time-consuming. Hence, to improve productivity and have another means to check the analytical calculations, we have devised a Feynman Diagram Vertex Computation package. In this talk, we will describe our research group's effort to implement a Mathematica-based symbolic Feynman diagram vertex computation package that computes spin wave vertices. Utilizing the non-commutative algebra package NCAlgebra as an add-on to Mathematica, symbolic expressions for the Feynman diagram vertices of a Heisenberg quantum antiferromagnet are obtained. Our existing code reproduces the well-known expressions of a nearest-neighbor square lattice Heisenberg model. We also discuss the case of a triangular lattice Heisenberg model where non-collinear terms contribute to the vertex interactions.

  18. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  19. A review of combined experimental and computational procedures for assessing biopolymer structure–process–property relationships

    PubMed Central

    Gronau, Greta; Krishnaji, Sreevidhya T.; Kinahan, Michelle E.; Giesa, Tristan; Wong, Joyce Y.; Kaplan, David L.; Buehler, Markus J.

    2013-01-01

    Tailored biomaterials with tunable functional properties are desirable for many applications ranging from drug delivery to regenerative medicine. To improve the predictability of biopolymer materials functionality, multiple design parameters need to be considered, along with appropriate models. In this article we review the state of the art of synthesis and processing related to the design of biopolymers, with an emphasis on the integration of bottom-up computational modeling in the design process. We consider three prominent examples of well-studied biopolymer materials – elastin, silk, and collagen – and assess their hierarchical structure, intriguing functional properties and categorize existing approaches to study these materials. We find that an integrated design approach in which both experiments and computational modeling are used has rarely been applied for these materials due to difficulties in relating insights gained on different length- and time-scales. In this context, multiscale engineering offers a powerful means to accelerate the biomaterials design process for the development of tailored materials that suit the needs posed by the various applications. The combined use of experimental and computational tools has a very broad applicability not only in the field of biopolymers, but can be exploited to tailor the properties of other polymers and composite materials in general. PMID:22938765

  20. Evaluation of three electronic report processing systems for preparing hydrologic reports of the U.S. Geological Survey, Water Resources Division

    USGS Publications Warehouse

    Stiltner, G.J.

    1990-01-01

    In 1987, the Water Resources Division of the U.S. Geological Survey undertook three pilot projects to evaluate electronic report processing systems as a means to improve the quality and timeliness of reports pertaining to water resources investigations. The three projects selected for study included the use of the following configuration of software and hardware: Ventura Publisher software on an IBM model AT personal computer, PageMaker software on a Macintosh computer, and FrameMaker software on a Sun Microsystems workstation. The following assessment criteria were to be addressed in the pilot studies: The combined use of text, tables, and graphics; analysis of time; ease of learning; compatibility with the existing minicomputer system; and technical limitations. It was considered essential that the camera-ready copy produced be in a format suitable for publication. Visual improvement alone was not a consideration. This report consolidates and summarizes the findings of the electronic report processing pilot projects. Text and table files originating on the existing minicomputer system were successfully transformed to the electronic report processing systems in American Standard Code for Information Interchange (ASCII) format. Graphics prepared using a proprietary graphics software package were transferred to all the electronic report processing software through the use of Computer Graphic Metafiles. Graphics from other sources were entered into the systems by scanning paper images. Comparative analysis of time needed to process text and tables by the electronic report processing systems and by conventional methods indicated that, although more time is invested in creating the original page composition for an electronically processed report , substantial time is saved in producing subsequent reports because the format can be stored and re-used by electronic means as a template. Because of the more compact page layouts, costs of printing the reports were 15% to 25% less than costs of printing the reports prepared by conventional methods. Because the largest report workload in the offices conducting water resources investigations is preparation of Water-Resources Investigations Reports, Open-File Reports, and annual State Data Reports, the pilot studies only involved these projects. (USGS)

  1. Scarce means with alternative uses: robbins' definition of economics and its extension to the behavioral and neurobiological study of animal decision making.

    PubMed

    Shizgal, Peter

    2012-01-01

    Almost 80 years ago, Lionel Robbins proposed a highly influential definition of the subject matter of economics: the allocation of scarce means that have alternative ends. Robbins confined his definition to human behavior, and he strove to separate economics from the natural sciences in general and from psychology in particular. Nonetheless, I extend his definition to the behavior of non-human animals, rooting my account in psychological processes and their neural underpinnings. Some historical developments are reviewed that render such a view more plausible today than would have been the case in Robbins' time. To illustrate a neuroeconomic perspective on decision making in non-human animals, I discuss research on the rewarding effect of electrical brain stimulation. Central to this discussion is an empirically based, functional/computational model of how the subjective intensity of the electrical reward is computed and combined with subjective costs so as to determine the allocation of time to the pursuit of reward. Some successes achieved by applying the model are discussed, along with limitations, and evidence is presented regarding the roles played by several different neural populations in processes posited by the model. I present a rationale for marshaling convergent experimental methods to ground psychological and computational processes in the activity of identified neural populations, and I discuss the strengths, weaknesses, and complementarity of the individual approaches. I then sketch some recent developments that hold great promise for advancing our understanding of structure-function relationships in neuroscience in general and in the neuroeconomic study of decision making in particular.

  2. Scarce Means with Alternative Uses: Robbins’ Definition of Economics and Its Extension to the Behavioral and Neurobiological Study of Animal Decision Making

    PubMed Central

    Shizgal, Peter

    2011-01-01

    Almost 80 years ago, Lionel Robbins proposed a highly influential definition of the subject matter of economics: the allocation of scarce means that have alternative ends. Robbins confined his definition to human behavior, and he strove to separate economics from the natural sciences in general and from psychology in particular. Nonetheless, I extend his definition to the behavior of non-human animals, rooting my account in psychological processes and their neural underpinnings. Some historical developments are reviewed that render such a view more plausible today than would have been the case in Robbins’ time. To illustrate a neuroeconomic perspective on decision making in non-human animals, I discuss research on the rewarding effect of electrical brain stimulation. Central to this discussion is an empirically based, functional/computational model of how the subjective intensity of the electrical reward is computed and combined with subjective costs so as to determine the allocation of time to the pursuit of reward. Some successes achieved by applying the model are discussed, along with limitations, and evidence is presented regarding the roles played by several different neural populations in processes posited by the model. I present a rationale for marshaling convergent experimental methods to ground psychological and computational processes in the activity of identified neural populations, and I discuss the strengths, weaknesses, and complementarity of the individual approaches. I then sketch some recent developments that hold great promise for advancing our understanding of structure–function relationships in neuroscience in general and in the neuroeconomic study of decision making in particular. PMID:22363253

  3. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    PubMed

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence to support that some human skeletal remains belong or not to a missing person. It involves the process of overlaying a skull with a number of ante mortem images of an individual and the analysis of their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage just focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error prone, and time consuming part of the whole process. Though the numerical assessment of the method quality has not been achieved yet, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can be thus considered as a tool to aid forensic anthropologists to develop the skull-face overlay, automating and avoiding subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach

    NASA Technical Reports Server (NTRS)

    Smith, James A.

    2010-01-01

    The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies offer new opportunities for the application of mechanistic models to predict how continental scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stop-over durations and stop-over frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU computing paradigms and commodity hardware (general-purpose computing on graphics processing units). Several aspects of our individual based (agent modeling) approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.

  5. The influence of glass fibers on elongational viscosity studied by means of optical coherence tomography and X-ray computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aigner, M., E-mail: michael.aigner@jku.at; Köpplmayr, T., E-mail: thomas.koepplmayr@jku.at; Lang, C., E-mail: Christian.lang@jku.at

    2014-05-15

    We report on the flow characteristics of glass-fiber-reinforced polymers in elongational rheometry. Unlike polymers with geometrically isotropic fillers, glass-fiber-reinforced polymers exhibit flow behavior and rheology that depend heavily on the orientation, the length distribution and the content of the fibers. One of the primary objectives of this study was to determine the effect of fiber orientation, concentration and distribution on the entrance pressure drop by means of optical coherence tomography (OCT), full-field optical coherence microscopy (FF-OCM), and X-ray computed tomography (X-CT). Both pressure drop and melt flow were analyzed using a special elongation die (Thermo Scientific X-Die [3]) for inline measurements. Samples with a variety of fiber volume fractions, fiber lengths and processing temperatures were measured.

  6. Real-time processing of radar return on a parallel computer

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1992-01-01

    NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of a radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC based parallel computer, called the transputer, is used to investigate issues in real time concurrent processing of radar signals. A transputer network is made up of an array of single instruction stream processors that can be networked in a variety of ways. They are easily reconfigured and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the Fast Fourier Transform (FFT), pulse-pair, and autoregressive modelling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken on this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis function and other optimizing techniques, an algorithm has been developed that is sufficient for real time. The other approach is to model the signal as an autoregressive process and estimate the spectrum based on the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
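
    A sketch of the pulse-pair estimator mentioned above, assuming complex (I/Q) samples: the mean Doppler frequency is taken from the phase of the lag-one autocorrelation and converted to radial velocity via v = λ·f_d/2. The signal, noise level, and sign convention are illustrative.

```python
# Hedged sketch of the pulse-pair estimator mentioned above: the mean Doppler frequency of
# the return is taken from the phase of the lag-one autocorrelation of the complex (I/Q)
# samples, and converted to radial velocity with v = lambda * f_d / 2. Signal values and
# the sign convention are illustrative.
import numpy as np

wavelength = 0.1          # radar wavelength [m] (illustrative)
prt = 1e-3                # pulse repetition time [s]
f_doppler = 300.0         # true Doppler shift [Hz]
n_pulses = 64

rng = np.random.default_rng(0)
t = np.arange(n_pulses) * prt
iq = np.exp(2j * np.pi * f_doppler * t) + 0.2 * (rng.standard_normal(n_pulses)
                                                 + 1j * rng.standard_normal(n_pulses))

r1 = np.mean(iq[1:] * np.conj(iq[:-1]))        # lag-one autocorrelation
f_est = np.angle(r1) / (2 * np.pi * prt)       # estimated mean Doppler frequency
v_est = wavelength * f_est / 2.0               # radial velocity estimate

print(f"estimated Doppler = {f_est:.1f} Hz, radial velocity = {v_est:.2f} m/s")
```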

  7. Recreation of three-dimensional objects in a real-time simulated environment by means of a panoramic single lens stereoscopic image-capturing device

    NASA Astrophysics Data System (ADS)

    Wong, Erwin

    2000-03-01

    Traditional methods of linear-based imaging limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of data and virtual interpolated and constructed data is also mentioned.

  8. Passing in Command Line Arguments and Parallel Cluster/Multicore Batching in R with batch.

    PubMed

    Hoffmann, Thomas J

    2011-03-01

    It is often useful to rerun a command line R script with some slight change in the parameters used to run it - a new set of parameters for a simulation, a different dataset to process, etc. The R package batch provides a means to pass in multiple command line options, including vectors of values in the usual R format, easily into R. The same script can be setup to run things in parallel via different command line arguments. The R package batch also provides a means to simplify this parallel batching by allowing one to use R and an R-like syntax for arguments to spread a script across a cluster or local multicore/multiprocessor computer, with automated syntax for several popular cluster types. Finally it provides a means to aggregate the results together of multiple processes run on a cluster.

  9. Craniofacial reconstruction using patient-specific implants polyether ether ketone with computer-assisted planning.

    PubMed

    Manrique, Oscar J; Lalezarzadeh, Frank; Dayan, Erez; Shin, Joseph; Buchbinder, Daniel; Smith, Mark

    2015-05-01

    Reconstruction of bony craniofacial defects requires precise understanding of the anatomic relationships. The ideal reconstructive technique should be fast as well as economical, with minimal donor-site morbidity, and provide a lasting and aesthetically pleasing result. There are some circumstances in which a patient's own tissue is not sufficient to reconstruct defects. The development of sophisticated software has facilitated the manufacturing of patient-specific implants (PSIs). The aim of this study was to analyze the utility of polyether ether ketone (PEEK) PSIs for craniofacial reconstruction. We performed a retrospective chart review from July 2009 to July 2013 in patients who underwent craniofacial reconstruction using PEEK-PSIs using a virtual process based on computer-aided design and computer-aided manufacturing. A total of 6 patients were identified. The mean age was 46 years (16-68 y). Operative indications included cancer (n = 4), congenital deformities (n = 1), and infection (n = 1). The mean surgical time was 3.7 hours and the mean hospital stay was 1.5 days. The mean surface area of the defect was 93.4 ± 43.26 cm(2), the mean implant cost was $8493 ± $837.95, and the mean time required to manufacture the implants was 2 weeks. No major or minor complications were seen during the 4-year follow-up. We found PEEK implants to be useful in the reconstruction of complex calvarial defects, demonstrating a low complication rate, good outcomes, and high patient satisfaction in this small series of patients. Polyether ether ketone implants show promising potential and warrant further study to better establish the role of this technology in cranial reconstruction.

  10. The mathematical and computer modeling of the worm tool shaping

    NASA Astrophysics Data System (ADS)

    Panchuk, K. L.; Lyashkov, A. A.; Ayusheev, T. V.

    2017-06-01

    Traditionally, mathematical profiling of the worm tool is carried out by the first T. Olivier method, known in the theory of gearing, which involves obtaining an intermediate surface of the making lath. This complicates the profiling process and its realization by means of computer 3D modeling. The purpose of this work is to improve the mathematical model of profiling and to realize it using 3D modeling methods. The research problems are: obtaining a mathematical model of profiling that excludes the making lath; realizing the obtained model by means of wireframe and surface modeling; and developing and testing a solid modeling technology for solving the profiling problem. The kinematic method for studying mutually enveloping surfaces is adopted as the basis. Computer research is carried out by means of CAD systems based on 3D modeling methods. We have developed a mathematical model of profiling of the worm tool; wireframe, surface, and solid models of the shaping of the mutually enveloping surfaces of the part and the tool have been obtained. The proposed mathematical models and 3D modeling technologies of shaping serve as tools for theoretical and experimental profiling of the worm tool. The results can be used in the design of metal-cutting tools.

  11. Parameterization of cloud lidar backscattering profiles by means of asymmetrical Gaussians

    NASA Astrophysics Data System (ADS)

    del Guasta, Massimo; Morandi, Marco; Stefanutti, Leopoldo

    1995-06-01

    A fitting procedure for cloud lidar data processing is shown that is based on the computation of the first three moments of the vertical-backscattering (or -extinction) profile. Single-peak clouds or single cloud layers are approximated to asymmetrical Gaussians. The algorithm is particularly stable with respect to noise and processing errors, and it is much faster than the equivalent least-squares approach. Multilayer clouds can easily be treated as a sum of single asymmetrical Gaussian peaks. The method is suitable for cloud-shape parametrization in noisy lidar signatures (like those expected from satellite lidars). It also permits an improvement of cloud radiative-property computations that are based on huge lidar data sets for which storage and careful examination of single lidar profiles can't be carried out.
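
    A sketch of the moment computation on which the fit is based: the first three moments of a single-peak backscatter profile give a centre, a width, and an asymmetry indicator. The exact mapping to the asymmetrical-Gaussian parameters of the paper is not reproduced, and the profile below is synthetic.

```python
# Hedged sketch of the moment step behind the fitting procedure: compute the first three
# moments of a (single-peak) backscatter profile and use them as centre, width, and an
# asymmetry indicator. The exact mapping to the asymmetrical-Gaussian parameters of the
# paper is not reproduced here.
import numpy as np

z = np.linspace(0.0, 12.0, 600)                      # altitude [km]
beta = np.exp(-0.5 * ((z - 8.0) / 0.6) ** 2)         # synthetic cloud backscatter peak
beta[z > 8.0] = np.exp(-0.5 * ((z[z > 8.0] - 8.0) / 1.2) ** 2)   # make it asymmetric

w = beta / beta.sum()                                # normalize to a weight function
m1 = np.sum(w * z)                                   # centroid (cloud centre)
m2 = np.sum(w * (z - m1) ** 2)                       # variance (squared width)
m3 = np.sum(w * (z - m1) ** 3)                       # third central moment
skew = m3 / m2 ** 1.5                                # asymmetry of the layer

print(f"centre = {m1:.2f} km, width = {np.sqrt(m2):.2f} km, skewness = {skew:.2f}")
```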

  12. Using Problem-Based Learning to Increase Computer Self-Efficacy in Taiwanese Students

    ERIC Educational Resources Information Center

    Smith, Cary Stacy; Hung, Li-Ching

    2017-01-01

    In Taiwan, teaching focuses around lecturing, with students having little opportunity to interact with each other. Problem-based learning (PBL) is a means of instruction where students learn the subject by being active participants in the pedagogical process, with the emphasis on problem-solving. In this study, the authors investigated whether PBL…

  13. Radio Frequency Propagation and Performance Assessment Suite (RFPPAS)

    DTIC Science & Technology

    2016-11-15

    [Abstract not available: the indexed excerpt contains only acronym-list and figure-caption fragments. Recoverable content indicates the report covers AREPS surface-layer (evaporation duct) climatology regions and evaporation duct profiles computed from surface-layer climatological statistics, a database whose stated impetus is to provide a means for instant (text truncated).]

  14. Modeling the fundamental characteristics and processes of the spacecraft functioning

    NASA Technical Reports Server (NTRS)

    Bazhenov, V. I.; Osin, M. I.; Zakharov, Y. V.

    1986-01-01

    The fundamental aspects of modeling spacecraft characteristics by computational means are considered. Particular attention is devoted to design studies, the description of the physical appearance of the spacecraft, and simulation modeling of spacecraft systems. The fundamental questions of organizing on-the-ground spacecraft testing and the methods of mathematical modeling are also presented.

  15. Predicting Aircraft Spray Patterns on Crops

    NASA Technical Reports Server (NTRS)

    Teske, M. E.; Bilanin, A. J.

    1986-01-01

    Agricultural Dispersion Prediction (AGDISP) system developed to predict deposition of agricultural material released from rotary- and fixed-wing aircraft. AGDISP computes ensemble average mean motion resulting from turbulent fluid fluctuations. Used to examine ways of making dispersal process more efficient by ensuring uniformity, reducing waste, and saving money. Programs in AGDISP system written in FORTRAN IV for interactive execution.

  16. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    ERIC Educational Resources Information Center

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  17. Screen Capture Technology: A Digital Window into Students' Writing Processes

    ERIC Educational Resources Information Center

    Seror, Jeremie

    2013-01-01

    Technological innovations and the prevalence of the computer as a means of producing and engaging with texts have dramatically transformed how literacy is defined and developed in modern society. This rise in digital writing practices has led to a growing number of tools and methods that can be used to explore second language (L2) writing…

  18. Boundary and object detection in real world images. [by means of algorithms

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.

    1974-01-01

    A solution to the problem of automatic location of objects in digital pictures by computer is presented. A self-scaling local edge detector which can be applied in parallel on a picture is described. Clustering algorithms and boundary following algorithms which are sequential in nature process the edge data to locate images of objects.
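    Yakimovsky's actual detector is not reproduced in the abstract; as a loose illustration of a "self-scaling" local edge test, the hedged Python sketch below keeps a gradient pixel only if it exceeds a threshold built from the gradient statistics in a sliding window, so the sensitivity adapts to local contrast. The window size, the factor k, and the toy scene are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def self_scaling_edges(image, window=15, k=2.0):
    """Gradient edges kept only where they exceed a locally scaled threshold.

    'Self-scaling' here means the threshold adapts to local gradient statistics
    (mean + k * std inside a sliding window), so contrast differences across the
    scene do not require retuning a single global threshold.
    """
    img = image.astype(float)
    gmag = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    local_mean = uniform_filter(gmag, size=window)
    local_sq = uniform_filter(gmag ** 2, size=window)
    local_std = np.sqrt(np.clip(local_sq - local_mean ** 2, 0.0, None))
    return gmag > local_mean + k * local_std

# toy scene: two rectangles of very different contrast on a noisy background
rng = np.random.default_rng(0)
scene = 0.05 * rng.standard_normal((200, 200))
scene[40:90, 40:90] += 1.0          # high-contrast object
scene[120:170, 120:170] += 0.3      # low-contrast object
print(self_scaling_edges(scene).sum(), "edge pixels flagged")
```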

  19. Experience in Education Environment Virtualization within the Automated Information System "Platonus" (Kazakhstan)

    ERIC Educational Resources Information Center

    Abeldina, Zhaidary; Moldumarova, Zhibek; Abeldina, Rauza; Makysh, Gulmira; Moldumarova, Zhuldyz Ilibaevna

    2016-01-01

    This work reports on the use of virtual tools as means of learning process activation. A good result can be achieved by combining the classical learning with modern computer technology. By creating a virtual learning environment and using multimedia learning tools one can obtain a significant result while facilitating the development of students'…

  20. What Do Students Do in a F2F CSCL Classroom? The Optimization of Multiple Communications Modes

    ERIC Educational Resources Information Center

    Chen, Wenli; Looi, Chee-Kit; Tan, Sini

    2010-01-01

    This exploratory study analyzes how students use different communication modes to share information, negotiate meaning and construct knowledge in the process of doing a group learning activity in a Primary Grade 5 blended learning environment in Singapore. Small groups of students interacted face-to-face over a computer-mediated communication…

  1. A computer program for analyzing channel geometry

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.

    1985-01-01

    The Channel Geometry Analysis Program (CGAP) provides the capability to process, analyze, and format cross-sectional data for input to flow/transport simulation models or other computational programs. CGAP allows for a variety of cross-sectional data input formats through use of variable format specification. The program accepts data from various computer media and provides for modification of machine-stored parameter values. CGAP has been devised to provide a rapid and efficient means of computing and analyzing the physical properties of an open-channel reach defined by a sequence of cross sections. CGAP's 16 options provide a wide range of methods by which to analyze and depict a channel reach and its individual cross-sectional properties. The primary function of the program is to compute the area, width, wetted perimeter, and hydraulic radius of cross sections at successive increments of water surface elevation (stage) from data that consist of coordinate pairs of cross-channel distances and land surface or channel bottom elevations. Longitudinal rates-of-change of cross-sectional properties are also computed, as are the mean properties of a channel reach. Output products include tabular lists of cross-sectional area, channel width, wetted perimeter, hydraulic radius, average depth, and cross-sectional symmetry computed as functions of stage; plots of cross sections; plots of cross-sectional area and (or) channel width as functions of stage; tabular lists of cross-sectional area and channel width computed as functions of stage for subdivisions of a cross section; plots of cross sections in isometric projection; and plots of cross-sectional area at a fixed stage as a function of longitudinal distance along an open-channel reach. A Command Procedure Language program and Job Control Language procedure exist to facilitate program execution on the U.S. Geological Survey Prime and Amdahl computer systems, respectively. (Lantz-PTT)
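    CGAP itself is a USGS program whose source is not shown here; the sketch below is a stand-alone Python illustration of the core geometric computation the abstract describes: wetted area, top width, wetted perimeter, and hydraulic radius of one surveyed cross section at a given stage, from coordinate pairs of cross-channel distance and bed elevation. The waterline clipping logic and the trapezoidal example section are assumptions made for the illustration.

```python
import numpy as np

def section_properties(x, zb, stage):
    """Area, top width, wetted perimeter and hydraulic radius of one cross
    section at a given water-surface elevation (stage).

    x  : cross-channel stations (monotone increasing)
    zb : channel-bottom / land-surface elevations at those stations
    """
    area = width = perim = 0.0
    for i in range(len(x) - 1):
        x0, x1 = float(x[i]), float(x[i + 1])
        d0, d1 = stage - float(zb[i]), stage - float(zb[i + 1])  # depths at segment ends
        if d0 <= 0.0 and d1 <= 0.0:          # segment entirely above the water surface
            continue
        if d0 < 0.0:                         # left end dry: start at the waterline
            x0 += d0 / (d0 - d1) * (x1 - x0)
            d0 = 0.0
        elif d1 < 0.0:                       # right end dry: stop at the waterline
            x1 = x0 + d0 / (d0 - d1) * (x1 - x0)
            d1 = 0.0
        dx = x1 - x0
        area += 0.5 * (d0 + d1) * dx         # trapezoidal slice of wetted area
        width += dx                          # contribution to the top width
        perim += np.hypot(dx, d1 - d0)       # wetted length along the bed
    radius = area / perim if perim > 0.0 else 0.0
    return area, width, perim, radius

# simple trapezoidal section evaluated at successive stage increments
x = np.array([0.0, 4.0, 6.0, 10.0, 12.0, 16.0])
zb = np.array([5.0, 2.0, 1.0, 1.0, 2.0, 5.0])
for stage in (1.5, 2.0, 3.0, 4.0):
    print(stage, section_properties(x, zb, stage))
```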

  2. Application of computer-aided design osteotomy template for treatment of cubitus varus deformity in teenagers: a pilot study.

    PubMed

    Zhang, Yuan Z; Lu, Sheng; Chen, Bin; Zhao, Jian M; Liu, Rui; Pei, Guo X

    2011-01-01

    Treatment of cubitus varus deformity from a malunited fracture is a challenge. Anatomically accurate correction is the key to obtaining good functional outcomes after corrective osteotomy. The aim of this study was to attempt to increase the accuracy of treatment by use of 3-dimensional (3D) computer-aided design. We describe a novel method for ensuring an accurate osteotomy in the treatment of cubitus varus deformity in teenagers by means of 3D reconstruction and reverse engineering. Between January 2006 and May 2008, 12 male and 6 female patients with cubitus varus deformities underwent scanning with spiral computed tomography (CT) preoperatively. The mean age was 15.7 years, ranging from 13 to 19 years. Three-dimensional CT image data of the affected and contralateral normal elbows were transferred to a computer workstation. Three-dimensional models of the elbow were reconstructed by use of MIMICS software. The 3D models were then processed by Imageware software. An osteotomy template that best fitted the angle and range of osteotomy was "reversely" built from the 3D model. These templates were manufactured by a rapid prototyping machine. The osteotomy templates guided the osteotomy of the elbow. An accurate angle of osteotomy was confirmed by postoperative radiography. After 12 to 24 months' follow-up, the mean postoperative carrying angle in 18 patients with cubitus varus deformity was 7.3° (range, 5° to 11°), with a mean correction of 21.9° (range, 12° to 41°). The patient-specific template technique is easy to use, can simplify the surgical act, and generates a highly accurate osteotomy for cubitus varus deformity in teenagers. Copyright © 2011 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  3. Hand held data collection and monitoring system for nuclear facilities

    DOEpatents

    Brayton, D.D.; Scharold, P.G.; Thornton, M.W.; Marquez, D.L.

    1999-01-26

    Apparatus and method is disclosed for a data collection and monitoring system that utilizes a pen based hand held computer unit which has contained therein interaction software that allows the user to review maintenance procedures, collect data, compare data with historical trends and safety limits, and input new information at various collection sites. The system has a means to allow automatic transfer of the collected data to a main computer data base for further review, reporting, and distribution purposes and uploading updated collection and maintenance procedures. The hand held computer has a running to-do list so that sample collection and other general tasks, such as housekeeping, are automatically scheduled for timely completion. A done list helps users to keep track of all completed tasks. The built-in check list assures that the work process will meet the applicable processes and procedures. Users can hand write comments or drawings with an electronic pen that allows the users to directly interface information on the screen. 15 figs.

  4. Hand held data collection and monitoring system for nuclear facilities

    DOEpatents

    Brayton, Darryl D.; Scharold, Paul G.; Thornton, Michael W.; Marquez, Diana L.

    1999-01-01

    Apparatus and method is disclosed for a data collection and monitoring system that utilizes a pen based hand held computer unit which has contained therein interaction software that allows the user to review maintenance procedures, collect data, compare data with historical trends and safety limits, and input new information at various collection sites. The system has a means to allow automatic transfer of the collected data to a main computer data base for further review, reporting, and distribution purposes and uploading updated collection and maintenance procedures. The hand held computer has a running to-do list so that sample collection and other general tasks, such as housekeeping, are automatically scheduled for timely completion. A done list helps users to keep track of all completed tasks. The built-in check list assures that the work process will meet the applicable processes and procedures. Users can hand write comments or drawings with an electronic pen that allows the users to directly interface information on the screen.

  5. A computer program for processing impedance cardiographic data: Improving accuracy through user-interactive software

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet

    1988-01-01

    This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, Electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. Applications software is written in FORTRAN 77, and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. Source code can be readily modified to accommodate alternative detection, A/D conversion and interactive graphics. The object code utilizing overlays and multitasking has a maximum of 50 Kbytes.
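    The program described above is FORTRAN 77 running under P/OS; purely as an illustration of its epoch-summary step (means and standard deviations of successive 15-second epochs), here is a small Python sketch. The sampling rate, the synthetic trace, and the function name are assumptions made for the example, not taken from the report.

```python
import numpy as np

def epoch_stats(signal, fs, epoch_s=15.0):
    """Mean and standard deviation of consecutive fixed-length epochs.

    signal  : 1-D array of samples (e.g., a derived cardiodynamic measure)
    fs      : sampling rate in Hz
    epoch_s : epoch length in seconds (15 s, as in the program described above)
    """
    n = int(round(fs * epoch_s))                 # samples per epoch
    usable = (len(signal) // n) * n              # drop the trailing partial epoch
    epochs = signal[:usable].reshape(-1, n)
    return epochs.mean(axis=1), epochs.std(axis=1, ddof=1)

# toy example: 2 minutes of a synthetic cardiac-output-like trace at 100 Hz
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0, 120, 1.0 / fs)
trace = 5.0 + 0.3 * np.sin(2 * np.pi * t / 60.0) + 0.05 * rng.standard_normal(t.size)
means, sds = epoch_stats(trace, fs)
print(means.round(3), sds.round(3))
```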

  6. Integrable dissipative exclusion process: Correlation functions and physical properties

    NASA Astrophysics Data System (ADS)

    Crampe, N.; Ragoucy, E.; Rittenberg, V.; Vanicat, M.

    2016-09-01

    We study a one-parameter generalization of the symmetric simple exclusion process on a one-dimensional lattice. In addition to the usual dynamics (where particles can hop with equal rates to the left or to the right with an exclusion constraint), annihilation and creation of pairs can occur. The system is driven out of equilibrium by two reservoirs at the boundaries. In this setting the model is still integrable: it is related to the open XXZ spin chain through a gauge transformation. This allows us to compute the full spectrum of the Markov matrix using Bethe equations. We also show that the stationary state can be expressed in a matrix product form, permitting the computation of the multipoint correlation functions as well as the mean values of the lattice and creation-annihilation currents. Finally, the variance of the lattice current is computed for a finite-size system. In the thermodynamic limit, it matches the value obtained from the associated macroscopic fluctuation theory.

  7. 48 CFR 52.227-14 - Rights in Data-General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software. Computer software—(1) Means (i) Computer programs that comprise a series of instructions, rules... or computer software documentation. Computer software documentation means owner's manuals, user's... medium, that explain the capabilities of the computer software or provide instructions for using the...

  8. Mean-Field-Game Model for Botnet Defense in Cyber-Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolokoltsov, V. N., E-mail: v.kolokoltsov@warwick.ac.uk; Bensoussan, A.

    We initiate the analysis of the response of computer owners to various offers of defence systems against a cyber-hacker (for instance, a botnet attack), as a stochastic game of a large number of interacting agents. We introduce a simple mean-field game that models their behavior. It takes into account both the random process of the propagation of the infection (controlled by the botnet herder) and the decision-making process of customers. Its stationary version turns out to be exactly solvable (but not at all trivial) under an additional natural assumption that the execution time of the decisions of the customers (say, switching the defence system on or off) is much faster than the infection rates.

  9. Generalizing the Nonlocal-Means to Super-Resolution Reconstruction

    DTIC Science & Technology

    2008-12-12

    [Abstract not available: the indexed excerpt contains only bibliography fragments citing the super-resolution image reconstruction literature (e.g., IEEE Trans. Image Process. and Proc. SPIE papers on superresolution video reconstruction and robust shift-and-add superresolution).]

  10. On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs

    NASA Technical Reports Server (NTRS)

    Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.

  11. Estimating the effects of harmonic voltage fluctuations on the temperature rise of squirrel-cage motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emanuel, A.E.

    1991-03-01

    This article presents a preliminary analysis of the effect of randomly varying harmonic voltages on the temperature rise of squirrel-cage motors. The stochastic process of random variations of harmonic voltages is defined by means of simple statistics (mean, standard deviation, type of distribution). Computational models based on a first-order approximation of the motor losses and on the Monte Carlo method yield results which prove that equipment with a large thermal time constant is capable of withstanding, for a short period of time, distortions larger than THD = 5%.

  12. Testing all six person-oriented principles in dynamic factor analysis.

    PubMed

    Molenaar, Peter C M

    2010-05-01

    All six person-oriented principles identified by Sterba and Bauer's Keynote Article can be tested by means of dynamic factor analysis in its current form. In particular, it is shown how complex interactions and interindividual differences/intraindividual change can be tested in this way. In addition, the necessity to use single-subject methods in the analysis of developmental processes is emphasized, and attention is drawn to the possibility to optimally treat developmental psychopathology by means of new computational techniques that can be integrated with dynamic factor analysis.

  13. Neural correlate of the construction of sentence meaning

    PubMed Central

    Fedorenko, Evelina; Brunner, Peter; Pritchett, Brianna; Kanwisher, Nancy

    2016-01-01

    The neural processes that underlie your ability to read and understand this sentence are unknown. Sentence comprehension occurs very rapidly, and can only be understood at a mechanistic level by discovering the precise sequence of underlying computational and neural events. However, we have no continuous and online neural measure of sentence processing with high spatial and temporal resolution. Here we report just such a measure: intracranial recordings from the surface of the human brain show that neural activity, indexed by γ-power, increases monotonically over the course of a sentence as people read it. This steady increase in activity is absent when people read and remember nonword-lists, despite the higher cognitive demand entailed, ruling out accounts in terms of generic attention, working memory, and cognitive load. Response increases are lower for sentence structure without meaning (“Jabberwocky” sentences) and word meaning without sentence structure (word-lists), showing that this effect is not explained by responses to syntax or word meaning alone. Instead, the full effect is found only for sentences, implicating compositional processes of sentence understanding, a striking and unique feature of human language not shared with animal communication systems. This work opens up new avenues for investigating the sequence of neural events that underlie the construction of linguistic meaning. PMID:27671642

  14. 48 CFR 352.227-14 - Rights in Data-Exceptional Circumstances.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....] Computer database or database means a collection of recorded information in a form capable of, and for the... databases or computer software documentation. Computer software documentation means owner's manuals, user's... nature (including computer databases and computer software documentation). This term does not include...

  15. Prognostic value of computed tomography classification systems for intra-articular calcaneus fractures.

    PubMed

    Swords, Michael P; Alton, Timothy B; Holt, Sarah; Sangeorzan, Bruce J; Shank, John R; Benirschke, Stephen K

    2014-10-01

    There are several published computed tomography (CT) classification systems for calcaneus fractures, each validated by a different standard. The goal of this study was to measure which system would best predict clinical outcomes as measured by a widely used and validated musculoskeletal health status questionnaire. Forty-nine patients with isolated intra-articular joint depression calcaneus fractures more than 2 years after treatment were identified. All had preoperative CT studies and were treated with open reduction and plate fixation using a lateral extensile approach. Four different blinded reviewers classified injuries according to the CT classification systems of Crosby and Fitzgibbons, Eastwood, and Sanders. Functional outcomes were evaluated with the Musculoskeletal Functional Assessment (MFA). The mean follow-up was 4.3 years. The mean MFA score was 15.7 (SD = 11.6), which is not significantly different from published values for midfoot injuries, hindfoot injuries, or both, 1 year after injury (mean = 22.1, SD = 18.4). The classification systems of Crosby and Fitzgibbons, Eastwood, and Sanders, the number of fragments of the posterior facet, and payer status were not significantly associated with outcome as determined by the MFA. The Sanders classification trended toward significance. Anterior process comminution and the surgeon's overall impression of severity were significantly associated with functional outcome. The amount of anterior process comminution was an important determinant of functional outcome, with increasing anterior process comminution significantly associated with worsened functional outcome (P = .04). In addition, the surgeon's overall impression of severity of injury was predictive of functional outcome (P = .02), as determined by the MFA. Level III, comparative series. © The Author(s) 2014.

  16. A quantitative model for transforming reflectance spectra into the Munsell color space using cone sensitivity functions and opponent process weights.

    PubMed

    D'Andrade, Roy G; Romney, A Kimball

    2003-05-13

    This article presents a computational model of the process through which the human visual system transforms reflectance spectra into perceptions of color. Using physical reflectance spectra data and standard human cone sensitivity functions we describe the transformations necessary for predicting the location of colors in the Munsell color space. These transformations include quantitative estimates of the opponent process weights needed to transform cone activations into Munsell color space coordinates. Using these opponent process weights, the Munsell position of specific colors can be predicted from their physical spectra with a mean correlation of 0.989.
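    The paper's fitted opponent-process weights and chip data are not given in the abstract, so the sketch below only illustrates the general form of such a model on synthetic stand-in data: a linear opponent transform of cone activations is estimated by least squares and evaluated by per-dimension correlation, mirroring the kind of mean correlation the authors report. The weight values and data here are invented for the illustration and are not the paper's.

```python
import numpy as np

# Hypothetical stand-in data: rows are colour chips, columns are L, M, S cone
# activations; the "observed" Munsell coordinates are generated from invented
# opponent weights purely so the fitting step has something to recover.
rng = np.random.default_rng(0)
cones = rng.uniform(0.1, 1.0, size=(200, 3))
true_W = np.array([[0.6, 0.4, 0.0],      # achromatic (value) channel
                   [1.0, -1.2, 0.2],     # red-green opponent channel
                   [0.4, 0.4, -0.8]])    # blue-yellow opponent channel
munsell = cones @ true_W.T + 0.01 * rng.standard_normal((200, 3))

# Least-squares estimate of the opponent-process weights: munsell ~ cones @ W
W_hat, *_ = np.linalg.lstsq(cones, munsell, rcond=None)
predicted = cones @ W_hat

# agreement between predicted and "observed" coordinates, dimension by dimension
for k, name in enumerate(("value", "red-green", "blue-yellow")):
    r = np.corrcoef(predicted[:, k], munsell[:, k])[0, 1]
    print(f"{name}: r = {r:.3f}")
```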

  17. Orientation-modulated attention effect on visual evoked potential: Application for PIN system using brain-computer interface.

    PubMed

    Wilaiprasitporn, Theerawit; Yagi, Tohru

    2015-01-01

    This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.

  18. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.

  19. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    NASA Astrophysics Data System (ADS)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article, the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in all of its aspects. The Configuration Management system plays a connecting role in the processes related to the provision and support of the services of a computer center. In view of the strong integration of IT infrastructure components through the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which imposes higher requirements on the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  20. Toward Usable Interactive Analytics: Coupling Cognition and Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; North, Chris; Chang, Remco

    Interactive analytics provide users a myriad of computational means to aid in extracting meaningful information from large and complex datasets. Much prior work focuses either on advancing the capabilities of machine-centric approaches by the data mining and machine learning communities, or human-driven methods by the visualization and CHI communities. However, these methods do not yet support a true human-machine symbiotic relationship where users and machines work together collaboratively and adapt to each other to advance an interactive analytic process. In this paper we discuss some of the inherent issues, outlining what we believe are the steps toward usable interactive analytics that will ultimately increase the effectiveness for both humans and computers to produce insights.

  1. Improving the quality of reconstructed X-ray CT images of polymer gel dosimeters: zero-scan coupled with adaptive mean filtering.

    PubMed

    Kakakhel, M B; Jirasek, A; Johnston, H; Kairn, T; Trapp, J V

    2017-03-01

    This study evaluated the feasibility of combining the 'zero-scan' (ZS) X-ray computed tomography (CT) based polymer gel dosimeter (PGD) readout with adaptive mean (AM) filtering for improving the signal to noise ratio (SNR), and compared these results with available average scan (AS) X-ray CT readout techniques. NIPAM PGD were manufactured, irradiated with 6 MV photons, CT imaged and processed in Matlab. An AM filter with two iterations and kernel sizes of 3 × 3 and 5 × 5 pixels was used in two scenarios: (a) the CT images were subjected to AM filtering (pre-processing) and these were further employed to generate AS and ZS gel images, and (b) the AS and ZS images were first reconstructed from the CT images and then AM filtering was carried out (post-processing). SNR was computed in an ROI of 30 × 30 pixels for the different pre- and post-processing cases. Results showed that the ZS technique combined with AM filtering resulted in improved SNR. Using the previously recommended 25 images for reconstruction, the ZS pre-processed protocol can give an increase of 44% and 80% in SNR for the 3 × 3 and 5 × 5 kernel sizes, respectively. However, post-processing using both techniques and filter sizes introduced blur and a reduction in the spatial resolution. Based on this work, it is possible to recommend that the ZS method be combined with pre-processed AM filtering using an appropriate kernel size to produce a large increase in the SNR of the reconstructed PGD images.
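    The exact AM filter used in the study is not specified in the abstract; the sketch below implements one common "adaptive mean" formulation (the adaptive local noise-reduction filter, in which each pixel is pulled toward its local mean in proportion to the ratio of an estimated noise variance to the local variance), run for a chosen number of iterations and kernel sizes as in the study. The noise-variance estimate and the synthetic phantom are assumptions of this illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_mean_filter(img, kernel=3, noise_var=None, iterations=2):
    """Adaptive local noise-reduction ('adaptive mean') filter.

    Each pixel is pulled toward its local mean by an amount that grows as the
    local variance approaches the estimated noise variance, so flat regions are
    smoothed strongly while high-contrast edges are largely preserved.
    """
    out = img.astype(float)
    for _ in range(iterations):
        local_mean = uniform_filter(out, size=kernel)
        local_sq = uniform_filter(out ** 2, size=kernel)
        local_var = np.clip(local_sq - local_mean ** 2, 1e-12, None)
        nv = np.median(local_var) if noise_var is None else noise_var  # crude noise estimate
        ratio = np.clip(nv / local_var, 0.0, 1.0)
        out = out - ratio * (out - local_mean)
    return out

# synthetic noisy CT-like slice, filtered with the two kernel sizes used above
rng = np.random.default_rng(1)
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 100.0
noisy = phantom + rng.normal(0.0, 10.0, phantom.shape)
print(noisy.std(),
      adaptive_mean_filter(noisy, kernel=3).std(),
      adaptive_mean_filter(noisy, kernel=5).std())
```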

  2. A New Method for Computed Tomography Angiography (CTA) Imaging via Wavelet Decomposition-Dependented Edge Matching Interpolation.

    PubMed

    Li, Zeyu; Chen, Yimin; Zhao, Yan; Zhu, Lifeng; Lv, Shengqing; Lu, Jiahui

    2016-08-01

    The interpolation technique for computed tomography angiography (CTA) images provides the ability for 3D reconstruction, as well as reducing the detection cost and the amount of radiation. However, most image interpolation algorithms cannot achieve both automation and accuracy. This study provides a new edge matching interpolation algorithm based on wavelet decomposition of CTA, comprising mark, scale and calculation (MSC) steps. Using real clinical image data, the study mainly introduces how to search for the proportional factor and use the root-mean-square operator to find a mean value. Furthermore, we re-synthesize the high-frequency and low-frequency parts of the processed image by the inverse wavelet transform and obtain the final interpolated image. MSC can compensate for the shortcomings of conventional computed tomography (CT) and magnetic resonance imaging (MRI) examinations. With the proposed synthesized images, the absorbed radiation and the examination time are significantly reduced. In clinical application, the method can help doctors find hidden lesions in time, while patients bear a lower economic burden and absorb less radiation.

  3. Geometry program for aerodynamic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1973-01-01

    A computer program that provides the geometry and boundary conditions appropriate for an analysis of a lifting, thin wing with control surfaces in linearized, subsonic, steady flow is presented. The kernel function method lifting surface theory is applied. The data which is generated by the program is stored on disk files or tapes for later use by programs which calculate an influence matrix, plot the wing planform, and evaluate the loads on the wing. In addition to processing data for subsequent use in a lifting surface analysis, the program is useful for computing area and mean geometric chords of the wing and control surfaces.

  4. Application of Krylov exponential propagation to fluid dynamics equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef; Semeraro, David

    1991-01-01

    An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in the computation of an exponential matrix-vector product similar to the one above but of a much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially of an explicit nature but which have good stability properties.
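    As a minimal, hedged sketch of the basic kernel the abstract describes (approximating exp(A)v by projection onto a Krylov subspace), the Python code below runs Arnoldi to build a small Hessenberg matrix, exponentiates it densely, and lifts the result back. This is the textbook construction, not the authors' implementation; the subspace dimension and the random test operator are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, m=20):
    """Approximate exp(A) @ v by projecting onto an m-dimensional Krylov subspace.

    Arnoldi builds an orthonormal basis V of span{v, Av, ..., A^(m-1) v} and a small
    Hessenberg matrix H; the large exponential is replaced by expm(H) of size m x m.
    """
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt orthogonalization
            H[i, j] = np.dot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # invariant subspace found: result is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# check against the dense result on a small random operator
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / np.sqrt(200)
v = rng.standard_normal(200)
print(np.linalg.norm(krylov_expv(A, v, m=30) - expm(A) @ v))
```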

  5. Cumulus cloud base height estimation from high spatial resolution Landsat data - A Hough transform approach

    NASA Technical Reports Server (NTRS)

    Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh

    1992-01-01

    A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.
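    The abstract does not give the geometric relation used; assuming the standard near-nadir approximation in which a cloud base at height h displaces its shadow horizontally by h/tan(solar elevation) along the anti-solar azimuth, the height follows from the Hough-reference-point separation as in this small sketch (the numbers are illustrative only).

```python
import math

def cloud_base_height(separation_m, sun_elevation_deg):
    """Cloud base height from the horizontal cloud-to-shadow offset.

    Near-nadir approximation (an assumption of this sketch): a base at height h
    casts a shadow offset by h / tan(solar elevation), so h = d * tan(elevation).
    """
    return separation_m * math.tan(math.radians(sun_elevation_deg))

# e.g. a 2.3 km offset between Hough reference points with the sun 35 deg above the horizon
print(round(cloud_base_height(2300.0, 35.0)), "m")
```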

  6. Input Scanners: A Growing Impact In A Diverse Marketplace

    NASA Astrophysics Data System (ADS)

    Marks, Kevin E.

    1989-08-01

    Just as newly invented photographic processes revolutionized the printing industry at the turn of the century, electronic imaging has affected almost every computer application today. To completely emulate traditionally mechanical means of information handling, computer based systems must be able to capture graphic images. Thus, there is a widespread need for the electronic camera, the digitizer, the input scanner. This paper will review how various types of input scanners are being used in many diverse applications. The following topics will be covered: - Historical overview of input scanners - New applications for scanners - Impact of scanning technology on select markets - Scanning systems issues

  7. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
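    The authors' ISSGPR implementation is not reproduced here; the sketch below is a stripped-down Python illustration of the underlying idea: a finite random Fourier feature map approximates an RBF kernel, so the model can be updated one sample at a time against a fixed-size matrix at constant cost. It computes mean predictions only, with no predictive variance or hyperparameter optimization; the feature count, lengthscale, and toy streaming problem are assumptions made for the example.

```python
import numpy as np

class SparseSpectrumRegressor:
    """Incremental regression with a random-feature approximation of an RBF kernel.

    A finite random Fourier feature map phi(x) stands in for the kernel, so each
    update touches only a D x D matrix and the per-update cost stays constant
    (a simplified sketch of the idea behind incremental sparse spectrum GPR).
    """
    def __init__(self, input_dim, n_features=100, lengthscale=1.0, noise=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.omega = rng.standard_normal((n_features, input_dim)) / lengthscale
        self.phase = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.scale = np.sqrt(2.0 / n_features)
        self.A = (noise ** 2) * np.eye(n_features)    # regularized feature Gram matrix
        self.b = np.zeros(n_features)

    def _phi(self, x):
        return self.scale * np.cos(self.omega @ x + self.phase)

    def update(self, x, y):                           # one (input, target) pair at a time
        p = self._phi(x)
        self.A += np.outer(p, p)
        self.b += y * p

    def predict(self, x):
        return self._phi(x) @ np.linalg.solve(self.A, self.b)

# streaming toy problem: learn y = sin(x) online from noisy samples
model = SparseSpectrumRegressor(input_dim=1, n_features=200, lengthscale=0.5, noise=0.1)
rng = np.random.default_rng(1)
for _ in range(2000):
    x = rng.uniform(-3.0, 3.0, 1)
    model.update(x, np.sin(x[0]) + 0.05 * rng.standard_normal())
print(model.predict(np.array([1.0])), np.sin(1.0))
```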

  8. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545

  9. A computational feedforward model predicts categorization of masked emotional body language for longer, but not for shorter, latencies.

    PubMed

    Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice

    2012-07-01

    Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.

  10. Use of surface drifters to increase resolution and accuracy of oceanic geostrophic circulation mapped from satellite only (altimetry and gravimetry)

    NASA Astrophysics Data System (ADS)

    Mulet, Sandrine; Rio, Marie-Hélène; Etienne, Hélène

    2017-04-01

    Strong improvements have been made in our knowledge of the surface ocean geostrophic circulation thanks to satellite observations. For instance, the use of the latest GOCE (Gravity field and steady-state Ocean Circulation Explorer) geoid model with altimetry data gives a good estimate of the mean oceanic circulation at spatial scales down to 125 km. However, surface drifters are essential to resolve smaller scales, so it is mandatory to carefully process drifter data and then to combine these different data sources. In this framework, the global 1/4° CNES-CLS13 Mean Dynamic Topography (MDT) and the associated mean geostrophic currents have been computed (Rio et al, 2014). First, a satellite-only MDT was computed from altimetric and gravimetric data. Then, an important part of the work was to pre-process the drifter data to extract only the geostrophic component, in order to be consistent with the physical content of the satellite-only MDT. This step includes estimating and removing the Ekman current and wind slippage. Finally, the drifters and the satellite-only MDT were combined. Similar approaches are used regionally to go further toward higher resolution, for instance in the Agulhas current or along the Brazilian coast. Also, a case study in the Gulf of Mexico intends to use drifters in the same way to improve weekly geostrophic current estimates.

  11. Multivariate statistics of the Jacobian matrices in tensor based morphometry and their application to HIV/AIDS.

    PubMed

    Lepore, Natasha; Brun, Caroline A; Chiang, Ming-Chang; Chou, Yi-Yu; Dutton, Rebecca A; Hayashi, Kiralee M; Lopez, Oscar L; Aizenstein, Howard J; Toga, Arthur W; Becker, James T; Thompson, Paul M

    2006-01-01

    Tensor-based morphometry (TBM) is widely used in computational anatomy as a means to understand shape variation between structural brain images. A 3D nonlinear registration technique is typically used to align all brain images to a common neuroanatomical template, and the deformation fields are analyzed statistically to identify group differences in anatomy. However, the differences are usually computed solely from the determinants of the Jacobian matrices that are associated with the deformation fields computed by the registration procedure. Thus, much of the information contained within those matrices gets thrown out in the process. Only the magnitude of the expansions or contractions is examined, while the anisotropy and directional components of the changes are ignored. Here we remedy this problem by computing multivariate shape change statistics using the strain matrices. As the latter do not form a vector space, means and covariances are computed on the manifold of positive-definite matrices to which they belong. We study the brain morphology of 26 HIV/AIDS patients and 14 matched healthy control subjects using our method. The images are registered using a high-dimensional 3D fluid registration algorithm, which optimizes the Jensen-Rényi divergence, an information-theoretic measure of image correspondence. The anisotropy of the deformation is then computed. We apply a manifold version of Hotelling's T2 test to the strain matrices. Our results complement those found from the determinants of the Jacobians alone and provide greater power in detecting group differences in brain structure.
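    The full pipeline (fluid registration, Jensen-Rényi matching, and the manifold Hotelling's T² test) is well beyond a snippet, but the core quantity the abstract emphasizes can be illustrated: forming a strain tensor from each Jacobian matrix and taking means and covariances of the matrix logarithms rather than of the determinants alone. The log-Euclidean treatment and the toy per-voxel data below are simplifying assumptions of this sketch, not the authors' exact statistics.

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm

def strain_tensor(J):
    """Right-stretch (strain) tensor S = (J^T J)^(1/2) of a Jacobian matrix J."""
    return np.real(sqrtm(J.T @ J))

def log_euclidean_mean(spd_matrices):
    """Mean of symmetric positive-definite matrices taken in the log-matrix domain."""
    logs = [np.real(logm(S)) for S in spd_matrices]
    return np.real(expm(np.mean(logs, axis=0)))

def log_euclidean_covariance(spd_matrices):
    """Covariance of the vectorised matrix logarithms (input to multivariate tests)."""
    vecs = np.array([np.real(logm(S)).ravel() for S in spd_matrices])
    return np.cov(vecs, rowvar=False)

# toy data: per-subject 3x3 Jacobians at a single voxel for a group of 14 subjects
rng = np.random.default_rng(0)
strains = [strain_tensor(np.eye(3) + 0.1 * rng.standard_normal((3, 3))) for _ in range(14)]
print(log_euclidean_mean(strains))
print(log_euclidean_covariance(strains).shape)    # 9 x 9
```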

  12. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    PubMed

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, which is the major reason for vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after optimization by the Pillar algorithm, in which the initial centroids are positioned much as pillars are placed to withstand pressure. The improved Pillar algorithm can optimize K-means clustering for image segmentation with respect to precision and computation time. The proposed approach to image segmentation is evaluated by comparison with K-means and Fuzzy C-means on a medical image. Using this method, identification of dark spots in the retina becomes easier, and the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing Pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest a better drug for preventing further retinal damage.
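    The Pillar initialization itself is not reconstructed here; as a hedged illustration of the clustering step, the sketch below segments an image by K-means on per-pixel features and keeps the brightest cluster as a candidate exudate mask, with scikit-learn's default k-means++ seeding standing in for the Pillar centroid placement. The cluster count, the brightness heuristic, and the toy image are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_bright_lesions(image, n_clusters=4, seed=0):
    """Cluster pixel features with K-means and return a mask of the brightest cluster.

    The paper's Pillar algorithm supplies optimized initial centroids; as a stand-in,
    this sketch uses scikit-learn's default k-means++ seeding.
    """
    h, w = image.shape[:2]
    features = image.reshape(h * w, -1).astype(float)        # per-pixel intensity/colour features
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(features)
    brightest = np.argmax(km.cluster_centers_.mean(axis=1))  # bright exudate-like cluster
    return (km.labels_ == brightest).reshape(h, w)

# toy "fundus" image: dark background with a few bright exudate-like blobs
rng = np.random.default_rng(0)
img = 0.2 + 0.05 * rng.random((128, 128))
img[40:44, 60:64] = 0.90
img[90:93, 30:34] = 0.85
mask = segment_bright_lesions(img, n_clusters=3)
print(mask.sum(), "pixels flagged")
```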

  13. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  14. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  15. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  16. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  17. 14 CFR § 1214.801 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  18. Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.

    PubMed

    Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi

    2017-02-01

    To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) Computer mouse and sterile system drape, (2) Fingers and sterile system drape, and (3) Digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of the central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method when the diameter of the target was equal to or smaller than 8 mm than for the other methods. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.

  19. Stochastic simulation of the spray formation assisted by a high pressure

    NASA Astrophysics Data System (ADS)

    Gorokhovski, M.; Chtab-Desportes, A.; Voloshina, I.; Askarova, A.

    2010-03-01

    A stochastic model of spray formation in the vicinity of the injector and in the far field is described and assessed by comparison with measurements under Diesel-like conditions. In the proposed mesh-free approach, the 3D configuration of the continuous liquid core is simulated stochastically by an ensemble of spatial trajectories of specially introduced stochastic particles. The parameters of the stochastic process are presumed from the physics of primary atomization. The spray formation model consists of computing the spatial distribution of the probability of finding the non-fragmented liquid jet in the near-injector region. This model is coupled with the KIVA II computation of the atomizing Diesel spray in two ways. First, simultaneously with the gas-phase RANS computation, the ensemble of stochastic particles is tracked and the probability field of their positions is calculated, which is used for sampling the initial locations of primary blobs. Second, the velocity increment of the gas due to the liquid injection is computed from the mean volume fraction of the simulated liquid core. Two novelties are proposed in the secondary atomization modeling. The first is due to the unsteadiness of the injection velocity: when the injection velocity increment in time is decreasing, supplementary breakup may be induced, and therefore the critical Weber number is based on this increment. Second, a new stochastic model of secondary atomization is proposed, in which intermittent turbulent stretching is taken into account as the main mechanism. The measurements reported by Arcoumanis et al. (the time history of the mean axial centre-line droplet velocity and of the centre-line Sauter Mean Diameter) are compared with the computations.

  20. Reconfigurability in MDO Problem Synthesis. Part 1

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2004-01-01

    Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. This paper, Part 1 of two companion papers, discusses the fundamentals of REMS. Part 2 illustrates the methodology in more detail.

  1. Reconfigurability in MDO Problem Synthesis. Part 2

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2004-01-01

    Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. Part 1 of these two companion papers discusses the fundamentals of REMS; this paper, Part 2, illustrates the methodology in more detail.

  2. Get SUNREL | Buildings | NREL

    Science.gov Websites

    [No abstract available: the indexed excerpt consists of fragmented definitions from the SUNREL software license agreement (the terms "SUNREL", "Computer", "Licensee", and "Licensed Single Site").]

  3. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, Jack C; Begoli, Edmon; Jose, Ajith

    2011-02-01

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.

  4. Distributed acoustic sensing: how to make the best out of the Rayleigh-backscattered energy?

    NASA Astrophysics Data System (ADS)

    Eyal, A.; Gabai, H.; Shpatz, I.

    2017-04-01

    Coherent fading noise (also known as speckle noise) affects the SNR and sensitivity of Distributed Acoustic Sensing (DAS) systems and makes them random processes of position and time. As in speckle noise, the statistical distribution of DAS SNR is particularly wide and its standard deviation (STD) roughly equals its mean (σ_SNR/⟨SNR⟩ ≈ 0.89). Trading resolution for SNR may improve the mean SNR but not necessarily narrow its distribution. Here a new approach to achieve both SNR improvement (by sacrificing resolution) and narrowing of the distribution is introduced. The method is based on acquiring high resolution complex backscatter profiles of the sensing fiber, using them to compute complex power profiles of the fiber which retain phase variation information and filtering of the power profiles. The approach is tested via a computer simulation and demonstrates distribution narrowing up to σ_SNR/⟨SNR⟩ < 0.2.

  5. High pressure jet flame numerical analysis of CO emissions by means of the flamelet generated manifolds technique

    NASA Astrophysics Data System (ADS)

    Donini, A.; Martin, S. M.; Bastiaans, R. J. M.; van Oijen, J. A.; de Goey, L. P. H.

    2013-10-01

    In the present paper a computational analysis of a high-pressure confined premixed turbulent methane/air jet flame is presented. In this scope, chemistry is reduced by the use of the Flamelet Generated Manifold method [1] and the fluid flow is modeled in an LES and RANS context. The reaction evolution is described by the reaction progress variable, the heat loss is described by the enthalpy, and the turbulence effect on the reaction is represented by the progress variable variance. The interaction between chemistry and turbulence is considered through a presumed probability density function (PDF) approach. The use of FGM as a combustion model shows that combustion features at gas turbine conditions can be satisfactorily reproduced with a reasonable computational effort. Furthermore, the present analysis indicates that the physical and chemical processes controlling carbon monoxide (CO) emissions can be captured only by means of unsteady simulations.

  6. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

    Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids make it possible to utilize the idle computer resources of an enterprise/community by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization both in cloud computing and in desktop grids is the level of redundancy (replication) for service requests/workunits. In this paper we study the optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
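
    The latency gain from replication that the paper analyzes with MAP/G/c-type queues can be illustrated with a much cruder Monte Carlo sketch in Python/NumPy. The i.i.d. exponential service times and the first-copy-wins cancellation policy below are illustrative assumptions only, and queueing delays are ignored:

      import numpy as np

      rng = np.random.default_rng(0)

      def mean_latency(r, n_jobs=100_000, rate=1.0):
          # Mean job latency when each request is replicated to r servers and
          # the fastest copy wins (cancel-on-completion); service times are
          # i.i.d. exponential, an assumption made for illustration only.
          samples = rng.exponential(1.0 / rate, size=(n_jobs, r))
          return samples.min(axis=1).mean()

      for r in (1, 2, 3, 4):
          print(f"replication level {r}: mean service latency ~ {mean_latency(r):.3f}")

    For independent exponential copies the minimum of r replicas has mean 1/(r*rate), so the printed latencies fall roughly as 1/r; the paper's models additionally capture correlated arrivals, general service-time distributions and queueing.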

  7. The aerospace plane design challenge: Credible computational fluid dynamics results

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.

    1990-01-01

    Computational fluid dynamics (CFD) is necessary in the design processes of all current aerospace plane programs. Single-stage-to-orbit (SSTO) aerospace planes with air-breathing supersonic combustion are going to be largely designed by means of CFD. The challenge of the aerospace plane design is to provide credible CFD results to work from, to assess the risk associated with the use of those results, and to certify CFD codes that produce credible results. To establish the credibility of CFD results used in design, the following topics are discussed: CFD validation vis-a-vis measurable fluid dynamics (MFD) validation; responsibility for credibility; credibility requirement; and a guide for establishing credibility. Quantification of CFD uncertainties helps to assess success risk and safety risks, and the development of CFD as a design tool requires code certification. This challenge is managed by designing the designers to use CFD effectively, by ensuring quality control, and by balancing the design process. For designing the designers, the following topics are discussed: how CFD design technology is developed; the reasons Japanese companies, by and large, produce goods of higher quality than their U.S. counterparts; teamwork as a new way of doing business; and how ideas, quality, and teaming can be brought together. Quality control for reducing the loss imparted to society begins with the quality of the CFD results used in the design process, and balancing the design process means using a judicious balance of CFD and MFD.

  8. Visual based laser speckle pattern recognition method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Park, Kyeongtaek; Torbol, Marco

    2017-04-01

    This study performed the system identification of a target structure by analyzing the laser speckle pattern taken by a camera. The laser speckle pattern is generated by the diffuse reflection of the laser beam on a rough surface of the target structure. The camera, equipped with a red filter, records the scattered speckle particles of the laser light in real time and the raw speckle image of the pixel data is fed to the graphic processing unit (GPU) in the system. The algorithm for laser speckle contrast analysis (LASCA) computes the laser speckle contrast images and the laser speckle flow images. The k-means clustering algorithm is used to classify the pixels in each frame and the clusters' centroids, which function as virtual sensors, track the displacement between different frames in the time domain. The fast Fourier transform (FFT) and the frequency domain decomposition (FDD) compute the modal properties of the structure: natural frequencies and damping ratios. This study takes advantage of the large-scale computational capability of the GPU. The algorithm is written in Compute Unified Device Architecture (CUDA C), which allows the processing of speckle images in real time.
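
    The processing chain sketched in the abstract (cluster pixels, treat cluster centroids as virtual sensors, Fourier-transform their displacement history) can be mimicked end to end on synthetic data. Everything below, including the frame rate, the two-blob layout and the 5 Hz vibration, is an assumption for illustration rather than the paper's LASCA/CUDA pipeline:

      import numpy as np

      rng = np.random.default_rng(1)
      fs = 100.0                                   # assumed frame rate (Hz)
      t = np.arange(512) / fs

      def kmeans(points, k, iters=20):
          # Minimal Lloyd's k-means over 2-D pixel coordinates.
          centroids = points[rng.choice(len(points), k, replace=False)]
          for _ in range(iters):
              labels = ((points[:, None] - centroids) ** 2).sum(-1).argmin(axis=1)
              centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
          return centroids[np.argsort(centroids[:, 1])]   # sort by column for stable labels

      # Synthetic stand-in for bright speckle pixels: two blobs ("virtual
      # sensors") whose row position oscillates at 5 Hz.
      track = []
      for ti in t:
          y = 2.0 * np.sin(2 * np.pi * 5.0 * ti)
          blob_a = rng.normal([20 + y, 16], 1.0, size=(150, 2))
          blob_b = rng.normal([44 + y, 48], 1.0, size=(150, 2))
          track.append(kmeans(np.vstack([blob_a, blob_b]), k=2)[0, 0])

      disp = np.asarray(track) - np.mean(track)
      freqs = np.fft.rfftfreq(len(t), d=1 / fs)
      print("dominant frequency:", freqs[np.abs(np.fft.rfft(disp)).argmax()], "Hz")  # ~5 Hz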

  9. Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.

    PubMed

    Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun

    2018-01-01

    Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It is well known that while the traditional Finite Element Method (FEM) promises the accurate modeling of soft tissue deformation, it still suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM, while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM in reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noise in the system state and measurement data.
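
    The filtering idea can be sketched with a plain linear Kalman filter estimating the state of a time-discretized mechanical system from noisy displacement measurements. The single-degree-of-freedom oscillator, the explicit discretization and the noise covariances below are illustrative assumptions standing in for the paper's Newmark-discretized FEM equations:

      import numpy as np

      rng = np.random.default_rng(2)

      # Illustrative 1-DOF system, state x = [displacement, velocity].
      dt, k, m, c = 0.01, 100.0, 1.0, 0.4
      A = np.array([[1.0, dt],
                    [-k / m * dt, 1.0 - c / m * dt]])
      H = np.array([[1.0, 0.0]])          # only displacement is measured
      Q = 1e-6 * np.eye(2)                # assumed process noise covariance
      R = np.array([[1e-4]])              # assumed measurement noise covariance

      x_true = np.array([1.0, 0.0])
      x_est, P = np.zeros(2), np.eye(2)
      for _ in range(500):
          x_true = A @ x_true
          z = H @ x_true + rng.normal(0.0, R[0, 0] ** 0.5, 1)
          x_est = A @ x_est                               # predict
          P = A @ P @ A.T + Q
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # update
          x_est = x_est + K @ (z - H @ x_est)
          P = (np.eye(2) - K @ H) @ P

      print("estimated state:", x_est, " true state:", x_true)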

  10. Linear and passive silicon diodes, isolators, and logic gates

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Yuan

    2013-12-01

    Silicon photonic integrated devices and circuits have offered a promising means to revolutionize information processing and computing technologies. One important reason is that these devices are compatible with the conventional complementary metal oxide semiconductor (CMOS) processing technology that dominates the current microelectronics industry. Yet the dream of building optical computers cannot be realized without breakthroughs in several key elements, including optical diodes, isolators, and logic gates with low power, high signal contrast, and large bandwidth. Photonic crystals have great power to mold the flow of light on the micrometer/nanometer scale and are a promising platform for optical integration. In this paper we present our recent efforts in the design, fabrication, and characterization of ultracompact, linear, passive on-chip optical diodes, isolators and logic gates based on silicon two-dimensional photonic crystal slabs. Both simulation and experimental results show the high performance of these newly designed devices. These linear and passive silicon devices have the unique properties of small footprint, low power requirements, large bandwidth, fast response speed, ease of fabrication, and compatibility with CMOS technology. Further improving their performance would open up a road towards photonic logic and optical computing and help to construct nanophotonic on-chip processor architectures for future optical computers.

  11. Preventing the Cyber Zombie Apocalypse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Cybercrime rates are on the rise, but what exactly does that mean? Cybercrime is any sort of crime using a computer—simple enough. And now that most people in the United States have a computer or access to one, cybercrime is more common than ever. Los Alamos National Laboratory has been working on cybersecurity techniques, processes and tools to prevent and detect cyberattacks.

  12. Optical detection of Trypanosoma cruzi in blood samples for diagnosis purpose

    NASA Astrophysics Data System (ADS)

    Alanis, Elvio; Romero, Graciela; Alvarez, Liliana; Martinez, Carlos C.; Basombrio, Miguel A.

    2004-10-01

    An optical method for detection of Trypanosoma cruzi (T. cruzi) parasites in blood samples of mice infected with Chagas disease is presented. The method is intended for use in human blood, for diagnostic purposes. A thin layer of blood infected by T. cruzi parasites, in small concentrations, is examined in an interferometric microscope in which the images of the vision field are taken by a CCD camera and temporarily stored in the memory of a host computer. The whole sample is scanned by displacing the microscope plate by means of step motors driven by the computer. Several consecutive images of the same field are taken and digitally processed by means of image temporal differentiation in order to detect whether a parasite is present in the field. Each field of view is processed in the same fashion, until the full area of the sample is covered or until a parasite is detected, in which case an acoustical warning is activated and the corresponding image is displayed, permitting the technician to corroborate the result visually. A discussion of the reliability of the method as well as a comparison with other well-established techniques are presented.
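
    A minimal sketch of the temporal-differentiation step: consecutive frames of the same field of view are subtracted, and the field is flagged when the difference exceeds a threshold. The synthetic frames, noise level and threshold below are assumptions for illustration:

      import numpy as np

      rng = np.random.default_rng(3)

      def detect_motion(frames, threshold=25.0):
          # Flag the field if any pixel changes strongly between consecutive
          # frames; a crude stand-in for the temporal differentiation step.
          return np.abs(np.diff(frames.astype(float), axis=0)).max() > threshold

      # Static background noise versus the same stack with a small bright
      # object that drifts one row per frame (a "parasite").
      static = rng.normal(100.0, 2.0, size=(5, 64, 64))
      moving = static.copy()
      for i in range(5):
          moving[i, 30 + i, 30:34] += 40.0

      print("static field flagged:", detect_motion(static))    # False
      print("moving field flagged:", detect_motion(moving))    # True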

  13. Flexible Description Language for HPC based Processing of Remote Sense Data

    NASA Astrophysics Data System (ADS)

    Nandra, Constantin; Gorgan, Dorian; Bacu, Victor

    2016-04-01

    When talking about Big Data, the most challenging aspect lies in processing them in order to gain new insight, find new patterns and gain knowledge from them. This problem is likely most apparent in the case of Earth Observation (EO) data. With ever higher numbers of data sources and increasing data acquisition rates, dealing with EO data is indeed a challenge [1]. Geoscientists should address this challenge by using flexible and efficient tools and platforms. To answer this trend, the BigEarth project [2] aims to combine the advantages of high performance computing solutions with flexible processing description methodologies in order to reduce both task execution times and task definition time and effort. As a component of the BigEarth platform, WorDeL (Workflow Description Language) [3] is intended to offer a flexible, compact and modular approach to the task definition process. WorDeL, unlike other description alternatives such as Python or shell scripts, is oriented towards the description of processing topologies, using them as abstractions over the processing programs. This feature is intended to make it an attractive alternative for users lacking programming experience. By promoting modular designs, WorDeL not only makes the processing descriptions more user-readable and intuitive, but also helps organize the processing tasks into independent sub-tasks, which can be executed in parallel on multi-processor platforms in order to improve execution times. As a BigEarth platform [4] component, WorDeL represents the means by which the user interacts with the system, describing processing algorithms in terms of existing operators and workflows [5], which are ultimately translated into sets of executable commands. The WorDeL language has been designed to help in the definition of compute-intensive, batch tasks which can be distributed and executed on high-performance, cloud or grid-based architectures in order to improve the processing time. Main references for further information: [1] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA. [2] Bigearth project - flexible processing of big earth data over high performance computing architectures. http://cgis.utcluj.ro/bigearth, (2014) [3] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 461-468, (2015). [4] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 444-454, (2015). [5] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 455-460, (2015).

  14. Estimating Missing Unit Process Data in Life Cycle Assessment Using a Similarity-Based Approach.

    PubMed

    Hou, Ping; Cai, Jiarui; Qu, Shen; Xu, Ming

    2018-05-01

    In life cycle assessment (LCA), collecting unit process data from the empirical sources (i.e., meter readings, operation logs/journals) is often costly and time-consuming. We propose a new computational approach to estimate missing unit process data solely relying on limited known data based on a similarity-based link prediction method. The intuition is that similar processes in a unit process network tend to have similar material/energy inputs and waste/emission outputs. We use the ecoinvent 3.1 unit process data sets to test our method in four steps: (1) dividing the data sets into a training set and a test set; (2) randomly removing certain numbers of data in the test set indicated as missing; (3) using similarity-weighted means of various numbers of most similar processes in the training set to estimate the missing data in the test set; and (4) comparing estimated data with the original values to determine the performance of the estimation. The results show that missing data can be accurately estimated when less than 5% data are missing in one process. The estimation performance decreases as the percentage of missing data increases. This study provides a new approach to compile unit process data and demonstrates a promising potential of using computational approaches for LCA data compilation.
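
    The similarity-weighted estimation can be sketched directly. The cosine similarity, the value of k and the synthetic "unit process" vectors below are assumptions; the paper works with the ecoinvent 3.1 data sets:

      import numpy as np

      def impute_missing(target, known_mask, library, k=5):
          # Estimate the missing entries of a unit-process vector as the
          # similarity-weighted mean of the k most similar library processes,
          # with similarity computed over the known entries only.
          sub, t = library[:, known_mask], target[known_mask]
          sims = sub @ t / (np.linalg.norm(sub, axis=1) * np.linalg.norm(t) + 1e-12)
          top = np.argsort(sims)[-k:]
          w = sims[top] / sims[top].sum()
          estimate = target.copy()
          estimate[~known_mask] = w @ library[top][:, ~known_mask]
          return estimate

      rng = np.random.default_rng(4)
      library = rng.random((50, 8))                    # 50 complete processes, 8 flows
      target = library[0] + rng.normal(0.0, 0.01, 8)   # a near-duplicate of process 0
      mask = np.ones(8, dtype=bool); mask[2] = False   # pretend flow 2 is missing
      print("true:", library[0, 2], " estimated:", impute_missing(target, mask, library)[2])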

  15. Computational power and generative capacity of genetic systems.

    PubMed

    Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E

    2016-01-01

    Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of sign and the linear nature of the signifier. Besides these semiotic features that are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in the individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as the possibility of incorporating meta-statements about the system, and thus establishes the internal capacity for its evolution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Estimating generalized skew of the log-Pearson Type III distribution for annual peak floods in Illinois

    USGS Publications Warehouse

    Oberg, Kevin A.; Mades, Dean M.

    1987-01-01

    Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)

  17. Comparison of Classifier Architectures for Online Neural Spike Sorting.

    PubMed

    Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood

    2017-04-01

    High-density, intracranial recordings from micro-electrode arrays need to undergo Spike Sorting in order to associate the recorded neuronal spikes with particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike-sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirement. We establish that the neural-network-based Self-Organizing Map classifier offers the most viable solution. A spike sorter based on the Self-Organizing Map classifier requires only 7.83% of the computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering a 3% better accuracy at 7 dB SNR.
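
    The off-chip-training / on-chip-classification split can be illustrated with the simplest of the listed schemes, a moving-centroid (nearest-centroid) classifier; the 2-D feature space, the update rate and the synthetic spikes are assumptions, not the paper's hardware designs:

      import numpy as np

      rng = np.random.default_rng(5)

      # "Off-chip training": centroids of three units in a 2-D feature space.
      centroids = np.array([[0.0, 0.0], [3.0, 0.5], [1.5, 3.0]])

      def classify(feature, centroids, alpha=0.05):
          # "On-chip" step: assign the spike to the nearest centroid, then
          # nudge that centroid toward the spike (moving-centroid update).
          j = int(np.linalg.norm(centroids - feature, axis=1).argmin())
          centroids[j] += alpha * (feature - centroids[j])
          return j

      labels = rng.integers(0, 3, size=1000)
      features = centroids[labels] + rng.normal(0.0, 0.4, size=(1000, 2))
      predicted = np.array([classify(f, centroids) for f in features])
      print("agreement with generating unit:", (predicted == labels).mean())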

  18. Directable weathering of concave rock using curvature estimation.

    PubMed

    Jones, Michael D; Farley, McKay; Butler, Joseph; Beardall, Matthew

    2010-01-01

    We address the problem of directable weathering of exposed concave rock for use in computer-generated animation or games. Previous weathering models that admit concave surfaces are computationally inefficient and difficult to control. In nature, the spheroidal and cavernous weathering rates depend on the surface curvature. Spheroidal weathering is fastest in areas with large positive mean curvature and cavernous weathering is fastest in areas with large negative mean curvature. We simulate both processes using an approximation of mean curvature on a voxel grid. Both weathering rates are also influenced by rock durability. The user controls rock durability by editing a durability graph before and during weathering simulation. Simulations of rockfall and colluvium deposition further improve realism. The profile of the final weathered rock matches the shape of the durability graph up to the effects of weathering and colluvium deposition. We demonstrate the top-down directability and visual plausibility of the resulting model through a series of screenshots and rendered images. The results include the weathering of a cube into a sphere and of a sheltered inside corner into a cavern as predicted by the underlying geomorphological models.
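
    One simple way to realize "weathering rate driven by curvature on a voxel grid" is to take the negative discrete Laplacian of the rock occupancy field as a mean-curvature proxy and erode fastest where it is most positive (convex corners and edges). The proxy, the erosion rate and the cube-shaped rock below are assumptions, not the paper's exact estimator:

      import numpy as np

      def curvature_proxy(occ):
          # Negative 6-neighbour Laplacian of the occupancy field: positive on
          # convex, exposed rock; near zero in the interior; negative in hollows.
          lap = -6.0 * occ
          for axis in range(3):
              lap += np.roll(occ, 1, axis) + np.roll(occ, -1, axis)
          return -lap

      rock = np.zeros((32, 32, 32))
      rock[8:24, 8:24, 8:24] = 1.0          # start from a solid cube of rock
      durability, rate = 2.0, 0.15          # durability is user-editable in the paper

      for _ in range(40):
          spheroidal = np.clip(curvature_proxy(rock), 0.0, None)   # fastest at corners
          rock = np.clip(rock - rate * spheroidal * rock / durability, 0.0, 1.0)

      print("remaining rock volume:", rock.sum())   # corners erode first, the cube rounds off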

  19. Analyzing student conceptual understanding of resistor networks using binary, descriptive, and computational questions

    NASA Astrophysics Data System (ADS)

    Mujtaba, Abid H.

    2018-02-01

    This paper presents a case study assessing and analyzing student engagement with and responses to binary, descriptive, and computational questions testing the concepts underlying resistor networks (series and parallel combinations). The participants of the study were undergraduate students enrolled in a university in Pakistan. The majority of students struggled with the descriptive question, and while successfully answering the binary and computational ones, they failed to build an expectation for the answer, and betrayed a significant lack of conceptual understanding in the process. The data collected were also used to analyze the relative efficacy of the three questions as a means of assessing conceptual understanding. The three questions were revealed to be uncorrelated and unlikely to be testing the same construct. The ability to answer the binary or computational question was observed to be divorced from a deeper understanding of the concepts involved.
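
    The two rules the questions probe reduce to a pair of one-line formulas; a tiny worked example (resistor values assumed) makes the expected answer concrete:

      def series(*rs):
          # Equivalent resistance in series: R = R1 + R2 + ...
          return sum(rs)

      def parallel(*rs):
          # Equivalent resistance in parallel: 1/R = 1/R1 + 1/R2 + ...
          return 1.0 / sum(1.0 / r for r in rs)

      # Two 100-ohm resistors in parallel, in series with a 50-ohm resistor.
      print(series(parallel(100.0, 100.0), 50.0))   # -> 100.0 ohms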

  20. An automatic eye detection and tracking technique for stereo video sequences

    NASA Astrophysics Data System (ADS)

    Paduru, Anirudh; Charalampidis, Dimitrios; Fouts, Brandon; Jovanovich, Kim

    2009-05-01

    Human-computer interfacing (HCI) describes a system or process with which two information processors, namely a human and a computer, attempt to exchange information. Computer-to-human (CtH) information transfer has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer (HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC communication means are the keyboard and mouse, which are already becoming a bottleneck in the effective transfer of information. The solution to the problem is the development of algorithms that allow the computer to understand human intentions based on their facial expressions, head motion patterns, and speech. In this work, we are investigating the feasibility of a stereo system to effectively determine the head position, including the head rotation angles, based on the detection of eye pupils.

  1. Towards the computation of time-periodic inertial range dynamics

    NASA Astrophysics Data System (ADS)

    van Veen, L.; Vela-Martín, A.; Kawahara, G.

    2018-04-01

    We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPU cards rather than conventional coupled CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.

  2. Coagulation kinetics beyond mean field theory using an optimised Poisson representation.

    PubMed

    Burnett, James; Ford, Ian J

    2015-05-21

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable "gauge" transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
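
    The mean-field baseline that the Poisson-representation approach goes beyond is the set of population rate equations for coagulation; for a size-independent rate they can be integrated directly. The explicit Euler step, the truncation at 50 cluster sizes and the monomer-only initial condition are assumptions for illustration:

      import numpy as np

      def smoluchowski_step(n, K=1.0, dt=1e-3):
          # One explicit Euler step of the mean-field coagulation equations
          # with size-independent rate K: clusters i and j merge into i + j.
          kmax = len(n)
          gain = np.zeros(kmax)
          for k in range(2, kmax + 1):
              i = np.arange(1, k)                       # partner sizes i and k - i
              gain[k - 1] = 0.5 * K * np.sum(n[i - 1] * n[k - i - 1])
          loss = K * n * n.sum()
          return n + dt * (gain - loss)

      n = np.zeros(50)
      n[0] = 1.0                                        # monomers only at t = 0
      for _ in range(2000):
          n = smoluchowski_step(n)
      print("total cluster concentration:", n.sum())    # decreases as clusters merge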

  3. Affixation in semantic space: Modeling morpheme meanings with compositional distributional semantics.

    PubMed

    Marelli, Marco; Baroni, Marco

    2015-07-01

    The present work proposes a computational model of morpheme combination at the meaning level. The model moves from the tenets of distributional semantics, and assumes that word meanings can be effectively represented by vectors recording their co-occurrence with other words in a large text corpus. Given this assumption, affixes are modeled as functions (matrices) mapping stems onto derived forms. Derived-form meanings can be thought of as the result of a combinatorial procedure that transforms the stem vector on the basis of the affix matrix (e.g., the meaning of nameless is obtained by multiplying the vector of name with the matrix of -less). We show that this architecture accounts for the remarkable human capacity of generating new words that denote novel meanings, correctly predicting semantic intuitions about novel derived forms. Moreover, the proposed compositional approach, once paired with a whole-word route, provides a new interpretative framework for semantic transparency, which is here partially explained in terms of ease of the combinatorial procedure and strength of the transformation brought about by the affix. Model-based predictions are in line with the modulation of semantic transparency on explicit intuitions about existing words, response times in lexical decision, and morphological priming. In conclusion, we introduce a computational model to account for morpheme combination at the meaning level. The model is data-driven, theoretically sound, and empirically supported, and it makes predictions that open new research avenues in the domain of semantic processing. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
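
    The core operation (a derived-form vector obtained by applying an affix matrix to a stem vector, with the matrix estimated from known stem/derived pairs) can be sketched with ordinary least squares. The random vectors below are synthetic stand-ins for corpus-derived distributional vectors:

      import numpy as np

      rng = np.random.default_rng(7)
      dim = 50

      stems = rng.normal(size=(200, dim))                       # e.g. "name", "hope", ...
      true_affix = rng.normal(size=(dim, dim)) / np.sqrt(dim)   # the "-less" function
      derived = stems @ true_affix + 0.05 * rng.normal(size=(200, dim))

      # Estimate the affix matrix from the training pairs by least squares.
      affix_hat, *_ = np.linalg.lstsq(stems, derived, rcond=None)

      # Predict the meaning of a novel derived form from an unseen stem.
      new_stem = rng.normal(size=dim)
      pred, target = new_stem @ affix_hat, new_stem @ true_affix
      cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target))
      print("cosine to the held-out derived vector:", round(float(cos), 3))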

  4. Scheme for Entering Binary Data Into a Quantum Computer

    NASA Technical Reports Server (NTRS)

    Williams, Colin

    2005-01-01

    A quantum algorithm provides for the encoding of an exponentially large number of classical data bits by use of a smaller (polynomially large) number of quantum bits (qubits). The development of this algorithm was prompted by the need, heretofore not satisfied, for a means of entering real-world binary data into a quantum computer. The data format provided by this algorithm is suitable for subsequent ultrafast quantum processing of the entered data. Potential applications lie in disciplines (e.g., genomics) in which one needs to search for matches between parts of very long sequences of data. For example, the algorithm could be used to encode the N-bit-long human genome in only log2N qubits. The resulting log2N-qubit state could then be used for subsequent quantum data processing - for example, to perform rapid comparisons of sequences.
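
    The record does not spell out the encoding, but one standard way to place N classical bits in log2(N) qubits is amplitude encoding: treat the bit string as a state vector over the 2^n basis states and normalize it. The sketch below shows only that bookkeeping and is a plausible reading of the record, not the algorithm it describes:

      import numpy as np

      def amplitude_encode(bits):
          # Map an N-bit string onto the amplitudes of a log2(N)-qubit state.
          amps = np.array(bits, dtype=float)
          if not amps.any():
              raise ValueError("an all-zero string cannot be normalized")
          n_qubits = int(np.log2(len(bits)))
          return n_qubits, amps / np.linalg.norm(amps)

      n_qubits, state = amplitude_encode([1, 0, 1, 1, 0, 0, 1, 0])   # N = 8 bits
      print(n_qubits, "qubits, amplitudes:", np.round(state, 3))     # 3 qubits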

  5. Gravity measurement, processing and evaluation: Test cases de Peel and South Limburg

    NASA Astrophysics Data System (ADS)

    Nohlmans, Ron

    1990-05-01

    A general overview is given of the measurement and adjustment of a gravity network and of the computation of some output parameters of gravimetry: gravity values, gravity anomalies and mean block anomalies. An overview of developments in gravimetry, globally and in the Netherlands, to date is also given. The basic theory of relative gravity measurements is studied and a description of the most commonly used instrument, the LaCoste and Romberg gravimeter, is given. The surveys done in the scope of this study are described. A more detailed impression of the adjustment procedure and the results of the adjustment are given. A closer look is taken at the more geophysical side of gravimetry: gravity reduction, the computation of anomalies and the correlation with elevation. The interpolation of gravity and the covariance of gravity anomalies are addressed.

  6. An Intelligent Pictorial Information System

    NASA Astrophysics Data System (ADS)

    Lee, Edward T.; Chang, B.

    1987-05-01

    In examining the history of computer application, we discover that early computer systems were developed primarily for applications related to scientific computation, as in weather prediction, aerospace applications, and nuclear physics applications. At this stage, the computer system served as a big calculator to perform, in the main, manipulation of numbers. Then it was found that computer systems could also be used for business applications, information storage and retrieval, word processing, and report generation. The history of computer application is summarized in Table I. The complexity of pictures makes picture processing much more difficult than number and alphanumerical processing. Therefore, new techniques, new algorithms, and above all, new pictorial knowledge, [1] are needed to overcome the limitations of existing computer systems. New frontiers in designing computer systems are the ways to handle the representation,[2,3] classification, manipulation, processing, storage, and retrieval of pictures. Especially, the ways to deal with similarity measures and the meaning of the word "approximate" and the phrase "approximate reasoning" are an important and indispensable part of an intelligent pictorial information system. [4,5] The main objective of this paper is to investigate the mathematical foundation for the effective organization and efficient retrieval of pictures in similarity-directed pictorial databases, [6] based on similarity retrieval techniques [7] and fuzzy languages [8]. The main advantage of this approach is that similar pictures are stored logically close to each other by using quantitative similarity measures. Thus, for answering queries, the amount of picture data needed to be searched can be reduced and the retrieval time can be improved. In addition, in a pictorial database, very often it is desired to find pictures (or feature vectors, histograms, etc.) that are most similar to or most dissimilar [9] to a test picture (or feature vector). Using similarity measures, one can not only store similar pictures logically or physically close to each other in order to improve retrieval or updating efficiency, but can also use such similarity measures to answer fuzzy queries involving nonexact retrieval conditions. In this paper, similarity-directed pictorial databases involving geometric figures, chromosome images, [10] leukocyte images, cardiomyopathy images, and satellite images [11] are presented as illustrative examples.

  7. The meaning of computers to a group of men who are homeless.

    PubMed

    Miller, Kathleen Swenson; Bunch-Harrison, Stacey; Brumbaugh, Brett; Kutty, Rekha Sankaran; FitzGerald, Kathleen

    2005-01-01

    The purpose of this pilot study was to explore the experience with computers and the meaning of computers to a group of homeless men living in a long-term shelter. This descriptive exploratory study used semistructured interviews with seven men who had been given access to computers and had participated in individually tailored occupation based interventions through a Work Readiness Program. Three themes emerged from analyzing the interviews: access to computers, computers as a bridge to life-skill development, and changed self-perceptions as a result of connecting to technology. Because they lacked computer knowledge and feared failure, the majority of study participants had not sought out computers available through public access. The need for access to computers, the potential use of computers as a medium for intervention, and the meaning of computers to these men who represent the digital divide are described in this study.

  8. Computer literacy among first year medical students in a developing country: a cross sectional study.

    PubMed

    Ranasinghe, Priyanga; Wickramasinghe, Sashimali A; Pieris, Wa Rasanga; Karunathilake, Indika; Constantine, Godwin R

    2012-09-14

    The use of computer assisted learning (CAL) has enhanced undergraduate medical education. CAL improves performance at examinations, develops problem solving skills and increases student satisfaction. The study evaluates computer literacy among first year medical students in Sri Lanka. The study was conducted at Faculty of Medicine, University of Colombo, Sri Lanka between August-September 2008. First year medical students (n = 190) were invited for the study. Data on computer literacy and associated factors were collected by an expert-validated pre-tested self-administered questionnaire. Computer literacy was evaluated by testing knowledge on 6 domains: common software packages, operating systems, database management and the usage of internet and E-mail. A linear regression was conducted using total score for computer literacy as the continuous dependent variable and other independent covariates. The sample size was 181 (response rate 95.3%); 49.7% were male. The majority of the students (77.3%) owned a computer (males 74.4%, females 80.2%). Students had gained their present computer knowledge by a formal training programme (64.1%), self learning (63.0%) or by peer learning (49.2%). The students used computers predominantly for word processing (95.6%), entertainment (95.0%), web browsing (80.1%) and preparing presentations (76.8%). The majority of the students (75.7%) expressed their willingness for a formal computer training programme at the faculty. The mean score for the computer literacy questionnaire was 48.4 ± 20.3, with no significant gender difference (males 47.8 ± 21.1, females 48.9 ± 19.6). 47.9% of students had a score of less than 50% for the computer literacy questionnaire. Students from Colombo district, Western Province, and students owning a computer had a significantly higher mean score in comparison to other students (p < 0.001). In the linear regression analysis, formal computer training was the strongest predictor of computer literacy (β = 13.034), followed by using internet facility, being from Western province, using computers for Web browsing and computer programming, computer ownership and doing IT (Information Technology) as a subject in GCE (A/L) examination. Sri Lankan medical undergraduates had a low-intermediate level of computer literacy. There is a need to improve computer literacy, by increasing computer training in schools, or by introducing computer training in the initial stages of the undergraduate programme. These two options require improvement in infrastructure and other resources.

  9. Linear response to nonstationary random excitation.

    NASA Technical Reports Server (NTRS)

    Hasselman, T.

    1972-01-01

    Development of a method for computing the mean-square response of linear systems to nonstationary random excitation of the form given by y(t) = f(t) x(t), in which x(t) is a stationary process and f(t) is deterministic. The method is suitable for application to multidegree-of-freedom systems when the mean-square response at a point due to excitation applied at another point is desired. Both the stationary process, x(t), and the modulating function, f(t), may be arbitrary. The method utilizes a fundamental component of transient response, dependent only on x(t) and the system and independent of f(t), to synthesize the total response. The role played by this component is analogous to that played by the Green's function or impulse response function in the convolution integral.

  10. Design of efficient stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Majumder, D. K.; Thornton, W. A.

    1976-01-01

    A method to produce efficient piecewise uniform stiffened shells of revolution is presented. The approach uses a first order differential equation formulation for the shell prebuckling and buckling analyses and the necessary conditions for an optimum design are derived by a variational approach. A variety of local yielding and buckling constraints and the general buckling constraint are included in the design process. The local constraints are treated by means of an interior penalty function and the general buckling load is treated by means of an exterior penalty function. This allows the general buckling constraint to be included in the design process only when it is violated. The self-adjoint nature of the prebuckling and buckling formulations is used to reduce the computational effort. Results for four conical shells and one spherical shell are given.

  11. Cloud tracing: Visualization of the mixing of fluid elements in convection-diffusion systems

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Smith, Philip J.

    1993-01-01

    This paper describes a highly interactive method for computer visualization of the basic physical process of dispersion and mixing of fluid elements in convection-diffusion systems. It is based on transforming the vector field from the traditional Eulerian reference frame into a Lagrangian reference frame. Fluid elements are traced through the vector field to obtain the mean path as well as the statistical dispersion of the fluid elements about the mean position, using added scalar information about the root mean square value of the vector field and its Lagrangian time scale. In this way, clouds of fluid elements are traced, not just mean paths. We have used this method to visualize the simulation of an industrial incinerator to help identify mechanisms for poor mixing.
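
    The Lagrangian tracing of a cloud, rather than a single mean path, can be sketched with a Langevin-type model: each fluid element is advected by the mean velocity plus an exponentially correlated random fluctuation set by the RMS value and the Lagrangian time scale. The uniform mean flow and the parameter values are assumptions for illustration:

      import numpy as np

      rng = np.random.default_rng(8)

      def trace_cloud(n=500, steps=200, dt=0.01, u_rms=0.3, t_lagr=0.1):
          # Mean advection plus an Ornstein-Uhlenbeck velocity fluctuation
          # with the given RMS value and Lagrangian time scale.
          mean_u = np.array([1.0, 0.0])          # illustrative uniform mean flow
          x = np.zeros((n, 2))
          u_prime = np.zeros((n, 2))
          a = np.exp(-dt / t_lagr)
          for _ in range(steps):
              u_prime = a * u_prime + np.sqrt(1 - a**2) * u_rms * rng.normal(size=(n, 2))
              x += dt * (mean_u + u_prime)
          return x

      cloud = trace_cloud()
      print("mean position:", cloud.mean(axis=0), " dispersion (std):", cloud.std(axis=0))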

  12. Improving Search Properties in Genetic Programming

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.; DeWeese, Scott

    1997-01-01

    With advancing computer processing capabilities, practical computer applications are mostly limited by the amount of human programming required to accomplish a specific task. This necessary human participation creates many problems, such as dramatically increased cost. To alleviate the problem, computers must become more autonomous. In other words, computers must be capable of programming/reprogramming themselves to adapt to changing environments/tasks/demands/domains. Evolutionary computation offers potential means, but it must be advanced beyond its current practical limitations. Evolutionary algorithms model nature. They maintain a population of structures representing potential solutions to the problem at hand. These structures undergo a simulated evolution by means of mutation, crossover, and a Darwinian selective pressure. Genetic programming (GP) is the most promising example of an evolutionary algorithm. In GP, the structures that evolve are trees, which is a dramatic departure from previously used representations such as strings in genetic algorithms. The space of potential trees is defined by means of their elements: functions, which label internal nodes, and terminals, which label leaves. By attaching semantic interpretation to those elements, trees can be interpreted as computer programs (given an interpreter), evolved architectures, etc. JSC has begun exploring GP as a potential tool for its long-term project on evolving dextrous robotic capabilities. Last year we identified representation redundancies as the primary source of inefficiency in GP. Subsequently, we proposed a method to use problem constraints to reduce those redundancies, effectively reducing GP complexity. This method was implemented afterwards at the University of Missouri. This summer, we have evaluated the payoff from using problem constraints to reduce search complexity on two classes of problems: learning boolean functions and solving the forward kinematics problem. We have also developed and implemented methods to use additional problem heuristics to fine-tune the searchable space, and to use typing information to further reduce the search space. Additional improvements have been proposed, but they are yet to be explored and implemented.
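
    The tree representation at the heart of GP can be sketched in a few lines: nested lists whose internal nodes come from a function set and whose leaves come from a terminal set, plus a point mutation that swaps out a random subtree. The function and terminal sets below are illustrative, not those of the JSC work:

      import random

      FUNCTIONS = {"+": 2, "*": 2}           # illustrative function set (name -> arity)
      TERMINALS = ["x", 1.0, 2.0]

      def random_tree(depth=3):
          # Grow a random expression tree as nested lists: [op, child, child].
          if depth == 0 or random.random() < 0.3:
              return random.choice(TERMINALS)
          op = random.choice(list(FUNCTIONS))
          return [op] + [random_tree(depth - 1) for _ in range(FUNCTIONS[op])]

      def evaluate(tree, x):
          if not isinstance(tree, list):
              return x if tree == "x" else tree
          a, b = (evaluate(child, x) for child in tree[1:])
          return a + b if tree[0] == "+" else a * b

      def mutate(tree, depth=2):
          # Point mutation: replace a randomly chosen subtree with a fresh one.
          if not isinstance(tree, list) or random.random() < 0.3:
              return random_tree(depth)
          tree = list(tree)
          i = random.randrange(1, len(tree))
          tree[i] = mutate(tree[i], depth)
          return tree

      random.seed(0)
      t = random_tree()
      print(t, "->", evaluate(t, x=2.0))
      print(mutate(t), "(mutated)")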

  13. Volume I. Percussion Sextet. (original Composition). Volume II. The Simulation of Acoustical Space by Means of Physical Modeling.

    NASA Astrophysics Data System (ADS)

    Manzara, Leonard Charles

    1990-01-01

    The dissertation is in two parts:. 1. Percussion Sextet. The Percussion Sextet is a one movement musical composition with a length of approximately fifteen minutes. It is for six instrumentalists, each on a number of percussion instruments. The overriding formal problem was to construct a coherent and compelling structure which fuses a diversity of musical materials and textures into a dramatic whole. Particularly important is the synthesis of opposing tendencies contained in stochastic and deterministic processes: global textures versus motivic detail, and randomness versus total control. Several compositional techniques are employed in the composition. These methods of composition will be aided, in part, by the use of artificial intelligence techniques programmed on a computer. Finally, the percussion ensemble is the ideal medium to realize the above processes since it encompasses a wide range of both pitched and unpitched timbres, and since a great variety of textures and densities can be created with a certain economy of means. 2. The simulation of acoustical space by means of physical modeling. This is a written report describing the research and development of a computer program which simulates the characteristics of acoustical space in two dimensions. With the computer program the user can simulate most conventional acoustical spaces, as well as those physically impossible to realize in the real world. The program simulates acoustical space by means of geometric modeling. This involves defining wall equations, phantom source points and wall diffusions, and then processing input files containing digital signals through the program, producing output files ready for digital to analog conversion. The user of the program is able to define wall locations and wall reflectivity and roughness characteristics, all of which can be changed over time. Sound source locations are also definable within the acoustical space and these locations can be changed independently at any rate of speed. The sounds themselves are generated from any external sound synthesis program or appropriate sampling system. Finally, listener location and orientation is also user definable and dynamic in nature. A Receive-ReBroadcast (RRB) model is used to play back the sound and is definable from two to eight channels of sound. (Abstract shortened with permission of author.).
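
    The phantom-source idea in the second part can be illustrated in two dimensions: mirroring the sound source across each wall of a rectangular room gives first-order image sources, and each image contributes a delayed, attenuated copy of the signal at the listener. The rectangular room, the single reflectivity value and the 1/r attenuation are assumptions for illustration, not the program's full model of wall roughness and motion:

      import numpy as np

      def first_order_images(src, room=(10.0, 8.0)):
          # Phantom (image) sources obtained by mirroring the source across
          # each of the four walls of a 2-D rectangular room.
          (x, y), (w, h) = src, room
          return np.array([[-x, y], [2 * w - x, y], [x, -y], [x, 2 * h - y]])

      def arrivals(src, listener, room=(10.0, 8.0), c=343.0, reflectivity=0.8):
          # Delay (s) and relative gain of the direct path and the four
          # first-order reflections at the listener position.
          paths = np.vstack([src, first_order_images(src, room)])
          dists = np.linalg.norm(paths - listener, axis=1)
          gains = np.concatenate([[1.0], np.full(4, reflectivity)]) / np.maximum(dists, 1e-6)
          return dists / c, gains

      delays, gains = arrivals(np.array([2.0, 3.0]), np.array([7.0, 5.0]))
      print("arrival times (ms):", np.round(delays * 1e3, 2))
      print("relative gains:", np.round(gains, 3))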

  14. Cadaveric and three-dimensional computed tomography study of the morphology of the scapula with reference to reversed shoulder prosthesis

    PubMed Central

    Torrens, Carlos; Corrales, Monica; Gonzalez, Gemma; Solano, Alberto; Cáceres, Enrique

    2008-01-01

    Purpose The purpose of this study is to analyze the morphology of the scapula with reference to the glenoid component implantation in reversed shoulder prosthesis, in order to improve primary fixation of the component. Methods Seventy-three 3-dimensional computed tomography scans of the scapula and 108 dry scapular specimens were analyzed to determine the anterior and posterior length of the glenoid neck, the angle between the glenoid surface and the upper posterior column of the scapula and the angle between the major cranio-caudal glenoid axis and the base of the coracoid process and the upper posterior column. Results The anterior and posterior length of glenoid neck was classified into two groups named "short-neck" and "long-neck" with significant differences between them. The angle between the glenoid surface and the upper posterior column of the scapula was also classified into two different types: type I (mean 50°–52°) and type II (mean 62.50°–64°), with significant differences between them (p < 0.001). The angle between the major cranio-caudal glenoid axis and the base of the coracoid process averaged 18.25° while the angle with the upper posterior column of the scapula averaged 8°. Conclusion Scapular morphological variability calls for individual adjustments of glenoid component implantation in reversed total shoulder prosthesis. Three-dimensional computed tomography of the scapula constitutes an important tool when planning reversed prostheses implantation. PMID:18847487

  15. Cadaveric and three-dimensional computed tomography study of the morphology of the scapula with reference to reversed shoulder prosthesis.

    PubMed

    Torrens, Carlos; Corrales, Monica; Gonzalez, Gemma; Solano, Alberto; Cáceres, Enrique

    2008-10-10

    The purpose of this study is to analyze the morphology of the scapula with reference to the glenoid component implantation in reversed shoulder prosthesis, in order to improve primary fixation of the component. Seventy-three 3-dimensional computed tomography scans of the scapula and 108 dry scapular specimens were analyzed to determine the anterior and posterior length of the glenoid neck, the angle between the glenoid surface and the upper posterior column of the scapula and the angle between the major cranio-caudal glenoid axis and the base of the coracoid process and the upper posterior column. The anterior and posterior length of glenoid neck was classified into two groups named "short-neck" and "long-neck" with significant differences between them. The angle between the glenoid surface and the upper posterior column of the scapula was also classified into two different types: type I (mean 50 degrees-52 degrees) and type II (mean 62.50 degrees-64 degrees), with significant differences between them (p < 0.001). The angle between the major cranio-caudal glenoid axis and the base of the coracoid process averaged 18.25 degrees while the angle with the upper posterior column of the scapula averaged 8 degrees. Scapular morphological variability calls for individual adjustments of glenoid component implantation in reversed total shoulder prosthesis. Three-dimensional computed tomography of the scapula constitutes an important tool when planning reversed prostheses implantation.

  16. Single-Photon Emission Computed Tomography/Computed Tomography Imaging in a Rabbit Model of Emphysema Reveals Ongoing Apoptosis In Vivo

    PubMed Central

    Goldklang, Monica P.; Tekabe, Yared; Zelonina, Tina; Trischler, Jordis; Xiao, Rui; Stearns, Kyle; Romanov, Alexander; Muzio, Valeria; Shiomi, Takayuki; Johnson, Lynne L.

    2016-01-01

    Evaluation of lung disease is limited by the inability to visualize ongoing pathological processes. Molecular imaging that targets cellular processes related to disease pathogenesis has the potential to assess disease activity over time to allow intervention before lung destruction. Because apoptosis is a critical component of lung damage in emphysema, a functional imaging approach was taken to determine if targeting apoptosis in a smoke exposure model would allow the quantification of early lung damage in vivo. Rabbits were exposed to cigarette smoke for 4 or 16 weeks and underwent single-photon emission computed tomography/computed tomography scanning using technetium-99m–rhAnnexin V-128. Imaging results were correlated with ex vivo tissue analysis to validate the presence of lung destruction and apoptosis. Lung computed tomography scans of long-term smoke–exposed rabbits exhibit anatomical similarities to human emphysema, with increased lung volumes compared with controls. Morphometry on lung tissue confirmed increased mean linear intercept and destructive index at 16 weeks of smoke exposure and compliance measurements documented physiological changes of emphysema. Tissue and lavage analysis displayed the hallmarks of smoke exposure, including increased tissue cellularity and protease activity. Technetium-99m–rhAnnexin V-128 single-photon emission computed tomography signal was increased after smoke exposure at 4 and 16 weeks, with confirmation of increased apoptosis through terminal deoxynucleotidyl transferase dUTP nick end labeling staining and increased tissue neutral sphingomyelinase activity in the tissue. These studies not only describe a novel emphysema model for use with future therapeutic applications, but, most importantly, also characterize a promising imaging modality that identifies ongoing destructive cellular processes within the lung. PMID:27483341

  17. Structure, function, and behaviour of computational models in systems biology

    PubMed Central

    2013-01-01

    Background Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions their full meaning is not yet formally specified and only described in natural language. Results We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research. PMID:23721297

  18. A computer-human interaction model to improve the diagnostic accuracy and clinical decision-making during 12-lead electrocardiogram interpretation.

    PubMed

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Breen, Cathal; Guldenring, Daniel; Gaffney, Robert; Gallagher, Anthony G; Peace, Aaron J; Henn, Pat

    2016-12-01

    The 12-lead Electrocardiogram (ECG) presents a plethora of information and demands extensive knowledge and a high cognitive workload to interpret. Whilst the ECG is an important clinical tool, it is frequently incorrectly interpreted. Even expert clinicians are known to impulsively provide a diagnosis based on their first impression and often miss co-abnormalities. Given it is widely reported that there is a lack of competency in ECG interpretation, it is imperative to optimise the interpretation process. Predominantly the ECG interpretation process remains a paper-based approach and whilst computer algorithms are used to assist interpreters by providing printed computerised diagnoses, there is a lack of interactive human-computer interfaces to guide and assist the interpreter. An interactive computing system was developed to guide the decision-making process of a clinician when interpreting the ECG. The system decomposes the interpretation process into a series of interactive sub-tasks and encourages the clinician to systematically interpret the ECG. We have named this model 'Interactive Progressive based Interpretation' (IPI) as the user cannot 'progress' unless they complete each sub-task. Using this model, the ECG is segmented into five parts and presented over five user interfaces (1: Rhythm interpretation, 2: Interpretation of the P-wave morphology, 3: Limb lead interpretation, 4: QRS morphology interpretation with chest lead and rhythm strip presentation and 5: Final review of 12-lead ECG). The IPI model was implemented using emerging web technologies (i.e. HTML5, CSS3, AJAX, PHP and MySQL). It was hypothesised that this system would reduce the number of interpretation errors and increase diagnostic accuracy in ECG interpreters. To test this, we compared the diagnostic accuracy of clinicians when they used the standard approach (control cohort) with clinicians who interpreted the same ECGs using the IPI approach (IPI cohort). For the control cohort, the (mean; standard deviation; confidence interval) of the ECG interpretation accuracy was (45.45%; SD=18.1%; CI=42.07, 48.83). The mean ECG interpretation accuracy rate for the IPI cohort was 58.85% (SD=42.4%; CI=49.12, 68.58), which indicates a positive mean difference of 13.4% (CI=4.45, 22.35). An N-1 Chi-square test of independence indicated a 92% chance that the IPI cohort will have a higher accuracy rate. Interpreter self-rated confidence also increased between cohorts from a mean of 4.9/10 in the control cohort to 6.8/10 in the IPI cohort (p=0.06). Whilst the IPI cohort had greater diagnostic accuracy, the duration of ECG interpretation was six times longer when compared to the control cohort. We have developed a system that segments and presents the ECG across five graphical user interfaces. Results indicate that this approach improves diagnostic accuracy but at the expense of time, which is a valuable resource in medical practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Goce and Its Role in Combined Global High Resolution Gravity Field Determination

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Pail, R.; Gruber, T.

    2013-12-01

    Combined high-resolution gravity field models serve as a mandatory basis for describing static and dynamic processes in system Earth. Ocean dynamics can be modeled with reference to a highly accurate geoid as reference surface, and solid earth processes are driven by the gravity field. Geodetic disciplines such as height system determination also depend on highly precise gravity field information. To fulfill the various requirements concerning resolution and accuracy, every kind of gravity field information, that is, satellite as well as terrestrial and altimetric gravity field observations, has to be included in one combination process. A key role here is reserved for GOCE observations, which contribute their optimal signal content in the long- to medium-wavelength part and enable a more accurate gravity field determination than ever before, especially in areas where no highly accurate terrestrial gravity field observations are available, such as South America, Asia or Africa. For our contribution we prepare a combined high-resolution gravity field model up to d/o 720 based on full normal equations, including recent GOCE, GRACE and terrestrial/altimetric data. For all data sets, normal equations are set up separately, relatively weighted against each other in the combination step, and solved. This procedure is computationally challenging and can only be performed using supercomputers. We put special emphasis on the combination process, for which we specifically modified our procedure to include GOCE data optimally in the combination. Furthermore, we modified our terrestrial/altimetric data sets, which should result in an improved outcome. With our model, in which we included the newest GOCE TIM4 gradiometry results, we can show how GOCE contributes to a combined gravity field solution, especially in areas of poor terrestrial data coverage. The model is validated by independent GPS leveling data in selected regions as well as by computation of the mean dynamic topography over the oceans. Further, we analyze the statistical error estimates derived from full covariance propagation and compare them with the absolute validation against independent data sets.
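
    The combination step amounts to relatively weighting and summing the separate normal-equation systems before solving, x = (sum_i w_i N_i)^-1 (sum_i w_i b_i). The toy systems and weights below are assumptions that only illustrate the mechanics, at a scale far below d/o 720:

      import numpy as np

      def combine_normals(systems, weights):
          # Combine normal-equation systems (N_i, b_i) with relative weights
          # w_i and solve for the parameters.
          N = sum(w * Ni for w, (Ni, _) in zip(weights, systems))
          b = sum(w * bi for w, (_, bi) in zip(weights, systems))
          return np.linalg.solve(N, b)

      rng = np.random.default_rng(9)
      x_true = np.array([2.0, -1.0, 0.5])

      def toy_system(n_obs, noise):
          A = rng.normal(size=(n_obs, 3))
          y = A @ x_true + rng.normal(0.0, noise, n_obs)
          return A.T @ A, A.T @ y                    # (N_i, b_i)

      satellite = toy_system(200, 0.05)              # stands in for the GOCE/GRACE normals
      terrestrial = toy_system(500, 0.20)            # stands in for terrestrial/altimetric data
      print(combine_normals([satellite, terrestrial], weights=[1.0, 0.3]))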

  20. Integration of communications and tracking data processing simulation for space station

    NASA Technical Reports Server (NTRS)

    Lacovara, Robert C.

    1987-01-01

    A simplified model of the communications network for the Communications and Tracking Data Processing System (CTDP) was developed. It was simulated by use of programs running on several on-site computers. These programs communicate with one another by means of both local area networks and direct serial connections. The domain of the model and its simulation is from Orbital Replaceable Unit (ORU) interface to Data Management Systems (DMS). The simulation was designed to allow status queries from remote entities across the DMS networks to be propagated through the model to several simulated ORU's. The ORU response is then propagated back to the remote entity which originated the request. Response times at the various levels were investigated in a multi-tasking, multi-user operating system environment. Results indicate that the effective bandwidth of the system may be too low to support expected data volume requirements under conventional operating systems. Instead, some form of embedded process control program may be required on the node computers.

  1. The research of computer multimedia assistant in college English listening

    NASA Astrophysics Data System (ADS)

    Zhang, Qian

    2012-04-01

    With the development of network information technology, education faces increasingly serious challenges. The application of computer multimedia breaks with traditional foreign language teaching and brings new challenges and opportunities for education. Through multimedia, the teaching process is enriched with animation, images, voice and text, which can improve learners' initiative and sense of purpose and greatly develop learning efficiency. Traditional foreign language teaching relies on text-based learning; with this method, theoretical performance is good but practical application is weak. Even after long use of computer multimedia in foreign language teaching, many teachers still hold prejudices against it, so the method does not achieve its full effect. For these reasons, this research has significant meaning for improving the quality of foreign language teaching.

  2. Soft computing methods for geoidal height transformation

    NASA Astrophysics Data System (ADS)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
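
    As a point of reference for the soft-computing models, the conventional polynomial approach mentioned above amounts to an ordinary least-squares surface fit. The sketch below is a generic illustration with synthetic coordinates and geoid undulations, not the study's test data or its ANFIS/ANN models.

        import numpy as np

        def fit_geoid_polynomial(lat, lon, N_geoid, degree=2):
            """Fit N(lat, lon) with a bivariate polynomial by least squares and
            return a callable predictor (a stand-in for the conventional model)."""
            terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
            A = np.column_stack([lat**i * lon**j for i, j in terms])
            coeff, *_ = np.linalg.lstsq(A, N_geoid, rcond=None)
            def predict(lat_q, lon_q):
                Aq = np.column_stack([lat_q**i * lon_q**j for i, j in terms])
                return Aq @ coeff
            return predict

        # Synthetic example: geoid heights at a handful of benchmark points.
        rng = np.random.default_rng(1)
        lat, lon = rng.uniform(40, 42, 50), rng.uniform(28, 30, 50)
        N_geoid = 36.0 + 0.3 * (lat - 41) - 0.2 * (lon - 29) + rng.normal(0, 0.02, 50)
        predict = fit_geoid_polynomial(lat, lon, N_geoid)
        print(predict(np.array([41.0]), np.array([29.0])))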

  3. The research on construction and application of machining process knowledge base

    NASA Astrophysics Data System (ADS)

    Zhao, Tan; Qiao, Lihong; Qie, Yifan; Guo, Kai

    2018-03-01

    In order to apply knowledge to machining process design, and from the perspective of knowledge use in computer-aided process planning (CAPP), a hierarchical knowledge classification structure is established according to the characteristics of the mechanical engineering field. Machining process knowledge is expressed in structured form by means of production rules and object-oriented methods. Three kinds of knowledge base models are constructed according to this representation of machining process knowledge. In this paper, the definition and classification of machining process knowledge, the knowledge model, and the application flow of knowledge-based process design are given, and the main steps of the machine-tool design decision are carried out as an application using the knowledge base.
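
    A production-rule representation of machining process knowledge of the kind described above might look like the following sketch; the rule contents (feature type, tolerance, suggested operation chain) are invented for illustration and do not reproduce the paper's knowledge base models.

        # Hypothetical production rules: IF conditions on a part feature THEN a process decision.
        RULES = [
            {"if": lambda f: f["type"] == "hole" and f["tolerance_mm"] < 0.02,
             "then": "drill + ream + hone"},
            {"if": lambda f: f["type"] == "hole",
             "then": "drill + ream"},
            {"if": lambda f: f["type"] == "plane" and f["roughness_um"] <= 0.8,
             "then": "rough mill + finish mill + grind"},
        ]

        def decide_process(feature: dict) -> str:
            """Forward-chain over the rule base and return the first matching decision."""
            for rule in RULES:
                if rule["if"](feature):
                    return rule["then"]
            return "no rule fired: refer to process engineer"

        print(decide_process({"type": "hole", "tolerance_mm": 0.01}))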

  4. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
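
    The decomposition and averaging used above follow the standard triple decomposition of a turbulent field; the notation below is one common convention, given only for orientation and not necessarily the author's:

        u(\mathbf{x},t) = \bar{u}(\mathbf{x}) + \tilde{u}(\mathbf{x},t) + u'(\mathbf{x},t),
        \qquad \langle u \rangle = \bar{u} + \tilde{u},
        \qquad \overline{\tilde{u}} = 0, \quad \langle u' \rangle = 0,

    so that the phase average retains the mean flow plus the large-scale wave-like disturbance, while the small-scale random turbulence averages out.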

  5. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  6. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  7. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  8. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  9. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  10. Geocoded data structures and their applications to Earth science investigations

    NASA Technical Reports Server (NTRS)

    Goldberg, M.

    1984-01-01

    A geocoded data structure is a means for digitally representing a geographically referenced map or image. The characteristics of representative cellular, linked, and hybrid geocoded data structures are reviewed. The data processing requirements of Earth science projects at the Goddard Space Flight Center and the basic tools of geographic data processing are described. Specific ways that new geocoded data structures can be used to adapt these tools to scientists' needs are presented. These include: expanding analysis and modeling capabilities; simplifying the merging of data sets from diverse sources; and saving computer storage space.

  11. Interdisciplinary research on the application of ERTS-1 data to the regional land use planning process

    NASA Technical Reports Server (NTRS)

    Clapp, J. L. (Principal Investigator); Kiefer, R. W.; Mccarthy, M. M.; Niemann, B. J., Jr.

    1972-01-01

    The author has identified the following significant results. Although the degree to which ERTS-1 imagery can satisfy regional land use planning data needs is not yet known, it appears to offer a means by which the data acquisition process can be immeasurably improved. The initial experiences of an interdisciplinary group attempting to formulate ways of analyzing the effectiveness of ERTS-1 imagery as a base for environmental monitoring and the resolution of regional land allocation problems are documented. Application of the imagery to the regional planning process draws on representative geographical regions within the state of Wisconsin. Because of the need to describe and depict regional resource complexity in an interrelatable state, certain resources within these geographical regions have been inventoried and stored in a two-dimensional computer-based map form. Computer-oriented processes were developed to provide for the economical storage, analysis, and spatial display of natural and cultural data for regional land use planning purposes. The authors are optimistic that the imagery will provide relevant data for land use decision making at regional levels.

  12. Information collection and processing of dam distortion in digital reservoir system

    NASA Astrophysics Data System (ADS)

    Liang, Yong; Zhang, Chengming; Li, Yanling; Wu, Qiulan; Ge, Pingju

    2007-06-01

    The "digital reservoir" is usually understood as describing the whole reservoir with digital information technology to make it serve the human existence and development furthest. Strictly speaking, the "digital reservoir" is referred to describing vast information of the reservoir in different dimension and space-time by RS, GPS, GIS, telemetry, remote-control and virtual reality technology based on computer, multi-media, large-scale memory and wide-band networks technology for the human existence, development and daily work, life and entertainment. The core of "digital reservoir" is to realize the intelligence and visibility of vast information of the reservoir through computers and networks. The dam is main building of reservoir, whose safety concerns reservoir and people's safety. Safety monitoring is important way guaranteeing the dam's safety, which controls the dam's running through collecting the dam's information concerned and developing trend. Safety monitoring of the dam is the process from collection and processing of initial safety information to forming safety concept in the brain. The paper mainly researches information collection and processing of the dam by digital means.

  13. Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field

    NASA Astrophysics Data System (ADS)

    Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi

    2018-02-01

    We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.
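
    For orientation, a commonly assumed form of a p-body mean-field model with site-dependent (inhomogeneous) transverse-field driving is written below; the normalisation and annealing parametrisation are a textbook convention, not a quotation of the paper:

        H(s) = -s\,N\left(\frac{1}{N}\sum_{i=1}^{N}\sigma_i^{z}\right)^{p}
               - (1-s)\sum_{i=1}^{N}\Gamma_i\,\sigma_i^{x},

    where inhomogeneous driving means that the transverse-field amplitudes \Gamma_i are switched off site by site during the anneal rather than uniformly; according to the abstract, this is what removes the first-order transition and its exponentially small gap.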

  14. Oahu wind power survey, first report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramage, C.S.; Daniels, P.A.; Schroeder, T.A.

    1977-05-01

    A wind power survey has been conducted on Oahu since summer 1975. At seventeen potentially windy sites, calibrated anemometers and wind vanes were installed and recordings made on computer-processable magnetic tape cassettes. From monthly mean wind speeds--normalized by comparison with Honolulu Airport mean winds--it was concluded that about 23 mi/hr represents the highest average annual wind speed likely to be attained on Oahu and that the Koko Head and Kahuku areas give the most promise for wind energy generation. The diurnal variation of the wind in these areas roughly parallels the diurnal variation of electric power demand.

  15. Probabilistic Estimates of Global Mean Sea Level and its Underlying Processes

    NASA Astrophysics Data System (ADS)

    Hay, C.; Morrow, E.; Kopp, R. E.; Mitrovica, J. X.

    2015-12-01

    Local sea level can vary significantly from the global mean value due to a suite of processes that includes ongoing sea-level changes due to the last ice age, land water storage, ocean circulation changes, and non-uniform sea-level changes that arise when modern-day land ice rapidly melts. Understanding these sources of spatial and temporal variability is critical to estimating past and present sea-level change and projecting future sea-level rise. Using two probabilistic techniques, a multi-model Kalman smoother and Gaussian process regression, we have reanalyzed 20th century tide gauge observations to produce a new estimate of global mean sea level (GMSL). Our methods allow us to extract global information from the sparse tide gauge field by taking advantage of the physics-based and model-derived geometry of the contributing processes. Both methods provide constraints on the sea-level contribution of glacial isostatic adjustment (GIA). The Kalman smoother tests multiple discrete GIA models, probabilistically computing the most likely GIA model given the observations, while the Gaussian process regression characterizes the prior covariance structure of a suite of GIA models and then uses this structure to estimate the posterior distribution of local rates of GIA-induced sea-level change. We present the two methodologies, the model-derived geometries of the underlying processes, and our new probabilistic estimates of GMSL and GIA.
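
    Gaussian process regression of the kind used for the GIA component can be sketched compactly; the squared-exponential kernel and the synthetic, gappy tide-gauge-like record below are illustrative assumptions, not the model-derived covariance structure or data of the study.

        import numpy as np

        def gp_posterior(x_train, y_train, x_test, length=10.0, amp=1.0, noise=0.3):
            """Posterior mean/std of a GP with a squared-exponential kernel."""
            def k(a, b):
                return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
            K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
            Ks, Kss = k(x_train, x_test), k(x_test, x_test)
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
            mean = Ks.T @ alpha
            v = np.linalg.solve(L, Ks)
            std = np.sqrt(np.maximum(np.diag(Kss - v.T @ v), 0.0))
            return mean, std

        # Synthetic annual sea-level anomaly record with gaps, regressed onto a fine grid.
        rng = np.random.default_rng(2)
        t_obs = np.sort(rng.choice(np.arange(1900, 2000), 40, replace=False)).astype(float)
        y_obs = 0.0017 * (t_obs - 1900) + 0.01 * np.sin(t_obs / 8.0) + rng.normal(0, 0.005, t_obs.size)
        t_grid = np.linspace(1900, 2000, 201)
        mean, std = gp_posterior(t_obs, y_obs, t_grid, length=15.0, amp=0.05, noise=0.005)
        print(mean[:5], std[:5])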

  16. Computational prediction of the refinement of oxide agglomerates in a physical conditioning process for molten aluminium alloy

    NASA Astrophysics Data System (ADS)

    Tong, M.; Jagarlapudi, S. C.; Patel, J. B.; Stone, I. C.; Fan, Z.; Browne, D. J.

    2015-06-01

    Physically conditioning molten scrap aluminium alloys using high shear processing (HSP) was recently found to be a promising technology for purification of contaminated alloys. HSP refines the solid oxide agglomerates in molten alloys, so that they can act as sites for the nucleation of Fe-rich intermetallic phases which can subsequently be removed by the downstream de-drossing process. In this paper, a computational model for predicting the evolution of the size of oxide clusters during HSP is presented. We used CFD to predict the macroscopic flow features of the melt, and the resultant field predictions of temperature and melt shear rate were transferred to a population balance model (PBM) as its key inputs. The PBM is a macroscopic model that formulates the microscopic agglomeration and breakage of a population of a dispersed phase. Although it has been widely used to study conventional deoxidation of liquid metal, this is the first time that PBM has been used to simulate the melt conditioning process within a rotor/stator HSP device. We employed a method which discretizes the continuous profile of size of the dispersed phase into a collection of discrete size bins, to solve the governing population balance equation for the size of agglomerates. A finite volume method was used to solve the continuity equation, the energy equation and the momentum equation. The overall computation was implemented mainly using the FLUENT module of ANSYS. The simulations showed that there is a relatively high melt shear rate between the stator and the sweeping tips of the rotor blades. This high shear rate leads directly to significant fragmentation of the initially large oxide aggregates. Because the process of agglomeration is significantly slower than the breakage processes at the beginning of HSP, the mean size of oxide clusters decreases very rapidly. As the process of agglomeration gradually balances the process of breakage, the mean size of oxide clusters converges to a steady value. The model enables formulation of the quantitative relationship between the macroscopic flow features of the liquid metal and the change of size of the dispersed oxide clusters during HSP. It predicted the variation in size of the dispersed phase with operational parameters (including the geometry and, particularly, the speed of the rotor), which is of direct use to experimentalists optimising the design of the HSP device and its implementation.
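
    The discretized population balance can be illustrated with a minimal sketch in which oxide cluster sizes are binned in units of primary particles and only a constant-kernel agglomeration term and a binary breakage term are retained; the rate coefficients are invented placeholders, and the real model couples these rates to the CFD fields of temperature and melt shear rate.

        import numpy as np

        def pbm_step(n, dt, k_agg=1e-3, k_brk=5e-2):
            """One explicit Euler step of a discrete population balance:
            n[i] = number density of clusters made of (i+1) primary oxide particles."""
            M = len(n)
            dn = np.zeros(M)
            # Agglomeration: clusters of sizes a and b merge into one cluster of size a+b
            # (products larger than the largest bin are discarded in this toy model).
            for i in range(M):
                for j in range(M):
                    rate = k_agg * n[i] * n[j]
                    dn[i] -= rate
                    if i + j + 1 < M:              # product holds (i+1)+(j+1) primaries
                        dn[i + j + 1] += 0.5 * rate
            # Shear-driven binary breakage into two (nearly) equal halves.
            for i in range(1, M):
                size = i + 1
                rate = k_brk * n[i]
                dn[i] -= rate
                if size % 2 == 0:
                    dn[size // 2 - 1] += 2.0 * rate
                else:
                    dn[size // 2 - 1] += rate      # uneven split: one smaller fragment...
                    dn[size // 2] += rate          # ...and one larger fragment
            return n + dt * dn

        n = np.zeros(32); n[-1] = 1.0              # start from large agglomerates only
        for _ in range(500):
            n = pbm_step(n, dt=0.1)
        sizes = np.arange(1, 33)
        print("mean cluster size:", (sizes * n).sum() / n.sum())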

  17. A Computer-Aided Diagnosis System for Measuring Carotid Artery Intima-Media Thickness (IMT) Using Quaternion Vectors.

    PubMed

    Kutbay, Uğurhan; Hardalaç, Fırat; Akbulut, Mehmet; Akaslan, Ünsal; Serhatlıoğlu, Selami

    2016-06-01

    This study aims to investigate adjustable distant fuzzy c-means segmentation on carotid Doppler images, as well as quaternion-based convolution filters and saliency mapping procedures. We developed imaging software that simplifies the measurement of carotid artery intima-media thickness (IMT) on saliency mapping images. Additionally, specialists evaluated the present images and compared them with the saliency mapping images. In the present research, we conducted imaging studies of 25 carotid Doppler images obtained by the Department of Cardiology at Fırat University. After implementing fuzzy c-means segmentation and quaternion-based convolution on all Doppler images, we obtained a model that can be analyzed easily by doctors using a bottom-up saliency model. These methods were applied to 25 carotid Doppler images and then interpreted by specialists. In the present study, we used color-filtering methods to obtain carotid color images. Saliency mapping was performed on the obtained images, and the carotid artery IMT was detected and interpreted on the images obtained from both methods as well as the raw images, as shown in the Results. These results were also evaluated using the mean square error (MSE) against the raw IMT images, and the method giving the best performance is Quaternion Based Saliency Mapping (QBSM). MSEs of 0.0014 and 0.000191 mm(2) were obtained for artery lumen diameters and plaque diameters in carotid arteries, respectively. We found that computer-based image processing methods used on carotid Doppler images could aid doctors in their decision-making process. We developed software that could ease the process of measuring carotid IMT for cardiologists and help them to evaluate their findings.
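
    The fuzzy c-means stage in the pipeline above follows the standard alternating update of memberships and centroids; the sketch below operates on a one-dimensional grey-level feature and is a generic implementation, not the authors' adjustable-distance variant or their quaternion filtering.

        import numpy as np

        def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
            """Standard FCM on feature vectors X with shape (n_samples, n_features)."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))           # u_ik proportional to d_ik^(-2/(m-1))
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        # Toy "image": intensities clustered into lumen / intima-media / adventitia-like bands.
        rng = np.random.default_rng(1)
        pixels = np.concatenate([rng.normal(mu, 5, 300) for mu in (30, 120, 200)])
        centers, U = fuzzy_c_means(pixels.reshape(-1, 1), c=3)
        print(np.sort(centers.ravel()))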

  18. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
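
    The conservatism being described can be seen in a two-variable example: adding tolerance limits term by term through successive computations overstates the combined limit relative to the error-propagation (root-sum-square) law for independent variations (generic notation, for illustration only):

        (\mu_1 + k\sigma_1) + (\mu_2 + k\sigma_2)
          = (\mu_1 + \mu_2) + k(\sigma_1 + \sigma_2)
          \;\ge\; (\mu_1 + \mu_2) + k\sqrt{\sigma_1^{2} + \sigma_2^{2}},

    so each additional applied-stress computation adds a positive, serially cumulative margin, which is the excess conservatism that the reliability format avoids.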

  19. Evaluating young children's cognitive capacities through computer versus hand drawings.

    PubMed

    Olsen, J

    1992-09-01

    Young normal and handicapped children, aged 3 to 6 years, were taught to draw a scene of a house, garden and sky with a computer drawing program that uses icons and is operated by a mouse. The drawings were rated by a team of experts on a 7-category scale. The children's computer- and hand-produced drawings were compared with one another and with results on cognitive, visual and fine motor tests. The computer drawing program made it possible for the children to accurately draw closed shapes, to get instant feedback on the adequacy of the drawing, and to make corrections with ease. It was hypothesized that these features would compensate for the young children's limitations in cognitive skills such as memory, concentration, planning and accomplishment, as well as for their weak motor skills. In addition, it was hypothesized that traditional cognitive ratings of hand drawings may underestimate young children's intellectual ability, because drawing by hand demands motor, memory, concentration and planning skills that are more developed than those young children actually possess. To test the latter hypothesis, the children completed a training program in using a computer to make drawings. The results show that cognitive processes such as planning, analysis and synthesis can be investigated by means of a computer drawing program in a way not possible using traditional pencil and paper drawings. It can be said that the method used here made it possible to measure cognitive abilities "under the floor" of what is ordinarily possible by means of traditional hand drawings.

  20. Time-Efficiency Analysis Comparing Digital and Conventional Workflows for Implant Crowns: A Prospective Clinical Crossover Trial.

    PubMed

    Joda, Tim; Brägger, Urs

    2015-01-01

    To compare time-efficiency in the production of implant crowns using a digital workflow versus the conventional pathway. This prospective clinical study used a crossover design that included 20 study participants receiving single-tooth replacements in posterior sites. Each patient received a customized titanium abutment plus a computer-aided design/computer-assisted manufacture (CAD/CAM) zirconia suprastructure (for those in the test group, using digital workflow) and a standardized titanium abutment plus a porcelain-fused-to-metal crown (for those in the control group, using a conventional pathway). The start of the implant prosthetic treatment was established as the baseline. Time-efficiency analysis was defined as the primary outcome, and was measured for every single clinical and laboratory work step in minutes. Statistical analysis was calculated with the Wilcoxon rank sum test. All crowns could be provided within two clinical appointments, independent of the manufacturing process. The mean total production time, as the sum of clinical plus laboratory work steps, was significantly different. The mean ± standard deviation (SD) time was 185.4 ± 17.9 minutes for the digital workflow process and 223.0 ± 26.2 minutes for the conventional pathway (P = .0001). Therefore, digital processing for overall treatment was 16% faster. Detailed analysis for the clinical treatment revealed a significantly reduced mean ± SD chair time of 27.3 ± 3.4 minutes for the test group compared with 33.2 ± 4.9 minutes for the control group (P = .0001). Similar results were found for the mean laboratory work time, with a significant decrease of 158.1 ± 17.2 minutes for the test group vs 189.8 ± 25.3 minutes for the control group (P = .0001). Only a few studies have investigated efficiency parameters of digital workflows compared with conventional pathways in implant dental medicine. This investigation shows that the digital workflow seems to be more time-efficient than the established conventional production pathway for fixed implant-supported crowns. Both clinical chair time and laboratory manufacturing steps could be effectively shortened with the digital process of intraoral scanning plus CAD/CAM technology.

  1. The periodic structure of the natural record, and nonlinear dynamics.

    USGS Publications Warehouse

    Shaw, H.R.

    1987-01-01

    This paper addresses how nonlinear dynamics can contribute to interpretations of the geologic record and evolutionary processes. Background is given to explain why nonlinear concepts are important. A resume of personal research is offered to illustrate why I think nonlinear processes fit with observations on geological and cosmological time series data. The fabric of universal periodicity arrays generated by nonlinear processes is illustrated by means of a simple computer model. I conclude with implications concerning patterns of evolution, stratigraphic boundary events, and close correlations of major geologically instantaneous events (such as impacts or massive volcanic episodes) with any sharply defined boundary in the geologic column. - from Author

  2. [Topographological-anatomic changes in the structure of temporo-mandibular joint in case of fracture of the mandible condylar process at cervical level].

    PubMed

    Volkov, S I; Bazhenov, D V; Semkin, V A

    2011-01-01

    Pathological changes in the soft tissues surrounding the fracture site, as well as in the structural elements of the temporomandibular joint, always occurred in fractures of the condylar process with displacement at the cervical level of the mandible. Changes were also seen in the joint on the opposite, normal side. Modelling a condylar process fracture at the mandibular cervical level by means of a three-dimensional computer model of the temporomandibular joint contributed to a proper understanding of how this pathology arises, as well as to the prediction and elimination of disorders arising in the tissues adjacent to the fracture site.

  3. A comparison of the wavelet and short-time fourier transforms for Doppler spectral analysis.

    PubMed

    Zhang, Yufeng; Guo, Zhenyu; Wang, Weilian; He, Side; Lee, Ting; Loew, Murray

    2003-09-01

    Doppler spectrum analysis provides a non-invasive means to measure blood flow velocity and to diagnose arterial occlusive disease. The time-frequency representation of the Doppler blood flow signal is normally computed by using the short-time Fourier transform (STFT). This transform requires stationarity of the signal during a finite time interval, and thus imposes some constraints on the representation estimate. In addition, the STFT has a fixed time-frequency window, making it inaccurate to analyze signals having relatively wide bandwidths that change rapidly with time. In the present study, wavelet transform (WT), having a flexible time-frequency window, was used to investigate its advantages and limitations for the analysis of the Doppler blood flow signal. Representations computed using the WT with a modified Morlet wavelet were investigated and compared with the theoretical representation and those computed using the STFT with a Gaussian window. The time and frequency resolutions of these two approaches were compared. Three indices, the normalized root-mean-squared errors of the minimum, the maximum and the mean frequency waveforms, were used to evaluate the performance of the WT. Results showed that the WT can not only be used as an alternative signal processing tool to the STFT for Doppler blood flow signals, but can also generate a time-frequency representation with better resolution than the STFT. In addition, the WT method can provide both satisfactory mean frequencies and maximum frequencies. This technique is expected to be useful for the analysis of Doppler blood flow signals to quantify arterial stenoses.
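
    The fixed-versus-flexible time-frequency window contrast described above can be reproduced with a small sketch: a Gaussian-windowed STFT next to a Morlet-style wavelet scalogram computed by direct convolution, in which the analysis window length scales with the analysed frequency. The chirp-like test signal and all parameter values are illustrative, not the Doppler data or the modified Morlet wavelet of the study.

        import numpy as np
        from scipy.signal import stft

        fs = 2000.0
        t = np.arange(0, 1.0, 1 / fs)
        x = np.sin(2 * np.pi * (100 * t + 150 * t**2))     # test signal with rising frequency

        # Fixed-window STFT (Gaussian window), as conventionally used for Doppler spectra.
        f_stft, t_stft, Z = stft(x, fs=fs, window=("gaussian", 32), nperseg=256, noverlap=224)

        # Morlet-style continuous wavelet transform: window length scales with frequency.
        def morlet_cwt(x, fs, freqs, w0=6.0):
            out = np.empty((len(freqs), len(x)), dtype=complex)
            for k, f in enumerate(freqs):
                s = w0 * fs / (2 * np.pi * f)                       # scale for centre frequency f
                n = np.arange(-int(4 * s), int(4 * s) + 1)
                wavelet = np.exp(1j * w0 * n / s) * np.exp(-0.5 * (n / s) ** 2) / np.sqrt(s)
                out[k] = np.convolve(x, np.conj(wavelet[::-1]), mode="same")
            return out

        freqs = np.linspace(20, 500, 60)
        W = morlet_cwt(x, fs, freqs)
        print(np.abs(Z).shape, np.abs(W).shape)   # spectrogram vs scalogram magnitudes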

  4. Efficient Mean Field Variational Algorithm for Data Assimilation (Invited)

    NASA Astrophysics Data System (ADS)

    Vrettas, M. D.; Cornford, D.; Opper, M.

    2013-12-01

    Data assimilation algorithms combine available observations of physical systems with the assumed model dynamics in a systematic manner, to produce better estimates of initial conditions for prediction. Broadly they can be categorized in three main approaches: (a) sequential algorithms, (b) sampling methods and (c) variational algorithms which transform the density estimation problem to an optimization problem. However, given finite computational resources, only a handful of ensemble Kalman filters and 4DVar algorithms have been applied operationally to very high dimensional geophysical applications, such as weather forecasting. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the 'optimal' posterior distribution over the continuous time states, within a family of non-stationary Gaussian processes. Our initial work on variational Bayesian approaches to data assimilation, unlike the well-known 4DVar method which seeks only the most probable solution, computes the best time varying Gaussian process approximation to the posterior smoothing distribution for dynamical systems that can be represented by stochastic differential equations. This approach was based on minimising the Kullback-Leibler divergence, over paths, between the true posterior and our Gaussian process approximation. Whilst the observations were informative enough to keep the posterior smoothing density close to Gaussian the algorithm proved very effective on low dimensional systems (e.g. O(10)D). However for higher dimensional systems, the high computational demands make the algorithm prohibitively expensive. To overcome the difficulties presented in the original framework and make our approach more efficient in higher dimensional systems we have been developing a new mean field version of the algorithm which treats the state variables at any given time as being independent in the posterior approximation, while still accounting for their relationships in the mean solution arising from the original system dynamics. Here we present this new mean field approach, illustrating its performance on a range of benchmark data assimilation problems whose dimensionality varies from O(10) to O(10^3)D. We emphasise that the variational Bayesian approach we adopt, unlike other variational approaches, provides a natural bound on the marginal likelihood of the observations given the model parameters which also allows for inference of (hyper-) parameters such as observational errors, parameters in the dynamical model and model error representation. We also stress that since our approach is intrinsically parallel it can be implemented very efficiently to address very long data assimilation time windows. Moreover, like most traditional variational approaches our Bayesian variational method has the benefit of being posed as an optimisation problem therefore its complexity can be tuned to the available computational resources. We finish with a sketch of possible future directions.

  5. Computer simulation of a space SAR using a range-sequential processor for soil moisture mapping

    NASA Technical Reports Server (NTRS)

    Fujita, M.; Ulaby, F. (Principal Investigator)

    1982-01-01

    The ability of a spaceborne synthetic aperture radar (SAR) to detect soil moisture was evaluated by means of a computer simulation technique. The computer simulation package includes coherent processing of the SAR data using a range-sequential processor, which can be set up through hardware implementations, thereby reducing the amount of telemetry involved. With such a processing approach, it is possible to monitor the earth's surface on a continuous basis, since data storage requirements can be easily met through the use of currently available technology. The development of the simulation package is described, followed by an examination of the application of the technique to actual environments. The results indicate that in estimating soil moisture content with a four-look processor, the difference between the assumed and estimated values of soil moisture is within + or - 20% of field capacity for 62% of the pixels for agricultural terrain and for 53% of the pixels for hilly terrain. The estimation accuracy for soil moisture may be improved by reducing the effect of fading through non-coherent averaging.

  6. Biophysically Inspired Rational Design of Structured Chimeric Substrates for DNAzyme Cascade Engineering

    PubMed Central

    Lakin, Matthew R.; Brown, Carl W.; Horwitz, Eli K.; Fanning, M. Leigh; West, Hannah E.; Stefanovic, Darko; Graves, Steven W.

    2014-01-01

    The development of large-scale molecular computational networks is a promising approach to implementing logical decision making at the nanoscale, analogous to cellular signaling and regulatory cascades. DNA strands with catalytic activity (DNAzymes) are one means of systematically constructing molecular computation networks with inherent signal amplification. Linking multiple DNAzymes into a computational circuit requires the design of substrate molecules that allow a signal to be passed from one DNAzyme to another through programmed biochemical interactions. In this paper, we chronicle an iterative design process guided by biophysical and kinetic constraints on the desired reaction pathways and use the resulting substrate design to implement heterogeneous DNAzyme signaling cascades. A key aspect of our design process is the use of secondary structure in the substrate molecule to sequester a downstream effector sequence prior to cleavage by an upstream DNAzyme. Our goal was to develop a concrete substrate molecule design to achieve efficient signal propagation with maximal activation and minimal leakage. We have previously employed the resulting design to develop high-performance DNAzyme-based signaling systems with applications in pathogen detection and autonomous theranostics. PMID:25347066

  7. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance if applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduced a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, optimizing memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
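
    The K-means-based data partition described above (grouping landmarks into as many spatially compact chunks as there are cores) can be sketched as follows; the plain k-means, the thread pool and the placeholder per-chunk kernel are stand-ins for the Cell/B.E. implementation, not a reproduction of it.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def kmeans(points, k, iters=50, seed=0):
            """Plain k-means, used here only to partition landmarks into per-core work units."""
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
                centers = np.array([points[labels == c].mean(0) if np.any(labels == c)
                                    else centers[c] for c in range(k)])
            return labels

        def match_chunk(chunk):
            # Placeholder for the per-core point-matching kernel (e.g. nearest-landmark search).
            return len(chunk)

        landmarks = np.random.default_rng(4).random((10000, 3))
        n_cores = 8
        labels = kmeans(landmarks, n_cores)
        chunks = [landmarks[labels == c] for c in range(n_cores)]
        with ThreadPoolExecutor(max_workers=n_cores) as pool:
            print(list(pool.map(match_chunk, chunks)))    # per-core workloads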

  8. Computational analysis of the roles of biochemical reactions in anomalous diffusion dynamics

    NASA Astrophysics Data System (ADS)

    Naruemon, Rueangkham; Charin, Modchang

    2016-04-01

    Most biochemical processes in cells are usually modeled by reaction-diffusion (RD) equations. In these RD models, the diffusive process is assumed to be Gaussian. However, a growing number of studies have noted that intracellular diffusion is anomalous at some or all times, which may result from a crowded environment and chemical kinetics. This work aims to computationally study the effects of chemical reactions on the diffusive dynamics of RD systems by using both stochastic and deterministic algorithms. A numerical method to estimate the mean-square displacement (MSD) from a deterministic algorithm is also investigated. Our computational results show that anomalous diffusion can be solely due to chemical reactions. The chemical reactions alone can cause anomalous sub-diffusion in the RD system at some or all times. The time-dependent anomalous diffusion exponent is found to depend on many parameters, including chemical reaction rates, reaction orders, and chemical concentrations. Project supported by the Thailand Research Fund and Mahidol University (Grant No. TRG5880157), the Thailand Center of Excellence in Physics (ThEP), CHE, Thailand, and the Development Promotion of Science and Technology.
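
    Estimating the time-dependent MSD and the anomalous exponent from simulated trajectories can be sketched as below; the trajectories here are plain random walks, so the fitted exponent comes out near 1, whereas reaction-limited sub-diffusion of the kind reported in the paper would yield an exponent below 1.

        import numpy as np

        def mean_square_displacement(traj):
            """MSD(lag) averaged over particles and time origins; traj has shape (T, n_particles)."""
            T = traj.shape[0]
            lags = np.arange(1, T // 2)
            msd = np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2) for lag in lags])
            return lags, msd

        # Simulated 1-D Brownian trajectories (stand-in for particles from an RD simulation).
        rng = np.random.default_rng(5)
        steps = rng.normal(0.0, 1.0, size=(1000, 200))
        traj = np.cumsum(steps, axis=0)
        lags, msd = mean_square_displacement(traj)

        # MSD ~ t^alpha: the anomalous exponent is the slope on log-log axes.
        alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
        print(f"estimated anomalous exponent alpha ~ {alpha:.2f}")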

  9. Adjudicating between face-coding models with individual-face fMRI responses

    PubMed Central

    Kriegeskorte, Nikolaus

    2017-01-01

    The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging. PMID:28746335

  10. Lessons from a doctoral thesis.

    PubMed

    Peiris, A N; Mueller, R A; Sheridan, D P

    1990-01-01

    The production of a doctoral thesis is a time-consuming affair that until recently was done in conjunction with professional publishing services. Advances in computer technology have made many sophisticated desktop publishing techniques available to the microcomputer user. We describe the computer method used, the problems encountered, and the solutions improvised in the production of a doctoral thesis by computer. The Apple Macintosh was selected for its ease of use and intrinsic graphics capabilities. A scanner was used to incorporate text from published papers into a word processing program. The body of the text was updated and supplemented with new sections. Scanned graphics from the published papers were less suitable for publication, and the original data were replotted and modified with a graphics-drawing program. Graphics were imported and incorporated in the text. Final hard copy was produced by a laser printer and bound with both conventional and rapid new binding techniques. Microcomputer-based desktop processing methods provide a rapid and cost-effective means of communicating the written word. We anticipate that this evolving technology will have increased use by physicians in both the private and academic sectors.

  11. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    PubMed Central

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but they suffer from computational performance issues. Although several new methods based on high-performance hardware and frameworks have been proposed, the issue still exists. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedup over the implementation of the Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively. PMID:29051701

  12. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    PubMed

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but they suffer from computational performance issues. Although several new methods based on high-performance hardware and frameworks have been proposed, the issue still exists. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedup over the implementation of the Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively.
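
    For reference, the UPGMA step itself is short when written serially; the sketch below builds a tree from a small distance matrix by repeatedly merging the closest pair of clusters and size-weight-averaging their distances, and is a generic implementation rather than the paper's NCCL-based multi-GPU code.

        def upgma(dist_matrix, names):
            """Serial UPGMA: repeatedly merge the closest pair of clusters, averaging
            distances weighted by cluster sizes. Returns a nested-tuple tree."""
            clusters = {i: (names[i], 1) for i in range(len(names))}   # id -> (subtree, size)
            D = {(i, j): dist_matrix[i][j] for i in clusters for j in clusters if i < j}
            next_id = len(names)
            while len(clusters) > 1:
                (a, b), d_ab = min(D.items(), key=lambda kv: kv[1])    # closest pair
                (ta, na), (tb, nb) = clusters.pop(a), clusters.pop(b)
                for c in clusters:                                     # distances to the merger
                    d_ac = D.pop((min(a, c), max(a, c)))
                    d_bc = D.pop((min(b, c), max(b, c)))
                    D[(min(next_id, c), max(next_id, c))] = (na * d_ac + nb * d_bc) / (na + nb)
                del D[(a, b)]
                clusters[next_id] = ((ta, tb, round(d_ab / 2, 3)), na + nb)  # node with height
                next_id += 1
            return next(iter(clusters.values()))[0]

        dist = [[0, 17, 21, 31],
                [17, 0, 30, 34],
                [21, 30, 0, 28],
                [31, 34, 28, 0]]
        print(upgma(dist, ["A", "B", "C", "D"]))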

  13. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo, et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O(kn) time, where k denotes the current number of centers. Traditional techniques for accelerating nearest neighbor searching involve storing the k centers in a data structure. However, because of the iterative nature of the algorithm, this data structure would need to be rebuilt with each new iteration. Our approach is to store the data points in a kd-tree data structure. The assignment of points to nearest neighbors is carried out by a filtering process, which successively eliminates centers that cannot possibly be the nearest neighbor for a given region of space. This algorithm is significantly faster, because large groups of data points can be assigned to their nearest center in a single operation. Preliminary results on a number of real Landsat datasets show that our revised ISOCLUS-like scheme runs about twice as fast.
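
    The centre-assignment step that dominates each iteration can be illustrated with a simplified sketch. Note the simplification: the filtering algorithm of Kanungo et al. builds one kd-tree over the data and prunes candidate centers against the bounding box of each tree node, whereas the sketch below merely queries a kd-tree rebuilt over the current centers each iteration; it conveys the flavour of tree-accelerated assignment but is not the ISOCLUS adaptation itself.

        import numpy as np
        from scipy.spatial import cKDTree

        def kmeans_tree_assign(points, k, iters=20, seed=0):
            """k-means in which the nearest-center assignment uses a kd-tree query
            (a simplification of the filtering approach described in the abstract)."""
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                _, labels = cKDTree(centers).query(points)     # nearest center per point
                for c in range(k):
                    members = points[labels == c]
                    if len(members):
                        centers[c] = members.mean(axis=0)
            return centers, labels

        pixels = np.random.default_rng(6).random((100000, 4)).astype(np.float32)  # 4-band pixels
        centers, labels = kmeans_tree_assign(pixels, k=10)
        print(np.bincount(labels, minlength=10))               # cluster populations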

  14. Current Status on the use of Parallel Computing in Turbulent Reacting Flow Computations Involving Sprays, Monte Carlo PDF and Unstructured Grids. Chapter 4

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in terms of modeling and numerical accuracy considerations, is also dictated by the available computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of the coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. The gas-turbine combustor flows are often characterized by a complex interaction between various physical processes associated with the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, radiative heat transfer associated with highly absorbing and radiating species, among others. The rate controlling processes often interact with each other at various disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid phase evaporation in many practical combustion devices.

  15. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks in the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.

  16. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks in the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392

  17. [Computerized system validation of clinical researches].

    PubMed

    Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel

    2015-11-01

    Validation is a documented process that provides a high degree of assurance that the computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life. The validation process begins with the system proposal/requirements definition, and continues through application and maintenance until system retirement and retention of the e-records based on regulatory rules. The objective is to clearly demonstrate that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies according to the GCP standard, ensuring that the product meets its pre-determined attributes of specification, quality, safety and traceability. This paper describes how to perform the validation process and determine the relevant stakeholders within an organization in the light of validation SOPs. Although specific accountability in the implementation of the validation process might be outsourced, the ultimate responsibility for the CSV remains on the shoulders of the business process owner-sponsor. In order to show that compliance of the system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. The quality of the system validation should be controlled using both QC and QA means.

  18. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    NASA Astrophysics Data System (ADS)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

    Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows a minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities facing the processing of pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked during sequences, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe-detection-association process was proposed and evaluated. The tracker results are used to implement an automatic tool for pedestrians data collection when crossing the street based on video processing. The variations in the instantaneous speed allowed the detection of the street crossing phases (approach, waiting, and crossing). These were addressed for the first time in the pedestrian road security analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the procedures proposed have significant potential to automate the data collection process.
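
    The comparison against manual data collection reduces to two statistics that are straightforward to reproduce; the paired arrays below are hypothetical placeholders for automatically and manually extracted measurements (for example, crossing speeds in m/s), not the study's data.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical paired measurements: automatic (video-based) vs manual annotation.
        auto = np.array([1.31, 1.12, 0.95, 1.48, 1.22, 1.05, 1.37, 0.99])
        manual = np.array([1.28, 1.15, 0.97, 1.45, 1.19, 1.08, 1.40, 1.02])

        rmse = np.sqrt(np.mean((auto - manual) ** 2))          # root mean square error
        r, p_value = pearsonr(auto, manual)                    # Pearson correlation
        print(f"RMSE = {rmse:.3f} m/s, Pearson r = {r:.3f} (p = {p_value:.4f})")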

  19. Proposed algorithm to improve job shop production scheduling using ant colony optimization method

    NASA Astrophysics Data System (ADS)

    Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari

    2017-12-01

    This paper deals with the determination of a job shop production schedule in an automated environment. In this environment, machines and the material handling system are integrated and controlled by a computer center, where schedules are created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to allow an unmanned production process for a specified time interval. We consider parts with various operation requirements. Each operation requires specific cutting tools. These parts are to be scheduled on machines of identical capability, meaning that each machine is equipped with a similar set of cutting tools and is therefore capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining lifetime of its cutting tools. We propose an algorithm based on the ant colony optimization method, implemented in MATLAB, to generate a production schedule that minimizes the total processing time of the parts (makespan). We tested the algorithm on data provided by a real industry, and the process shows a very short computation time. This contributes considerably to the flexibility and timeliness targeted in an automated environment.

  20. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
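
    The "on-line" coefficient update the review refers to is, in its simplest form, the LMS recursion w <- w + mu * e * x. The sketch below cancels a sinusoidal interference from a noisy signal using a correlated reference input; the signal, interference and parameter values are made up for illustration.

        import numpy as np

        def lms_filter(d, x, n_taps=16, mu=0.01):
            """Least-mean-square adaptive filter: d = desired (primary) input,
            x = reference input correlated with the interference to be cancelled."""
            w = np.zeros(n_taps)
            y = np.zeros(len(d))
            e = np.zeros(len(d))
            for n in range(n_taps, len(d)):
                x_vec = x[n - n_taps:n][::-1]       # most recent reference samples
                y[n] = w @ x_vec                    # filter output = interference estimate
                e[n] = d[n] - y[n]                  # error = cleaned signal
                w += mu * e[n] * x_vec              # LMS coefficient update
            return e, w

        fs = 500.0
        t = np.arange(0, 10, 1 / fs)
        signal = np.sin(2 * np.pi * 1.2 * t)                        # slow "biological" component
        interference = 0.8 * np.sin(2 * np.pi * 50 * t + 0.6)       # mains interference
        d = signal + interference + 0.05 * np.random.default_rng(7).normal(size=t.size)
        x_ref = np.sin(2 * np.pi * 50 * t)                          # reference for the interference
        cleaned, w = lms_filter(d, x_ref)
        print("residual interference power:", np.var(cleaned[2000:] - signal[2000:]))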

  1. All-memristive neuromorphic computing with level-tuned neurons

    NASA Astrophysics Data System (ADS)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-01

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.
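    The synaptic learning mechanism named above is spike-timing-dependent plasticity (STDP). The sketch below shows a generic pair-based exponential STDP weight update, not the phase-change-memristor device model used in the article; all constants are assumed for illustration.

    ```python
    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
        """Pair-based exponential STDP: potentiate when the presynaptic spike
        precedes the postsynaptic one, depress otherwise (times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:
            w += a_plus * np.exp(-dt / tau_plus)
        else:
            w -= a_minus * np.exp(dt / tau_minus)
        return float(np.clip(w, w_min, w_max))

    # A pre-before-post pairing strengthens the synapse; post-before-pre weakens it
    w = 0.5
    w = stdp_update(w, t_pre=10.0, t_post=15.0)   # potentiation
    w = stdp_update(w, t_pre=30.0, t_post=22.0)   # depression
    ```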

  2. All-memristive neuromorphic computing with level-tuned neurons.

    PubMed

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-02

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.

  3. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and more effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association, which combines fuzzy c-means clustering and the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability that a measurement originates from a target. The membership value is obtained from a fuzzy c-means clustering objective function optimized by the maximum entropy principle. To account for the effect of shared measurements, we use a correction factor to adjust the association probability matrix used to estimate the state of the target. Because this algorithm avoids confirmation matrix splitting, it addresses the high computational load of the joint PDA algorithm. The results of simulations and analyses conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure good performance when tracking multiple targets in a dense cluttered environment.
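    The maximum-entropy fuzzy c-means step described above yields Gibbs-type membership values. The sketch below computes such memberships for hypothetical 2-D measurements and predicted target positions; the notation and the temperature-like parameter beta are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def max_entropy_memberships(points, centers, beta=1.0):
        """Membership matrix from an entropy-regularized (maximum-entropy)
        fuzzy clustering objective: u_ij proportional to exp(-||x_i - c_j||^2 / beta)."""
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        logits = -d2 / beta
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        u = np.exp(logits)
        return u / u.sum(axis=1, keepdims=True)

    # Hypothetical: memberships of 2-D measurements to 3 predicted target positions
    points = np.array([[0.1, 0.2], [4.9, 5.1], [5.2, 4.8]])
    centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 10.0]])
    U = max_entropy_memberships(points, centers, beta=2.0)
    print(U.round(3))
    ```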

  4. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals under extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, cloud computing is growing tremendously. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.

  5. The Basics of Cloud Computing

    ERIC Educational Resources Information Center

    Kaestner, Rich

    2012-01-01

    Most school business officials have heard the term "cloud computing" bandied about and may have some idea of what the term means. In fact, they likely already leverage a cloud-computing solution somewhere within their district. But what does cloud computing really mean? This brief article puts a bit of definition behind the term and helps one…

  6. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  7. Effective approach to spectroscopy and spectral analysis techniques using Matlab

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Lv, Yong

    2017-08-01

    With the development of electronic information, computers, and networks, modern education technology has entered a new era, which has a great impact on the teaching process. Spectroscopy and Spectral Analysis is an elective course for Optoelectronic Information Science and Engineering. The teaching objective of this course is to master the basic concepts and principles of spectroscopy and the basic technical means of spectral analysis and testing, and then to let students apply the principles and technology of spectroscopy to study the structure and state of materials and the development of the technology. MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. A proprietary language developed by MathWorks, MATLAB allows matrix manipulations and the plotting of functions and data. Based on teaching practice, this paper summarizes the application of MATLAB to the teaching of spectroscopy, which is suitable for most current multimedia-assisted teaching in schools.

  8. Embodiment and Human Development.

    PubMed

    Marshall, Peter J

    2016-12-01

    We are recognizing increasingly that the study of cognitive, social, and emotional processes must account for their embodiment in living, acting beings. The related field of embodied cognition (EC) has coalesced around dissatisfaction with the lack of attention to the body in cognitive science. For developmental scientists, the emphasis in the literature on adult EC on the role of the body in cognition may not seem particularly novel, given that bodily action was central to Piaget's theory of cognitive development. However, as the influence of the Piagetian account waned, developmental notions of embodiment were shelved in favor of mechanical computational approaches. In this article, I argue that by reconsidering embodiment, we can address a key issue with computational accounts: how meaning is constructed by the developing person. I also suggest that the process-relational approach to developmental systems can provide a system of concepts for framing a fully embodied, integrative developmental science.

  9. Embodiment and Human Development

    PubMed Central

    Marshall, Peter J.

    2016-01-01

    We are recognizing increasingly that the study of cognitive, social, and emotional processes must account for their embodiment in living, acting beings. The related field of embodied cognition (EC) has coalesced around dissatisfaction with the lack of attention to the body in cognitive science. For developmental scientists, the emphasis in the literature on adult EC on the role of the body in cognition may not seem particularly novel, given that bodily action was central to Piaget’s theory of cognitive development. However, as the influence of the Piagetian account waned, developmental notions of embodiment were shelved in favor of mechanical computational approaches. In this article, I argue that by reconsidering embodiment, we can address a key issue with computational accounts: how meaning is constructed by the developing person. I also suggest that the process-relational approach to developmental systems can provide a system of concepts for framing a fully embodied, integrative developmental science. PMID:27833651

  10. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.

  11. A Spacecraft Electrical Characteristics Multi-Label Classification Method Based on Off-Line FCM Clustering and On-Line WPSVM

    PubMed Central

    Li, Ke; Liu, Yi; Wang, Quanxin; Wu, Yalei; Song, Shimin; Sun, Yi; Liu, Tengchong; Wang, Jun; Li, Yang; Du, Shaoyi

    2015-01-01

    This paper proposes a novel multi-label classification method for resolving spacecraft electrical characteristics problems, which involve processing of many unlabeled test data, high-dimensional features, long computing times, and a slow identification rate. Firstly, both the fuzzy c-means (FCM) offline clustering and the principal component feature extraction algorithms are applied for the feature selection process. Secondly, the approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a data capture contribution method using thresholds is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification, and shorten the computing time effectively. PMID:26544549

  12. Real-Time Mapping alert system; characteristics and capabilities

    USGS Publications Warehouse

    Torres, L.A.; Lambert, S.C.; Liebermann, T.D.

    1995-01-01

    The U.S. Geological Survey has an extensive hydrologic network that records and transmits precipitation, stage, discharge, and other water-related data on a real-time basis to an automated data processing system. Data values are recorded on electronic data collection platforms at field sampling sites. These values are transmitted by means of orbiting satellites to receiving ground stations, and by way of telecommunication lines to a U.S. Geological Survey office where they are processed on a computer system. Data that exceed predefined thresholds are identified as alert values. The current alert status at monitoring sites within a state or region is of critical importance during floods, hurricanes, and other extreme hydrologic events. This report describes the characteristics and capabilities of a series of computer programs for real-time mapping of hydrologic data. The software provides interactive graphics display and query of hydrologic information from the network in a real-time, map-based, menu-driven environment.
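    The alerting logic described above reduces to comparing each incoming value against a predefined threshold for its site and parameter. The sketch below is a minimal, hypothetical illustration of that check; the station numbers, parameter names, and thresholds are invented.

    ```python
    def flag_alerts(readings, thresholds):
        """Return the (site, parameter) pairs whose latest value exceeds its
        predefined alert threshold."""
        alerts = []
        for (site, parameter), value in readings.items():
            limit = thresholds.get((site, parameter))
            if limit is not None and value > limit:
                alerts.append((site, parameter, value, limit))
        return alerts

    # Hypothetical real-time stage values (feet) and alert thresholds
    readings = {("01646500", "stage"): 11.2, ("01638500", "stage"): 6.4}
    thresholds = {("01646500", "stage"): 10.0, ("01638500", "stage"): 9.0}
    print(flag_alerts(readings, thresholds))   # only the first site is in alert
    ```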

  13. The Materials Commons: A Collaboration Platform and Information Repository for the Global Materials Community

    NASA Astrophysics Data System (ADS)

    Puchala, Brian; Tarcea, Glenn; Marquis, Emmanuelle A.; Hedstrom, Margaret; Jagadish, H. V.; Allison, John E.

    2016-08-01

    Accelerating the pace of materials discovery and development requires new approaches and means of collaborating and sharing information. To address this need, we are developing the Materials Commons, a collaboration platform and information repository for use by the structural materials community. The Materials Commons has been designed to be a continuous, seamless part of the scientific workflow process. Researchers upload the results of experiments and computations as they are performed, automatically where possible, along with the provenance information describing the experimental and computational processes. The Materials Commons website provides an easy-to-use interface for uploading and downloading data and data provenance, as well as for searching and sharing data. This paper provides an overview of the Materials Commons. Concepts are also outlined for integrating the Materials Commons with the broader Materials Information Infrastructure that is evolving to support the Materials Genome Initiative.

  14. Quantum information processing with superconducting circuits: a review.

    PubMed

    Wendin, G

    2017-10-01

    During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

  15. Quantum information processing with superconducting circuits: a review

    NASA Astrophysics Data System (ADS)

    Wendin, G.

    2017-10-01

    During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

  16. Data management of a multilaboratory field program using distributed processing. [PRECP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tichler, J.L.

    The PRECP program is a multilaboratory research effort conducted by the US Department of Energy as a part of the National Acid Precipitation Assessment Program (NAPAP). The primary objective of PRECP is to provide essential information for the quantitative description of chemical wet deposition as a function of air pollution loadings, geographic location, and atmospheric processing. The program is broken into four closely interrelated sectors: Diagnostic Modeling; Field Measurements; Laboratory Measurements; and Climatological Evaluation. Data management tasks are: compile databases of the data collected in field studies; verify the contents of data sets; make data available to program participants either on-line or by means of computer tapes; perform requested analyses, graphical displays, and data aggregations; provide an index of what data is available; and provide documentation for field programs both as part of the computer database and as data reports.

  17. Global satellite composites - 20 years of evolution

    NASA Astrophysics Data System (ADS)

    Kohrs, Richard A.; Lazzara, Matthew A.; Robaidek, Jerrold O.; Santek, David A.; Knuth, Shelley L.

    2014-01-01

    For two decades, the University of Wisconsin Space Science and Engineering Center (SSEC) and the Antarctic Meteorological Research Center (AMRC) have been creating global, regional, and hemispheric satellite composites. These composites have proven useful in research, operational forecasting, commercial applications, and educational outreach. Using the Man computer Interactive Data Access System (McIDAS) software developed at SSEC, infrared window composites were created by combining Geostationary Operational Environmental Satellite (GOES) and polar-orbiting data from the SSEC Data Center with polar data acquired at McMurdo and Palmer stations, Antarctica. Increased computer processing speed has allowed more advanced algorithms to address the decision-making process for co-located pixels. The algorithms have evolved from a simplistic maximum brightness temperature criterion to those that account for distance from the sub-satellite point, parallax displacement, pixel time, and resolution. The composites are a state-of-the-art means for merging/mosaicking satellite imagery.
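    The compositing step must pick one value wherever pixels from different satellites overlap. The sketch below is a toy illustration only: the record names the criteria (brightness temperature, distance from the sub-satellite point, pixel time, resolution), but the scoring form and weights here are invented for illustration.

    ```python
    def composite(pixels):
        """Choose one value per grid cell from several co-located satellite pixels.
        Each candidate is (brightness_temp_K, dist_from_subsatellite_deg, age_min).
        Illustrative score: prefer pixels closer to nadir and more recent, with a
        small preference for warmer pixels (echoing the early
        maximum-brightness-temperature approach)."""
        def score(p):
            bt, dist, age = p
            return -0.5 * dist - 0.1 * age + 0.01 * bt
        return max(pixels, key=score)

    # Two satellites see the same grid cell; the view nearer the sub-satellite point wins
    candidates = [(255.0, 8.0, 20.0), (258.0, 35.0, 5.0)]
    print(composite(candidates))
    ```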

  18. Computational Fluid Dynamic Analysis of Enhancing Passenger Cabin Comfort Using PCM

    NASA Astrophysics Data System (ADS)

    Purusothaman, M.; Valarmathi, T. N.; Dada Mohammad, S. K.

    2016-09-01

    The main purpose of this study is to determine a cost-effective way to enhance passenger cabin comfort by analyzing the effect of solar radiation on a vehicle parked in the open, which is exposed to constant solar radiation on a hot and sunny day. Maximum heat accumulation occurs in the car cabin due to the solar radiation. By means of computational fluid dynamics (CFD) analysis, a simulation is conducted of the thermal regulation of the passenger cabin using a layer of phase change material (PCM) on the roof structure of a stationary car exposed to ambient temperature on a hot sunny day. The heat energy accumulated in the passenger cabin is absorbed by the layer of PCM through the phase change process. The installation of a ventilation system, which uses an exhaust fan to create natural convection in the cabin, is also considered to enhance passenger comfort along with the PCM.

  19. Evolutionary fuzzy modeling human diagnostic decisions.

    PubMed

    Peña-Reyes, Carlos Andrés

    2004-05-01

    Fuzzy CoCo is a methodology, combining fuzzy logic and evolutionary computation, for constructing systems able to accurately predict the outcome of a human decision-making process, while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (accuracy) and linguistic representation (interpretability). However, fuzzy modeling--meaning the construction of fuzzy systems--is an arduous task, demanding the identification of many parameters. To solve it, we use evolutionary computation techniques (specifically cooperative coevolution), which are widely used to search for adequate solutions in complex spaces. We have successfully applied the algorithm to model the decision processes involved in two breast cancer diagnostic problems, the WBCD problem and the Catalonia mammography interpretation problem, obtaining systems both of high performance and high interpretability. For the Catalonia problem, an evolved system was embedded within a Web-based tool--called COBRA--for aiding radiologists in mammography interpretation.

  20. Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI

    PubMed Central

    Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.

    2016-01-01

    We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524
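    The core operation named above, kernel regression of model-valued data, reduces in the simplest scalar case to a Nadaraya-Watson locally weighted average. The sketch below shows only that scalar case; it does not handle fiber orientations, volume fractions, or the directional divergences used in the paper, and the data are synthetic.

    ```python
    import numpy as np

    def kernel_regression(x_train, y_train, x_query, bandwidth=1.0):
        """Nadaraya-Watson kernel regression with a Gaussian kernel: the estimate
        at each query point is a locally weighted average of the training values."""
        x_train = np.asarray(x_train, dtype=float)
        y_train = np.asarray(y_train, dtype=float)
        x_query = np.asarray(x_query, dtype=float)
        d2 = (x_query[:, None] - x_train[None, :]) ** 2
        w = np.exp(-0.5 * d2 / bandwidth**2)
        return (w @ y_train) / w.sum(axis=1)

    # Hypothetical scalar example (the paper's models are fiber-orientation mixtures)
    x = np.linspace(0, 10, 50)
    y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
    y_smooth = kernel_regression(x, y, x, bandwidth=0.8)
    ```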

  1. Cloud based emergency health care information service in India.

    PubMed

    Karthikeyan, N; Sukanesh, R

    2012-12-01

    A hospital is a health care organization providing patient treatment by expert physicians, surgeons, and equipment. A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. In developing countries, illiteracy is a major root cause of deaths resulting from uncertain diseases, constituting a serious public health problem. Mentally affected, differently abled, and unconscious patients cannot communicate their medical history to medical practitioners. Also, medical practitioners cannot edit or view DICOM images instantly. Our aim is to provide a palm-vein-pattern-recognition-based medical record retrieval system, using cloud computing, for the above-mentioned people. Distributed computing technology is emerging in new forms such as grid computing and cloud computing, which promise to deliver Information Technology (IT) as a service. In this paper, we describe how these new forms of distributed computing can be helpful for the modern health care industry. Cloud computing is extending its benefits to industrial sectors, especially in medical scenarios: IT-related capabilities and resources are provided as services, on demand, via distributed computing. This paper is concerned with delivering software as a service (SaaS) by means of cloud computing, with the aim of bringing the emergency health care sector under one umbrella with physically secured patient records. In framing emergency healthcare treatment, the crucial information needed for decisions about patients is their previous health records; thus ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises secure access to patient records. Likewise, our paper presents an efficient means to view, edit, or transfer DICOM images instantly, which has been a challenging task for medical practitioners in past years. We have developed two services for health care: (1) a cloud-based palm vein recognition system and (2) distributed medical image processing tools for medical practitioners.

  2. Comprehensive evaluations of cone-beam CT dose in image-guided radiation therapy via GPU-based Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Montanari, Davide; Scolari, Enrica; Silvestri, Chiara; Jiang Graves, Yan; Yan, Hao; Cervino, Laura; Rice, Roger; Jiang, Steve B.; Jia, Xun

    2014-03-01

    Cone beam CT (CBCT) has been widely used for patient setup in image-guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are (1) to commission a graphics processing unit (GPU)-based Monte Carlo (MC) dose calculation package, gCTD, for the Varian On-Board Imaging (OBI) system and test the calculation accuracy, and (2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD are obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. Twenty-five brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in a pelvis protocol and a pelvis spotlight protocol. Mean dose to each organ is calculated. Mean dose to the 2% of voxels that receive the highest dose is also computed to quantify the maximum dose. It is found that the mean dose to an organ varies considerably among patients. Moreover, the dose distribution is highly non-homogeneous inside an organ. The maximum dose is found to be 1-3 times higher than the mean dose depending on the organ, and up to eight times higher for the entire body due to the very high dose region in bony structures. High computational efficiency has also been observed in our studies, such that MC dose calculation time is less than 5 min for a typical case.
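    The two summary metrics used above, mean organ dose and mean dose to the top 2% of voxels, are easy to express directly. The sketch below assumes a dose array and a boolean organ mask and uses synthetic numbers, not the study's data.

    ```python
    import numpy as np

    def organ_dose_metrics(dose, organ_mask, top_fraction=0.02):
        """Mean organ dose and mean dose to the top 2% highest-dose voxels,
        the latter serving as a maximum-dose surrogate as in this record."""
        d = dose[organ_mask]
        n_top = max(1, int(round(top_fraction * d.size)))
        top = np.sort(d)[-n_top:]
        return d.mean(), top.mean()

    # Hypothetical 3-D dose grid (Gy) and a boolean organ mask
    rng = np.random.default_rng(1)
    dose = rng.gamma(shape=2.0, scale=0.005, size=(64, 64, 64))
    mask = np.zeros_like(dose, dtype=bool)
    mask[20:40, 20:40, 20:40] = True
    mean_dose, top2_dose = organ_dose_metrics(dose, mask)
    print(f"mean = {mean_dose:.4f} Gy, top-2% mean = {top2_dose:.4f} Gy")
    ```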

  3. Lumping of degree-based mean-field and pair-approximation equations for multistate contact processes

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca

    2018-01-01

    Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree kmax of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large kmax. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
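    The aggregation procedure described above replaces per-degree equations with per-cluster equations. The sketch below shows only the preliminary step of grouping a degree sequence into a small number of clusters and computing a representative degree per cluster; equal-population binning is an assumption for illustration, and the paper's own clustering criterion may differ.

    ```python
    import numpy as np

    def degree_clusters(degrees, n_clusters=10):
        """Group nodes into a small number of clusters of similar degree so that
        one set of AME/PA equations can be written per cluster instead of per degree."""
        degrees = np.asarray(degrees)
        order = np.argsort(degrees)                   # nodes sorted by degree
        chunks = np.array_split(order, n_clusters)    # contiguous, equal-sized groups
        cluster_of = np.empty(degrees.size, dtype=int)
        reps = np.empty(len(chunks))
        for c, members in enumerate(chunks):
            cluster_of[members] = c
            reps[c] = degrees[members].mean()         # representative degree per cluster
        return reps, cluster_of

    # Hypothetical heavy-tailed degree sequence
    rng = np.random.default_rng(0)
    deg = rng.zipf(2.5, size=5000)
    reps, cluster_of = degree_clusters(deg, n_clusters=8)
    print(reps.round(2))
    ```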

  4. Applying Strategic Visualization(Registered Trademark) to Lunar and Planetary Mission Design

    NASA Technical Reports Server (NTRS)

    Frassanito, John R.; Cooke, D. R.

    2002-01-01

    NASA teams, such as the NASA Exploration Team (NEXT), utilize advanced computational visualization processes to develop mission designs and architectures for lunar and planetary missions. One such process, Strategic Visualization (trademark), is a tool used extensively to help mission designers visualize various design alternatives and present them to other participants of their team. The participants, which may include NASA, industry, and the academic community, are distributed within a virtual network. Consequently, computer animation and other digital techniques provide an efficient means to communicate top-level technical information among team members. Today,Strategic Visualization(trademark) is used extensively both in the mission design process within the technical community, and to communicate the value of space exploration to the general public. Movies and digital images have been generated and shown on nationally broadcast television and the Internet, as well as in magazines and digital media. In our presentation will show excerpts of a computer-generated animation depicting the reference Earth/Moon L1 Libration Point Gateway architecture. The Gateway serves as a staging corridor for human expeditions to the lunar poles and other surface locations. Also shown are crew transfer systems and current reference lunar excursion vehicles as well as the Human and robotic construction of an inflatable telescope array for deployment to the Sun/Earth Libration Point.

  5. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation analogue' of algorithmic information complexity. It is proven in that second paper that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that can be applicable throughout our universe.

  6. Encryption for Remote Control via Internet or Intranet

    NASA Technical Reports Server (NTRS)

    Lineberger, Lewis

    2005-01-01

    A data-communication protocol has been devised to enable secure, reliable remote control of processes and equipment via a collision-based network, while using minimal bandwidth and computation. The network could be the Internet or an intranet. Control is made secure by use of both a password and a dynamic key, which is sent transparently to a remote user by the controlled computer (that is, the computer, located at the site of the equipment or process to be controlled, that exerts direct control over the process). The protocol functions in the presence of network latency, overcomes errors caused by missed dynamic keys, and defeats attempts by unauthorized remote users to gain control. The protocol is not suitable for real-time control, but is well suited for applications in which control latencies up to about 0.5 second are acceptable. The encryption scheme involves the use of both a dynamic and a private key, without any additional overhead that would degrade performance. The dynamic key is embedded in the equipment- or process-monitor data packets sent out by the controlled computer: in other words, the dynamic key is a subset of the data in each such data packet. The controlled computer maintains a history of the last 3 to 5 data packets for use in decrypting incoming control commands. In addition, the controlled computer records a private key (password) that is given to the remote computer. The encrypted incoming command is permuted by both the dynamic and private key. A person who records the command data in a given packet for hostile purposes cannot use that packet after the public key expires (typically within 3 seconds). Even a person in possession of an unauthorized copy of the command/remote-display software cannot use that software in the absence of the password. The use of a dynamic key embedded in the outgoing data makes the central-processing unit overhead very small. The use of a National Instruments DataSocket(TradeMark) (or equivalent) protocol or the User Datagram Protocol makes it possible to obtain reasonably short response times: Typical response times in event-driven control, using packets sized ~300 bytes, are <0.2 second for commands issued from locations anywhere on Earth. The protocol requires that control commands represent absolute values of controlled parameters (e.g., a specified temperature), as distinguished from changes in values of controlled parameters (e.g., a specified increment of temperature). Each command is issued three or more times to ensure delivery in crowded networks. The use of absolute-value commands prevents additional (redundant) commands from causing trouble. Because a remote controlling computer receives "talkback" in the form of data packets from the controlled computer, typically within a time interval of 1 s or less, the controlling computer can re-issue a command if network failure has occurred. The controlled computer, the process or equipment that it controls, and any human operator(s) at the site of the controlled equipment or process should be equipped with safety measures to prevent damage to equipment or injury to humans. These features could be a combination of software, external hardware, and intervention by the human operator(s). The protocol is not fail-safe, but by adopting these safety measures as part of the protocol, one makes the protocol a robust means of controlling remote processes and equipment by use of typical office computers via intranets and/or the Internet.
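    The description above combines a private password with a dynamic key carried in the monitor packets, with the controlled computer keeping the last few dynamic keys for decryption. The sketch below is a generic XOR-keystream stand-in for that idea, not the article's actual permutation scheme; every identifier, value, and the keystream derivation are hypothetical.

    ```python
    import hashlib

    def keystream(private_key: bytes, dynamic_key: bytes, length: int) -> bytes:
        """Derive a keystream from the private password and the dynamic key
        (illustrative only; the article's permutation is not specified here)."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(private_key + dynamic_key
                                  + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt(command: bytes, private_key: bytes, dynamic_key: bytes) -> bytes:
        ks = keystream(private_key, dynamic_key, len(command))
        return bytes(c ^ k for c, k in zip(command, ks))

    decrypt = encrypt   # XOR is its own inverse

    private = b"shared-password"
    recent_dynamic_keys = [b"pkt-0041", b"pkt-0042", b"pkt-0043"]   # hypothetical
    cmd = b"SET_TEMP 72.5"                                          # absolute-value command
    wire = encrypt(cmd, private, recent_dynamic_keys[-1])

    # The controlled side does not know which recent packet the remote used,
    # so it tries the dynamic keys of its last few outgoing packets.
    recovered = None
    for k in reversed(recent_dynamic_keys):
        candidate = decrypt(wire, private, k)
        if candidate.startswith(b"SET_TEMP"):       # crude validity check for the sketch
            recovered = candidate
            break
    print(recovered)
    ```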

  7. Improving the Computational Effort of Set-Inversion-Based Prandial Insulin Delivery for Its Integration in Insulin Pumps

    PubMed Central

    León-Vargas, Fabian; Calm, Remei; Bondia, Jorge; Vehí, Josep

    2012-01-01

    Objective Set-inversion-based prandial insulin delivery is a new model-based bolus advisor for postprandial glucose control in type 1 diabetes mellitus (T1DM). It automatically coordinates the values of basal–bolus insulin to be infused during the postprandial period so as to achieve some predefined control objectives. However, the method requires an excessive computation time to compute the solution set of feasible insulin profiles, which impedes its integration into an insulin pump. In this work, a new algorithm is presented, which reduces computation time significantly and enables the integration of this new bolus advisor into current processing features of smart insulin pumps. Methods A new strategy was implemented that focused on finding the combined basal–bolus solution of interest rather than an extensive search of the feasible set of solutions. Analysis of interval simulations, inclusion of physiological assumptions, and search domain contractions were used. Data from six real patients with T1DM were used to compare the performance between the optimized and the conventional computations. Results In all cases, the optimized version yielded the basal–bolus combination recommended by the conventional method and in only 0.032% of the computation time. Simulations show that the mean number of iterations for the optimized computation requires approximately 3.59 s at 20 MHz processing power, in line with current features of smart pumps. Conclusions A computationally efficient method for basal–bolus coordination in postprandial glucose control has been presented and tested. The results indicate that an embedded algorithm within smart insulin pumps is now feasible. Nonetheless, we acknowledge that a clinical trial will be needed in order to justify this claim. PMID:23294789

  8. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

    r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) provides built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  9. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, James; Ford, Ian J.

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable “gauge” transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
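    For the size-independent (constant-kernel) case treated in this record, the mean-field population rate equations reduce to the familiar Smoluchowski form. As a reference point, with notation assumed here rather than taken from the paper, writing K for the constant coagulation rate and \(\langle n_k\rangle\) for the mean number of k-mers,

    \[
      \frac{d\langle n_k\rangle}{dt} \;=\; \frac{K}{2}\sum_{i+j=k}\langle n_i\rangle\,\langle n_j\rangle \;-\; K\,\langle n_k\rangle\sum_{j\ge 1}\langle n_j\rangle ,
    \]

    where the mean-field step replaces the pair average \(\langle n_i n_j\rangle\) by the product \(\langle n_i\rangle\langle n_j\rangle\); it is precisely this factorization that the Poisson-representation treatment avoids.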

  10. The origins of computer weather prediction and climate modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Peter

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  11. The origins of computer weather prediction and climate modeling

    NASA Astrophysics Data System (ADS)

    Lynch, Peter

    2008-03-01

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  12. Computational screening of disease-associated mutations in OCA2 gene.

    PubMed

    Kamaraj, Balu; Purohit, Rituraj

    2014-01-01

    Oculocutaneous albinism type 2 (OCA2), caused by mutations of the OCA2 gene, is an autosomal recessive disorder characterized by reduced biosynthesis of melanin pigment in the skin, hair, and eyes. The OCA2 gene encodes instructions for making a protein called the P protein. This protein plays a crucial role in melanosome biogenesis and controls the eumelanin content in melanocytes, in part via the processing and trafficking of tyrosinase, which is the rate-limiting enzyme in melanin synthesis. In this study we analyzed the pathogenic effect of 95 non-synonymous single nucleotide polymorphisms reported in the OCA2 gene using computational methods. We found the R305W mutation to be the most deleterious and disease-associated using the SIFT, PolyPhen, PANTHER, PhD-SNP, Pmut, and MutPred tools. To understand the atomic arrangement in 3D space, the native and mutant (R305W) structures were modeled. Molecular dynamics simulation was conducted to observe the structural significance of the computationally prioritized disease-associated mutation (R305W). Root-mean-square deviation, root-mean-square fluctuation, radius of gyration, solvent accessible surface area, hydrogen bond (NH bond), trace of covariance matrix, eigenvector projection analysis, and density analysis results showed a prominent loss of stability and a rise in mutant flexibility values in 3D space. This study presents a well-designed computational methodology to examine albinism-associated SNPs.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poutanen, Juri, E-mail: juri.poutanen@utu.fi

    Rosseland mean opacity plays an important role in theories of stellar evolution and X-ray burst models. In the high-temperature regime, when most of the gas is completely ionized, the opacity is dominated by Compton scattering. Our aim here is to critically evaluate previous works on this subject and to compute the exact Rosseland mean opacity for Compton scattering over a broad range of temperature and electron degeneracy parameter. We use relativistic kinetic equations for Compton scattering and compute the photon mean free path as a function of photon energy by solving the corresponding integral equation in the diffusion limit. As a byproduct we also demonstrate the way to compute photon redistribution functions in the case of degenerate electrons. We then compute the Rosseland mean opacity as a function of temperature and electron degeneracy and present useful approximate expressions. We compare our results to previous calculations and find a significant difference in the low-temperature regime and strong degeneracy. We then proceed to compute the flux mean opacity in both free-streaming and diffusion approximations, and show that the latter is nearly identical to the Rosseland mean opacity. We also provide a simple way to account for the true absorption in evaluating the Rosseland and flux mean opacities.
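    For reference, the Rosseland mean computed in this record is the standard harmonic, temperature-derivative-weighted average of the monochromatic opacity \(\kappa_\nu\) (here dominated by Compton scattering), with \(B_\nu(T)\) the Planck function:

    \[
      \frac{1}{\kappa_{\mathrm{R}}} \;=\;
      \frac{\displaystyle\int_0^{\infty} \kappa_\nu^{-1}\,\frac{\partial B_\nu}{\partial T}\,d\nu}
           {\displaystyle\int_0^{\infty} \frac{\partial B_\nu}{\partial T}\,d\nu}.
    \]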

  14. Eruptive event generator based on the Gibson-Low magnetic configuration

    NASA Astrophysics Data System (ADS)

    Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.

    2017-08-01

    Coronal mass ejections (CMEs), a kind of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become increasingly accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on the Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.

  15. Changing from computing grid to knowledge grid in life-science grid.

    PubMed

    Talukdar, Veera; Konar, Amit; Datta, Ayan; Choudhury, Anamika Roy

    2009-09-01

    Grid computing has a great potential to become a standard cyber infrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. Grid computing applies the resources of many computers in a network to a single problem at the same time. It is useful for scientific problems that require a great number of computer processing cycles or access to a large amount of data. As biologists, we are constantly discovering millions of genes and genome features, which are assembled in a library and distributed on computers around the world. This means that new, innovative methods must be developed that exploit the resources available for extensive calculations - for example, grid computing. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid, and knowledge grid. Computing grid technologies have matured enough to solve high-throughput, real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also around community formation for sharing tacit knowledge within a community. By extending the concept of the grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  16. Topologically preserving straightening of spinal cord MRI.

    PubMed

    De Leener, Benjamin; Mangeat, Gabriel; Dupont, Sara; Martin, Allan R; Callot, Virginie; Stikov, Nikola; Fehlings, Michael G; Cohen-Adad, Julien

    2017-10-01

    To propose a robust and accurate method for straightening magnetic resonance (MR) images of the spinal cord, based on spinal cord segmentation, that preserves spinal cord topology and that works for any MRI contrast, in the context of spinal cord template-based analysis. The spinal cord curvature was computed using an iterative Non-Uniform Rational B-Spline (NURBS) approximation. Forward and inverse deformation fields for straightening were computed by solving analytically the straightening equations for each image voxel. Computational speed-up was accomplished by solving all voxel equation systems as one single system. Straightening accuracy (mean and maximum distance from a straight line), computational time, and robustness to spinal cord length were evaluated using the proposed and the standard straightening method (label-based spline deformation) on 3T T2- and T1-weighted images from 57 healthy subjects and 33 patients with spinal cord compression due to degenerative cervical myelopathy (DCM). The proposed algorithm was more accurate, more robust, and faster than the standard method (mean distance = 0.80 vs. 0.83 mm, maximum distance = 1.49 vs. 1.78 mm, time = 71 vs. 174 sec for the healthy population and mean distance = 0.65 vs. 0.68 mm, maximum distance = 1.28 vs. 1.55 mm, time = 32 vs. 60 sec for the DCM population). A novel image straightening method that enables template-based analysis of quantitative spinal cord MRI data is introduced. This algorithm works for any MRI contrast and was validated on healthy and patient populations. The presented method is implemented in the Spinal Cord Toolbox, an open-source software package for processing spinal cord MRI data. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:1209-1219. © 2017 International Society for Magnetic Resonance in Medicine.

  17. Quantum logic gates based on ballistic transport in graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dragoman, Daniela; Academy of Romanian Scientists, Splaiul Independentei 54, 050094 Bucharest; Dragoman, Mircea, E-mail: mircea.dragoman@imt.ro

    2016-03-07

    The paper presents various configurations for the implementation of graphene-based Hadamard, C-phase, controlled-NOT, and Toffoli gates working at room temperature. These logic gates, essential for any quantum computing algorithm, involve ballistic graphene devices for qubit generation and processing and can be fabricated using existing nanolithographic techniques. All quantum gate configurations rely on the very long mean free paths of carriers in graphene at room temperature.

  18. Preliminary design of a redundant strapped down inertial navigation unit using two-degree-of-freedom tuned-gimbal gyroscopes

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This redundant strapdown INS preliminary design study demonstrates the practicality of a skewed sensor system configuration by means of: (1) devising a practical system mechanization utilizing proven strapdown instruments, (2) thoroughly analyzing the skewed sensor redundancy management concept to determine optimum geometry, data processing requirements, and realistic reliability estimates, and (3) implementing the redundant computers into a low-cost, maintainable configuration.

  19. Natural and accelerated recovery from brain damage: experimental and theoretical approaches.

    PubMed

    Andersen, Richard A; Schieber, Marc H; Thakor, Nitish; Loeb, Gerald E

    2012-03-01

    The goal of the Caltech group is to gain insight into the processes that occur within the primate nervous system during dexterous reaching and grasping and to see whether natural recovery from local brain damage can be accelerated by artificial means. We will create computational models of the nervous system embodying this insight and explain a variety of clinically observed neurological deficits in human subjects using these models.

  20. Measurement of intervertebral cervical motion by means of dynamic x-ray image processing and data interpolation.

    PubMed

    Bifulco, Paolo; Cesarelli, Mario; Romano, Maria; Fratini, Antonio; Sansone, Mario

    2013-01-01

    Accurate measurement of the intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion permit only limited measurement in vivo. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Estimates of intervertebral motion for the cervical segments were obtained by processing the patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS fitting error was about 0.2 degrees for rotation and 0.2 mm for displacement.
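
    The smoothing-spline step described above can be sketched in a few lines. A minimal sketch using scipy's UnivariateSpline as a generic smoothing spline (the frame times, angles, and smoothing factor are hypothetical, not the authors' data or implementation):

        # Smoothing-spline interpolation of a noisy intervertebral-angle time series,
        # as a generic stand-in for the interpolation step described in the abstract.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 4.0, 120)                         # hypothetical frame times (s)
        true_angle = 8.0 * np.sin(0.5 * np.pi * t)             # hypothetical intervertebral angle (deg)
        measured = true_angle + rng.normal(0.0, 0.3, t.size)   # tracking noise

        # s controls the trade-off between smoothness and fidelity to the data.
        spline = UnivariateSpline(t, measured, k=3, s=len(t) * 0.3 ** 2)
        angle_smooth = spline(t)
        angular_velocity = spline.derivative()(t)              # deg/s, available analytically

        rms_fit_error = np.sqrt(np.mean((angle_smooth - true_angle) ** 2))
        print(f"RMS fitting error: {rms_fit_error:.2f} deg")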

  1. Statistical properties of color-signal spaces.

    PubMed

    Lenz, Reiner; Bui, Thanh Hai

    2005-05-01

    In applications of principal component analysis (PCA) it has often been observed that the eigenvector with the largest eigenvalue has only nonnegative entries when the vectors of the underlying stochastic process have only nonnegative values. This has been used to show that the coordinate vectors in PCA are all located in a cone. We prove that the nonnegativity of the first eigenvector follows from Perron-Frobenius (and Krein-Rutman) theory. Experiments also show that for stochastic processes with nonnegative signals the mean vector is often very similar to the first eigenvector. This is not true in general, but we first give a heuristic explanation of why such a similarity can be expected. We then derive a connection between the dominance of the first eigenvalue and the similarity between the mean and the first eigenvector, and show how to check the relative size of the first eigenvalue without actually computing it. In the last part of the paper we discuss the implications of the theoretical results for multispectral color processing.
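
    The reported similarity between the mean vector and the first eigenvector for nonnegative signals is easy to check numerically. A small sketch on synthetic nonnegative "spectra" (the data and distribution are assumptions, not the authors' measurements):

        # Compare the mean vector with the first eigenvector of the second-moment matrix
        # for a nonnegative stochastic process (synthetic data for illustration only).
        import numpy as np

        rng = np.random.default_rng(1)
        n_samples, n_dims = 5000, 31
        signals = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_dims))  # nonnegative

        mean_vec = signals.mean(axis=0)
        second_moment = signals.T @ signals / n_samples        # elementwise nonnegative matrix
        eigvals, eigvecs = np.linalg.eigh(second_moment)
        first_eigvec = eigvecs[:, -1]                          # eigenvector of the largest eigenvalue
        first_eigvec *= np.sign(first_eigvec.sum())            # fix the sign to the nonnegative orthant

        cosine = mean_vec @ first_eigvec / np.linalg.norm(mean_vec)
        print("first eigenvector nonnegative:", bool(np.all(first_eigvec >= -1e-12)))
        print(f"cosine(mean, first eigenvector) = {cosine:.4f}")
        print(f"dominance of the first eigenvalue = {eigvals[-1] / eigvals.sum():.3f}")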

  2. Statistical properties of color-signal spaces

    NASA Astrophysics Data System (ADS)

    Lenz, Reiner; Hai Bui, Thanh

    2005-05-01

    In applications of principal component analysis (PCA) it has often been observed that the eigenvector with the largest eigenvalue has only nonnegative entries when the vectors of the underlying stochastic process have only nonnegative values. This has been used to show that the coordinate vectors in PCA are all located in a cone. We prove that the nonnegativity of the first eigenvector follows from Perron-Frobenius (and Krein-Rutman) theory. Experiments also show that for stochastic processes with nonnegative signals the mean vector is often very similar to the first eigenvector. This is not true in general, but we first give a heuristic explanation of why such a similarity can be expected. We then derive a connection between the dominance of the first eigenvalue and the similarity between the mean and the first eigenvector, and show how to check the relative size of the first eigenvalue without actually computing it. In the last part of the paper we discuss the implications of the theoretical results for multispectral color processing.

  3. Modeling rainfall-runoff process using soft computing techniques

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    The rainfall-runoff process was modeled for a small catchment in Turkey using 4 years (1987-1991) of measured rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP), all Artificial Intelligence (AI) approaches. The models were trained and tested using various combinations of the independent variables. Goodness of fit was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and a traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE = 17.82 l/s, MAE = 6.61 l/s, CE = 0.72 and R2 = 0.978) is capable of modeling the rainfall-runoff process and is a viable alternative to the other applied artificial intelligence and MLR time-series methods.
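
    The goodness-of-fit statistics listed above have standard definitions and can be computed directly; a short sketch with hypothetical observed and simulated runoff series:

        # Goodness-of-fit statistics commonly used for rainfall-runoff models:
        # RMSE, MAE, Nash-Sutcliffe coefficient of efficiency (CE) and scatter index (SI).
        import numpy as np

        def rmse(obs, sim):
            return float(np.sqrt(np.mean((sim - obs) ** 2)))

        def mae(obs, sim):
            return float(np.mean(np.abs(sim - obs)))

        def nash_sutcliffe(obs, sim):
            return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

        def scatter_index(obs, sim):
            return float(rmse(obs, sim) / obs.mean())

        # Hypothetical runoff series (l/s) for illustration only.
        observed  = np.array([12.0, 30.0, 55.0, 41.0, 22.0, 15.0])
        simulated = np.array([14.0, 27.0, 60.0, 38.0, 25.0, 13.0])
        print(rmse(observed, simulated), mae(observed, simulated),
              nash_sutcliffe(observed, simulated), scatter_index(observed, simulated))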

  4. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm.

    PubMed

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step in diagnosing particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases; it is usually done by pathologists using an optical microscope. This process is time-consuming, extremely tedious, and expensive, and it requires experienced experts in the field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be very effective. Segmentation of WBCs is usually the first step in developing such a system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages: (1) segmentation of WBCs from a microscopic image, (2) extraction of nuclei from the cell images, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that the similarity measure, precision, and sensitivity were 92.07%, 96.07%, and 94.30%, respectively, for nucleus segmentation and 92.93%, 97.41%, and 93.78% for cell segmentation. In addition, statistical analysis shows high similarity between manual segmentation and the results obtained by the proposed method.
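
    A minimal sketch of the same three ingredients (clustering of pixel colours, distance transform, marker-controlled watershed), written with scikit-learn and scikit-image rather than the authors' exact pipeline; the cluster count and the assumption that the darkest colour cluster corresponds to WBC nuclei are illustrative choices:

        # K-means clustering of pixel colours followed by a marker-controlled watershed
        # to separate touching cells: a generic sketch of the combination described above.
        import numpy as np
        from sklearn.cluster import KMeans
        from scipy import ndimage as ndi
        from skimage.segmentation import watershed
        from skimage.feature import peak_local_max

        def segment_wbc(rgb_image, n_clusters=3):
            h, w, _ = rgb_image.shape
            pixels = rgb_image.reshape(-1, 3).astype(float)

            # 1) K-means on colour; assume the darkest cluster corresponds to nuclei/WBCs.
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
            darkest = np.argmin(km.cluster_centers_.sum(axis=1))
            mask = (km.labels_ == darkest).reshape(h, w)

            # 2) Marker-controlled watershed on the distance transform to split touching cells.
            distance = ndi.distance_transform_edt(mask)
            peaks = peak_local_max(distance, min_distance=10, labels=mask)
            markers = np.zeros_like(distance, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            labels = watershed(-distance, markers, mask=mask)
            return labels   # integer label image, one label per separated cell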

  5. Mean-field thalamocortical modeling of longitudinal EEG acquired during intensive meditation training.

    PubMed

    Saggar, Manish; Zanesco, Anthony P; King, Brandon G; Bridwell, David A; MacLean, Katherine A; Aichele, Stephen R; Jacobs, Tonya L; Wallace, B Alan; Saron, Clifford D; Miikkulainen, Risto

    2015-07-01

    Meditation training has been shown to enhance attention and improve emotion regulation. However, the brain processes associated with such training are poorly understood and a computational modeling framework is lacking. Modeling approaches that can realistically simulate neurophysiological data while conforming to basic anatomical and physiological constraints can provide a unique opportunity to generate concrete and testable hypotheses about the mechanisms supporting complex cognitive tasks such as meditation. Here we applied the mean-field computational modeling approach using the scalp-recorded electroencephalogram (EEG) collected at three assessment points from meditating participants during two separate 3-month-long shamatha meditation retreats. We modeled cortical, corticothalamic, and intrathalamic interactions to generate a simulation of EEG signals recorded across the scalp. We also present two novel extensions to the mean-field approach that allow for: (a) non-parametric analysis of changes in model parameter values across all channels and assessments; and (b) examination of variation in modeled thalamic reticular nucleus (TRN) connectivity over the retreat period. After successfully fitting whole-brain EEG data across three assessment points within each retreat, two model parameters were found to replicably change across both meditation retreats. First, after training, we observed an increased temporal delay between modeled cortical and thalamic cells. This increase provides a putative neural mechanism for a previously observed reduction in individual alpha frequency in these same participants. Second, we found decreased inhibitory connection strength between the TRN and secondary relay nuclei (SRN) of the modeled thalamus after training. This reduction in inhibitory strength was found to be associated with increased dynamical stability of the model. Altogether, this paper presents the first computational approach, taking core aspects of physiology and anatomy into account, to formally model brain processes associated with intensive meditation training. The observed changes in model parameters inform theoretical accounts of attention training through meditation, and may motivate future study on the use of meditation in a variety of clinical populations. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A report on the ST ScI optical disk workstation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The STScI optical disk project was designed to explore the options, opportunities and problems presented by optical disk technology, and to determine whether optical disks are a viable and inexpensive means of storing the large amounts of data found in astronomical digital imagery. A separate workstation was purchased on which the development could be done; it serves as an astronomical image processing computer, incorporating the optical disks into the solution of standard image processing tasks. It is indicated that small workstations can be powerful tools for image processing, and that astronomical image processing may be more conveniently and cost-effectively performed on microcomputers than on mainframes and super-minicomputers. The optical disks provide unique capabilities in data storage.

  7. Computer literacy among first year medical students in a developing country: A cross sectional study

    PubMed Central

    2012-01-01

    Background The use of computer assisted learning (CAL) has enhanced undergraduate medical education. CAL improves performance at examinations, develops problem solving skills and increases student satisfaction. This study evaluates computer literacy among first year medical students in Sri Lanka. Methods The study was conducted at the Faculty of Medicine, University of Colombo, Sri Lanka between August and September 2008. First year medical students (n = 190) were invited to participate. Data on computer literacy and associated factors were collected with an expert-validated, pre-tested, self-administered questionnaire. Computer literacy was evaluated by testing knowledge in 6 domains: common software packages, operating systems, database management, and the use of the internet and e-mail. A linear regression was conducted using the total score for computer literacy as the continuous dependent variable and the other factors as independent covariates. Results The sample size was 181 (response rate 95.3%); 49.7% were male. The majority of students (77.3%) owned a computer (males 74.4%, females 80.2%). Students had gained their present computer knowledge through a formal training programme (64.1%), self learning (63.0%) or peer learning (49.2%). The students used computers predominantly for word processing (95.6%), entertainment (95.0%), web browsing (80.1%) and preparing presentations (76.8%). The majority of students (75.7%) expressed willingness to attend a formal computer training programme at the faculty. The mean score on the computer literacy questionnaire was 48.4 ± 20.3, with no significant gender difference (males 47.8 ± 21.1, females 48.9 ± 19.6). 47.9% of students scored less than 50% on the computer literacy questionnaire. Students from Colombo district, Western Province, and students owning a computer had significantly higher mean scores than other students (p < 0.001). In the linear regression analysis, formal computer training was the strongest predictor of computer literacy (β = 13.034), followed by use of an internet facility, being from Western Province, using computers for web browsing and computer programming, computer ownership, and having taken IT (Information Technology) as a subject at the GCE (A/L) examination. Conclusion Sri Lankan medical undergraduates had a low-to-intermediate level of computer literacy. There is a need to improve computer literacy, either by increasing computer training in schools or by introducing computer training in the initial stages of the undergraduate programme. Both options require improvements in infrastructure and other resources. PMID:22980096

  8. Implementation of Lean System on Erbium Doped Fibre Amplifier Manufacturing Process to Reduce Production Time

    NASA Astrophysics Data System (ADS)

    Maneechote, T.; Luangpaiboon, P.

    2010-10-01

    The manufacturing process of erbium doped fibre amplifiers is complicated. It must satisfy customer requirements under present economic conditions, in which products need to be shipped as soon as possible after purchase orders are received. This research aims to study and improve the processes and production lines of erbium doped fibre amplifiers using lean manufacturing systems via computer simulation. Three lean tool-box scenarios are selected via the expert system. In the first, a production schedule based on shipment date is combined with a first-in-first-out (FIFO) control system. The second scenario focuses on a designed flow-process plant layout. The third combines the flow-process plant layout with the shipment-date-based production schedule and the FIFO control system. Computer simulation, using expected values for the limited data available, is used to observe the performance of all scenarios. The most preferable lean tool-box systems from the simulation are then implemented in the real production process of erbium doped fibre amplifiers. A comparison is carried out to determine the actual performance via an analysis of variance of the response, the production time per unit, achieved in each scenario. The adequacy of the linear statistical model is checked via the experimental errors, or residuals, for normality, constant variance and independence. The results show that the hybrid scenario, combining lean manufacturing with first-in-first-out control and the flow-process plant layout, statistically leads to better performance in terms of both the mean and the variance of the production times.
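
    The scenario comparison via analysis of variance can be sketched directly; the production-time samples below are hypothetical placeholders, not the study's data:

        # One-way ANOVA on production time per unit for three lean scenarios, plus a
        # quick normality check of the residuals (hypothetical data for illustration).
        import numpy as np
        from scipy import stats

        scenario_a = np.array([54.2, 52.8, 55.1, 53.6, 54.9])   # FIFO + shipment-date schedule
        scenario_b = np.array([51.3, 50.7, 52.2, 51.9, 50.5])   # flow-process plant layout
        scenario_c = np.array([48.9, 49.5, 48.2, 49.1, 48.7])   # hybrid scenario

        f_stat, p_value = stats.f_oneway(scenario_a, scenario_b, scenario_c)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

        # Residuals: deviation of each observation from its scenario mean.
        residuals = np.concatenate([g - g.mean() for g in (scenario_a, scenario_b, scenario_c)])
        print("Shapiro-Wilk normality p-value:", stats.shapiro(residuals).pvalue)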

  9. Automated technical validation--a real time expert system for decision support.

    PubMed

    de Graeve, J S; Cambus, J P; Gruson, A; Valdiguié, P M

    1996-04-15

    Dealing daily with various machines and various control specimens produces a large amount of data that cannot be processed manually. In order to support decision-making, we wrote specific software that handles traditional QC, patient data (mean of normals, delta check), and criteria related to the analytical equipment (flags and alarms). Four machines (3 Ektachem 700 and 1 Hitachi 911) analysing 25 common chemical tests are controlled. Every day, three different control specimens, plus a fourth once a week (regional survey), are run on the various pieces of equipment. The data are collected on a 486 microcomputer connected to the central computer. For every parameter the standard deviation is compared with the published acceptable limits and Westgard's rules are evaluated. The mean of normals is continuously monitored. The final decision triggers either an audible alarm and a print-out of the cause of rejection or, if no alarm occurs, the daily print-out of the recorded data, with or without Levey-Jennings graphs.
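
    Two of the classical Westgard rules referred to above (1-3s and 2-2s) can be expressed compactly. A sketch that assumes a known target mean and SD for the control material (the values are invented):

        # Minimal Westgard-style checks on a series of control results.
        # 1-3s: one result beyond +/- 3 SD; 2-2s: two consecutive results beyond the
        # same +/- 2 SD limit. The target mean/SD of the control material are assumed known.
        def westgard_flags(values, target_mean, target_sd):
            z = [(v - target_mean) / target_sd for v in values]
            rule_1_3s = any(abs(x) > 3 for x in z)
            rule_2_2s = any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
                            for i in range(len(z) - 1))
            return {"1-3s": rule_1_3s, "2-2s": rule_2_2s}

        # Example: daily glucose control results (mmol/l), hypothetical target 5.50 +/- 0.10.
        print(westgard_flags([5.52, 5.48, 5.71, 5.73, 5.55], 5.50, 0.10))  # {'1-3s': False, '2-2s': True}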

  10. Parameter estimation supplement to the Mission Analysis Evaluation and Space Trajectory Operations program (MAESTRO)

    NASA Technical Reports Server (NTRS)

    Bjorkman, W. S.; Uphoff, C. W.

    1973-01-01

    This Parameter Estimation Supplement describes the PEST computer program and gives instructions for its use in determining lunar gravitational field coefficients. PEST was developed for use in the RAE-B lunar orbiting mission as a means of lunar field recovery. The observations processed by PEST are short-arc osculating orbital elements. These observations are the end product of an orbit determination process obtained with another program. PEST's end product is a set of harmonic coefficients to be used in long-term prediction of the lunar orbit. PEST employs some novel techniques in its estimation process, notably a square batch estimator and linear variational equations in the orbital elements (both osculating and mean) for measurement sensitivities. The program's capabilities are described, and operating instructions and input/output examples are given. PEST utilizes MAESTRO routines for its trajectory propagation. PEST's program structure and the subroutines that are not common to MAESTRO are described. Some of the theoretical background for the estimation process and a derivation of the linear variational equations for the Method 7 elements are included.

  11. The weak coherence account: detail-focused cognitive style in autism spectrum disorders.

    PubMed

    Happé, Francesca; Frith, Uta

    2006-01-01

    "Weak central coherence" refers to the detail-focused processing style proposed to characterise autism spectrum disorders (ASD). The original suggestion of a core deficit in central processing resulting in failure to extract global form/meaning, has been challenged in three ways. First, it may represent an outcome of superiority in local processing. Second, it may be a processing bias, rather than deficit. Third, weak coherence may occur alongside, rather than explain, deficits in social cognition. A review of over 50 empirical studies of coherence suggests robust findings of local bias in ASD, with mixed findings regarding weak global processing. Local bias appears not to be a mere side-effect of executive dysfunction, and may be independent of theory of mind deficits. Possible computational and neural models are discussed.

  12. Correlating Lagrangian structures with forcing in two-dimensional flow

    NASA Astrophysics Data System (ADS)

    Ouellette, Nicholas; Hogg, Charlie; Liao, Yang

    2015-11-01

    Lagrangian coherent structures (LCSs) are the dominant transport barriers in unsteady, aperiodic flows, and their role in organizing mixing and transport has been well documented. However, nearly all that is known about LCSs has been gleaned from passive observations: they are computed in a post-processing step after a flow has been observed, and used to understand why the mixing and transport proceeded as it did. Here, we instead take a first step toward controlling the presence or locations of LCSs by studying the relationship between LCSs and external forcing in an experimental quasi-two-dimensional weakly turbulent flow. We find that the likelihood of finding a repelling LCS at a given location is positively correlated with the mean strain rate injected at that point and negatively correlated with the mean speed, and that it is not correlated with the vorticity. We also find that the mean time between successive LCSs appearing at a fixed location is related to the structure of the forcing field. Finally, we demonstrate a surprising difference in our results between LCSs computed forward and backwards in time, with forward-time (repelling) LCSs showing much more correlation with the forcing than backwards-time (attracting) LCSs.

  13. Characterization Of Flow Stress Of Different AA6082 Alloys By Means Of Hot Torsion Test

    NASA Astrophysics Data System (ADS)

    Donati, Lorenzo; El Mehtedi, Mohamad

    2011-05-01

    FEM simulations have become the most powerful tools for optimizing the different aspects of the extrusion process, and an accurate flow stress definition of the alloy is a prerequisite for reliable simulations. In the paper, the determination of flow stress by means of the hot torsion test is first presented and discussed: the approximations usually introduced in flow stress computation are described and evaluated for an AA6082 alloy in order to show their effect on the curve shapes. The procedure for regressing the parameters of the sine-hyperbolic flow stress definition is described in detail and applied to these results. Four different alloys, taken from different casting batches but all nominally belonging to the 6082 class, were then hot-torsion tested at comparable temperatures and strain rates up to specimen failure. The results are analyzed and discussed in order to understand whether a mean flow stress behaviour can be identified for the whole material class under the tested conditions or whether specific factors (chemical composition of the alloy, specimen shape, etc.) influence the material properties to a greater degree.
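
    The sine-hyperbolic constitutive law usually regressed from hot torsion data can be fitted with a generic nonlinear least-squares routine. A sketch assuming the common Garofalo form sigma = (1/alpha)*asinh((Z/A)^(1/n)) with Z = strain_rate*exp(Q/(R*T)); the functional form, starting values, and data points are illustrative assumptions, not taken from the paper:

        # Regression of the parameters (A, alpha, n, Q) of the sine-hyperbolic flow stress
        # law sigma = (1/alpha) * asinh((Z/A)**(1/n)), with Z = strain_rate * exp(Q/(R*T)).
        import numpy as np
        from scipy.optimize import curve_fit

        R = 8.314  # J/(mol K)

        def flow_stress(X, A, alpha, n, Q):
            strain_rate, T = X
            Z = strain_rate * np.exp(Q / (R * T))
            return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

        # Placeholder torsion data: (strain rate 1/s, temperature K) -> peak stress (MPa).
        strain_rate = np.array([0.01, 0.1, 1.0, 0.01, 0.1, 1.0])
        temperature = np.array([723., 723., 723., 823., 823., 823.])
        sigma_peak  = np.array([38., 52., 68., 22., 33., 46.])

        p0 = (1e12, 0.03, 5.0, 2.0e5)     # initial guess for A, alpha, n, Q
        params, _ = curve_fit(flow_stress, (strain_rate, temperature), sigma_peak,
                              p0=p0, maxfev=20000)
        print(dict(zip(("A", "alpha", "n", "Q"), params)))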

  14. Hybrid quantum and classical methods for computing kinetic isotope effects of chemical reactions in solutions and in enzymes.

    PubMed

    Gao, Jiali; Major, Dan T; Fan, Yao; Lin, Yen-Lin; Ma, Shuhua; Wong, Kin-Yiu

    2008-01-01

    A method for incorporating quantum mechanics into enzyme kinetics modeling is presented. Three aspects are emphasized: 1) combined quantum mechanical and molecular mechanical methods are used to represent the potential energy surface for modeling bond forming and breaking processes, 2) instantaneous normal mode analyses are used to incorporate quantum vibrational free energies to the classical potential of mean force, and 3) multidimensional tunneling methods are used to estimate quantum effects on the reaction coordinate motion. Centroid path integral simulations are described to make quantum corrections to the classical potential of mean force. In this method, the nuclear quantum vibrational and tunneling contributions are not separable. An integrated centroid path integral-free energy perturbation and umbrella sampling (PI-FEP/UM) method along with a bisection sampling procedure was summarized, which provides an accurate, easily convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. In the ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT), these three aspects of quantum mechanical effects can be individually treated, providing useful insights into the mechanism of enzymatic reactions. These methods are illustrated by applications to a model process in the gas phase, the decarboxylation reaction of N-methyl picolinate in water, and the proton abstraction and reprotonation process catalyzed by alanine racemase. These examples show that the incorporation of quantum mechanical effects is essential for enzyme kinetics simulations.

  15. Performing process migration with allreduce operations

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Wallenfelt, Brian Paul

    2010-12-14

    Compute nodes perform allreduce operations that swap processes at nodes. A first allreduce operation generates a first result and uses a first process from a first compute node, a second process from a second compute node, and zeros from other compute nodes. The first compute node replaces the first process with the first result. A second allreduce operation generates a second result and uses the first result from the first compute node, the second process from the second compute node, and zeros from others. The second compute node replaces the second process with the second result, which is the first process. A third allreduce operation generates a third result and uses the first result from first compute node, the second result from the second compute node, and zeros from others. The first compute node replaces the first result with the third result, which is the second process.
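
    The abstract does not name the combining operation used by the allreduce; one reading that is consistent with "zeros from the other nodes" leaving the result unchanged is a bitwise-XOR reduction, which turns the three steps into the classic XOR swap. The sketch below (mpi4py, hypothetical integer payloads) illustrates that reading only, not the patent's exact mechanism:

        # Swapping two "processes" (here, integer payloads) between ranks 0 and 1 using
        # three allreduce operations with bitwise XOR; all other ranks contribute zeros.
        # Run with e.g.: mpirun -n 4 python allreduce_swap.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        payload = np.zeros(1, dtype=np.int64)       # zeros on uninvolved ranks
        if rank == 0:
            payload[0] = 111                        # "first process"
        elif rank == 1:
            payload[0] = 222                        # "second process"
        result = np.zeros(1, dtype=np.int64)

        comm.Allreduce(payload, result, op=MPI.BXOR)   # first result  = p1 ^ p2
        if rank == 0:
            payload[:] = result                        # rank 0 now holds p1 ^ p2

        comm.Allreduce(payload, result, op=MPI.BXOR)   # second result = (p1^p2) ^ p2 = p1
        if rank == 1:
            payload[:] = result                        # rank 1 now holds p1

        comm.Allreduce(payload, result, op=MPI.BXOR)   # third result  = (p1^p2) ^ p1 = p2
        if rank == 0:
            payload[:] = result                        # rank 0 now holds p2

        print(f"rank {rank}: payload = {payload[0]}")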

  16. Negotiation of Meaning in Synchronous Computer-Mediated Communication in Relation to Task Types

    ERIC Educational Resources Information Center

    Cho, Hye-jin

    2011-01-01

    The present study explored how negotiation of meaning occurred in a task-based synchronous computer-mediated communication (SCMC) environment among college English learners. Based on the theoretical framework of the interaction hypothesis and negotiation of meaning, four research questions arose: (1) how negotiation of meaning occurs in non-native…

  17. SU-F-I-45: An Automated Technique to Measure Image Contrast in Clinical CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, J; Abadi, E; Meng, B

    Purpose: To develop and validate an automated technique for measuring image contrast in chest computed tomography (CT) exams. Methods: An automated computer algorithm was developed to measure the distribution of Hounsfield units (HUs) inside four major organs: the lungs, liver, aorta, and bones. These organs were first segmented or identified using computer vision and image processing techniques. Regions of interest (ROIs) were automatically placed inside the lungs, liver, and aorta, and histograms of the HUs inside the ROIs were constructed. The mean and standard deviation of each histogram were computed for each CT dataset. Comparison of the mean and standard deviation of the HUs in the different organs provides different contrast values. The ROI for the bones is simply the segmentation mask of the bones. Since the histogram for bones does not follow a Gaussian distribution, the 25th and 75th percentiles were computed instead of the mean. The sensitivity and accuracy of the algorithm were investigated by comparing the automated measurements with manual measurements. Fifteen contrast-enhanced and fifteen non-contrast-enhanced chest CT clinical datasets were examined in the validation procedure. Results: The algorithm successfully measured the histograms of the four organs in both contrast-enhanced and non-contrast-enhanced chest CT exams. The automated measurements were in agreement with the manual measurements. The algorithm has sufficient sensitivity, as indicated by the near-unity slope of the automated versus manual measurement plots, and sufficient accuracy, as indicated by the high coefficients of determination (R2) ranging from 0.879 to 0.998. Conclusion: Patient-specific image contrast can be measured from clinical datasets. The algorithm can be run on both contrast-enhanced and non-enhanced clinical datasets, and can be applied to automatically assess the contrast characteristics of clinical chest CT images and quantify dependencies that may not be captured in phantom data.
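
    Once the organ masks exist, the measurement itself reduces to masked statistics over the HU volume. A short sketch (the segmentation step is assumed to have produced boolean masks; the example volume and thresholds are synthetic):

        # Mean/std of Hounsfield units inside organ masks, plus a simple contrast value.
        import numpy as np

        def roi_stats(ct_volume, masks):
            """ct_volume: 3-D array of HUs; masks: dict organ -> boolean array."""
            stats = {}
            for organ, mask in masks.items():
                hu = ct_volume[mask]
                if organ == "bones":                       # non-Gaussian: use percentiles
                    stats[organ] = (np.percentile(hu, 25), np.percentile(hu, 75))
                else:
                    stats[organ] = (hu.mean(), hu.std())
            return stats

        def contrast(stats, organ_a, organ_b):
            """Difference of mean HUs between two soft-tissue ROIs (e.g. aorta vs. liver)."""
            return stats[organ_a][0] - stats[organ_b][0]

        # Tiny synthetic example with two fake "organ" masks.
        ct = np.random.default_rng(5).normal(40.0, 10.0, (8, 64, 64))
        masks = {"liver": ct > 45.0, "aorta": ct > 55.0}
        s = roi_stats(ct, masks)
        print(s, "aorta-liver contrast:", round(contrast(s, "aorta", "liver"), 1))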

  18. Newberry Combined Gravity 2016

    DOE Data Explorer

    Kelly Rose

    2016-01-22

    Newberry combined gravity from Zonge Int'l, processed for the EGS stimulation project at well 55-29. Includes data from both the Davenport 2006 collection and the OSU/4D EGS monitoring 2012 collection. Locations are NAD83, UTM Zone 10 North, meters. Elevation is NAVD88. Gravity is in milligals. Free air and observed gravity are included, along with the simple Bouguer anomaly and the terrain-corrected Bouguer anomaly. SBA230 means the simple Bouguer anomaly computed at 2.30 g/cc; CBA230 means the terrain-corrected Bouguer anomaly at 2.30 g/cc. The following suite of densities is included (g/cc): 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.67.

  19. Intercomparison of hydrologic processes in global climate models

    NASA Technical Reports Server (NTRS)

    Lau, W. K.-M.; Sud, Y. C.; Kim, J.-H.

    1995-01-01

    In this report, we address the intercomparison of precipitation (P), evaporation (E), and surface hydrologic forcing (P-E) for 23 Atmospheric Model Intercomparison Project (AMIP) general circulation models (GCM's) including relevant observations, over a variety of spatial and temporal scales. The intercomparison includes global and hemispheric means, latitudinal profiles, selected area means for the tropics and extratropics, ocean and land, respectively. In addition, we have computed anomaly pattern correlations among models and observations for different seasons, harmonic analysis for annual and semiannual cycles, and rain-rate frequency distribution. We also compare the joint influence of temperature and precipitation on local climate using the Koeppen climate classification scheme.

  20. Comparison of experiments and computations for cold gas spraying through a mask. Part 2

    NASA Astrophysics Data System (ADS)

    Klinkov, S. V.; Kosarev, V. F.; Ryashin, N. S.

    2017-03-01

    This paper presents experimental and simulation results for cold spray coating deposition using a mask placed above a plane substrate at different distances. Velocities of aluminum (mean size 30 μm) and copper (mean size 60 μm) particles in the vicinity of the mask are determined. It was found that the particle velocities have an angular distribution in the flow with a representative standard deviation of 1.5-2 degrees. A model of coating formation behind the mask that accounts for this distribution was developed. The results of the model agree with the experimental data, confirming the importance of the particle angular distribution for the coating deposition process in the masked area.

  1. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means of delivering hypertext- and multimedia-based information across the whole Internet. Many applications have been developed on such http servers. At Cardiff we have developed an http-based hypertext multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information ranging from teaching modules, on-line documentation and timetables for departmental activities to more light-hearted hobby interests. One important and novel development of the server has been the addition of courseware facilities. These range from on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefited, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper addresses the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext-based system, and practical experiences of using the packages in a class environment. It also considers how best to provide information in such a hypertext-based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed provides a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper also details many possible future developments. One of the key points raised is that Mosaic's hypertext language (html) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed, which provides a powerful suite of utilities that can be exploited to develop many packages.

  2. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products is that it neglects the temporal correlation of GNSS observations. By analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge on MATLAB® Central (an open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. Firstly, AR, MA, and ARMA models are computed up to a user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error, which characterises the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes) and Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of the automated ARMA identification using the toolbox ARMASA. For this purpose, a representative data base is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of the underlying true processes increase with increasing sample size and decrease with rising model complexity. For large sample sizes, the true underlying processes are correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to the corresponding asymptotic values than those of high-order ARMA processes.
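
    An analogous automated order selection (not a port of ARMASA) can be sketched in Python with statsmodels, scoring candidate ARMA(p, q) models by an information criterion and keeping the best one; the residual series below is synthetic:

        # Automatic ARMA(p, q) order selection by information criterion, analogous in
        # spirit to (but not a reimplementation of) the ARMASA identification procedure.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def identify_arma(residuals, max_p=4, max_q=4):
            best = (np.inf, None, None)
            for p in range(max_p + 1):
                for q in range(max_q + 1):
                    if p == q == 0:
                        continue
                    try:
                        fit = ARIMA(residuals, order=(p, 0, q), trend="c").fit()
                    except Exception:
                        continue                      # skip non-converging candidates
                    if fit.aic < best[0]:
                        best = (fit.aic, (p, q), fit)
            return best                                # (AIC, (p, q), fitted model)

        # Example: synthetic first-order AR series standing in for GPS residuals.
        rng = np.random.default_rng(2)
        x = np.zeros(2000)
        for t in range(1, x.size):
            x[t] = 0.7 * x[t - 1] + rng.normal()
        aic, order, model = identify_arma(x)
        print("selected ARMA order:", order, "AIC:", round(aic, 1))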

  3. Spiking Neural P Systems With Rules on Synapses Working in Maximum Spiking Strategy.

    PubMed

    Tao Song; Linqiang Pan

    2015-06-01

    Spiking neural P systems (SN P systems, for short) are a class of parallel and distributed neural-like computation models inspired by the way neurons process information and communicate with each other by means of impulses or spikes. In this work, we introduce a new variant of SN P systems, called SN P systems with rules on synapses working in maximum spiking strategy, and investigate the computational power of these systems as both number and vector generators. Specifically, we prove that i) if no limit is imposed on the number of spikes in any neuron during any computation, such systems can generate the sets of Turing-computable natural numbers and the sets of vectors of positive integers computed by k-output register machines; ii) if an upper bound is imposed on the number of spikes in each neuron during any computation, such systems characterize semi-linear sets of natural numbers as number-generating devices; as vector-generating devices, such systems can only characterize the family of sets of vectors computed by sequential monotonic counter machines, which is strictly included in the family of semi-linear sets of vectors. This gives a positive answer to the problem formulated in Song et al., Theor. Comput. Sci., vol. 529, pp. 82-95, 2014.

  4. Vehicle longitudinal velocity estimation during the braking process using unknown input Kalman filter

    NASA Astrophysics Data System (ADS)

    Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad

    2015-10-01

    In this paper, the vehicle longitudinal velocity during the braking process is estimated by measuring the wheel speeds. A new algorithm based on the unknown input Kalman filter is developed to estimate the vehicle longitudinal velocity with minimum mean square error and without using the value of the braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. The effectiveness of the method is shown by designing a real experiment and comparing the estimation result with the actual longitudinal velocity computed from a three-axis accelerometer output.
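
    A much-simplified sketch of the idea follows: a standard scalar Kalman filter (not the paper's unknown-input filter) that absorbs the unknown braking deceleration into the process noise and fuses noisy wheel-speed measurements; all numbers are hypothetical:

        # Simplified Kalman filter estimating longitudinal velocity from noisy wheel-speed
        # measurements during braking (generic sketch, not the unknown-input formulation).
        import numpy as np

        def estimate_velocity(wheel_speed_meas, dt=0.01, q_acc=4.0, r_meas=0.25):
            v_est, p_est = wheel_speed_meas[0], 1.0          # initial state and covariance
            estimates = []
            for z in wheel_speed_meas:
                # Predict: constant-velocity model, unknown deceleration acts as process noise.
                p_pred = p_est + q_acc * dt ** 2
                # Update with the wheel-speed measurement.
                k = p_pred / (p_pred + r_meas)               # Kalman gain
                v_est = v_est + k * (z - v_est)
                p_est = (1.0 - k) * p_pred
                estimates.append(v_est)
            return np.array(estimates)

        # Hypothetical braking manoeuvre: 20 m/s decelerating at 5 m/s^2, noisy wheel speeds.
        t = np.arange(0.0, 2.0, 0.01)
        true_v = 20.0 - 5.0 * t
        meas = true_v + np.random.default_rng(3).normal(0.0, 0.5, t.size)
        est = estimate_velocity(meas)
        print(f"RMS error: {np.sqrt(np.mean((est - true_v) ** 2)):.3f} m/s")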

  5. Chemorheology of reactive systems: Finite element analysis

    NASA Technical Reports Server (NTRS)

    Douglas, C.; Roylance, D.

    1982-01-01

    The equations that govern the nonisothermal flow of reactive fluids are outlined, and the means by which finite element analysis is used to solve these equations for the sort of arbitrary boundary conditions encountered in industrial practice are described. The performance of the computer code is illustrated by several trial problems, selected more for their value in providing insight into polymer processing flows than as practical production problems. Although a good deal remains to be learned about the performance and proper use of this numerical technique, it is undeniably useful in providing a better understanding of today's complicated polymer processing problems.

  6. Automated Power Systems Management (APSM)

    NASA Technical Reports Server (NTRS)

    Bridgeforth, A. O.

    1981-01-01

    A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.

  7. Ultrafast electron diffraction pattern simulations using GPU technology. Applications to lattice vibrations.

    PubMed

    Eggeman, A S; London, A; Midgley, P A

    2013-11-01

    Graphical processing units (GPUs) offer a cost-effective and powerful means to enhance the processing power of computers. Here we show how GPUs can greatly increase the speed of electron diffraction pattern simulations by the implementation of a novel method to generate the phase grating used in multislice calculations. The increase in speed is especially apparent when using large supercell arrays and we illustrate the benefits of fast encoding the transmission function representing the atomic potentials through the simulation of thermal diffuse scattering in silicon brought about by specific vibrational modes. © 2013 Elsevier B.V. All rights reserved.

  8. Metals processing control by counting molten metal droplets

    DOEpatents

    Schlienger, Eric; Robertson, Joanna M.; Melgaard, David; Shelmidine, Gregory J.; Van Den Avyle, James A.

    2000-01-01

    Apparatus and method for controlling metals processing (e.g., ESR) by melting a metal ingot and counting molten metal droplets during melting. An approximate amount of metal in each droplet is determined, and a melt rate is computed therefrom. The impedance of the melting circuit is monitored, for example by computing the root-mean-square voltage and current of the circuit and dividing the RMS voltage by the RMS current. The impedance signal is analysed for a trace characteristic of the formation of a molten metal droplet, for example by examining the slew rate, curvature, or a higher moment.
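
    The impedance-monitoring step amounts to windowed RMS computation; a short sketch on synthetic voltage and current samples (the droplet-signature analysis itself is not reproduced):

        # Root-mean-square voltage and current over roughly one supply cycle, and the
        # resulting impedance trace that would be analysed for droplet signatures.
        import numpy as np

        def rms(x, window):
            # RMS over consecutive non-overlapping windows.
            n = (len(x) // window) * window
            return np.sqrt(np.mean(x[:n].reshape(-1, window) ** 2, axis=1))

        fs, f_line = 10_000, 60                        # sample rate (Hz), supply frequency (Hz)
        t = np.arange(0, 1.0, 1.0 / fs)
        voltage = 30.0 * np.sin(2 * np.pi * f_line * t)          # synthetic electrode voltage (V)
        current = 2500.0 * np.sin(2 * np.pi * f_line * t - 0.1)  # synthetic melting current (A)

        window = fs // f_line                          # about one supply cycle per window
        impedance = rms(voltage, window) / rms(current, window)  # ohms
        print(impedance[:5])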

  9. 14 CFR 234.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...: Cancelled flight means a flight operation that was not operated, but was listed in a carrier's computer... dropped from a carrier's computer reservation system more than seven calendar days before its scheduled... reporting to computer reservations system vendors, flight also means one-stop or multi-stop single plane...

  10. 14 CFR 234.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...: Cancelled flight means a flight operation that was not operated, but was listed in a carrier's computer... dropped from a carrier's computer reservation system more than seven calendar days before its scheduled... reporting to computer reservations system vendors, flight also means one-stop or multi-stop single plane...

  11. 14 CFR 234.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...: Cancelled flight means a flight operation that was not operated, but was listed in a carrier's computer... dropped from a carrier's computer reservation system more than seven calendar days before its scheduled... reporting to computer reservations system vendors, flight also means one-stop or multi-stop single plane...

  12. 14 CFR 234.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...: Cancelled flight means a flight operation that was not operated, but was listed in a carrier's computer... dropped from a carrier's computer reservation system more than seven calendar days before its scheduled... reporting to computer reservations system vendors, flight also means one-stop or multi-stop single plane...

  13. 14 CFR 234.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...: Cancelled flight means a flight operation that was not operated, but was listed in a carrier's computer... dropped from a carrier's computer reservation system more than seven calendar days before its scheduled... reporting to computer reservations system vendors, flight also means one-stop or multi-stop single plane...

  14. Dynamically balanced absolute sea level of the global ocean derived from near-surface velocity observations

    NASA Astrophysics Data System (ADS)

    Niiler, Pearn P.; Maximenko, Nikolai A.; McWilliams, James C.

    2003-11-01

    The 1992-2002 time-mean absolute sea level distribution of the global ocean is computed for the first time from observations of near-surface velocity. For this computation, we use the near-surface horizontal momentum balance. The velocity observed by drifters is used to compute the Coriolis force and the force due to acceleration of water parcels. The anomaly of horizontal pressure gradient is derived from satellite altimetry and corrects the temporal bias in drifter data distribution. NCEP reanalysis winds are used to compute the force due to Ekman currents. The mean sea level gradient force, which closes the momentum balance, is integrated for mean sea level. We find that our computation agrees, within uncertainties, with the sea level computed from the geostrophic, hydrostatic momentum balance using historical mean density, except in the Antarctic Circumpolar Current. A consistent horizontally and vertically dynamically balanced, near-surface, global pressure field has now been derived from observations.

  15. Sense-making for intelligence analysis on social media data

    NASA Astrophysics Data System (ADS)

    Pritzkau, Albert

    2016-05-01

    Social networks, and in particular online social networks as a subset, enable the analysis of social relationships that are represented by interaction, collaboration, or other sorts of influence between people. Any set of people and their internal social relationships can be modelled as a general social graph. These relationships are formed by exchanging emails, making phone calls, or carrying out a range of other activities that build up the network. This paper presents an overview of current approaches to utilizing social media as a ubiquitous sensor network in the context of national and global security. Exploitation of social media is usually an interdisciplinary endeavour, in which the relevant technologies and methods are identified and linked in order ultimately to demonstrate selected applications. Effective and efficient intelligence is usually accomplished through a combined human and computer effort. Indeed, the intelligence process depends heavily on combining a human's flexibility, creativity, and cognitive ability with the bandwidth and processing power of today's computers. To improve the usability and accuracy of intelligence analysis we have to rely on data-processing tools at the level of natural language. In particular, the collection and transformation of unstructured data into actionable, structured data requires scalable computational algorithms ranging from Artificial Intelligence, via Machine Learning, to Natural Language Processing (NLP). To support intelligence analysis on social media data, social media analytics is concerned with developing and evaluating computational tools and frameworks to collect, monitor, analyze, summarize, and visualize social media data. Analytics methods are employed to extract significant patterns that might not otherwise be obvious. As a result, different data representations rendering distinct aspects of content and interactions serve as a means of adapting the focus of the intelligence analysis to specific information requests.

  16. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    The permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, the use of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used, coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.

  17. Computer-aided analysis of cutting processes for brittle materials

    NASA Astrophysics Data System (ADS)

    Ogorodnikov, A. I.; Tikhonov, I. N.

    2017-12-01

    This paper is focused on the 3D computer simulation of cutting processes for brittle materials and silicon wafers. Computer-aided analysis of wafer scribing and dicing is carried out with the ANSYS CAE (computer-aided engineering) software, and a parametric model of the processes is created by means of the internal ANSYS APDL programming language. Different tool tip geometries, such as a four-sided pyramid with an included angle of 120° and a tool inclination of 15° to the normal axis, are analyzed to obtain the internal stresses. The quality of the workpieces after cutting is studied by optical microscopy to verify the FE (finite-element) model. The disruption of the material structure during scribing occurs near the scratch and propagates into the wafer or over its surface within a short range. The deformation area along the scratch looks like a ragged band, but the width of the stressed zone is rather small. The theory of cutting brittle semiconductor and optical materials is developed on the basis of the advanced theory of metal turning. The decrease in stress intensity along the normal from the tip point to the scribe line can be predicted using the developed theory together with the verified FE model. The crystal quality and the dimensions of defects are determined by the mechanics of scratching, which depends on the shape of the diamond tip, the scratching direction, the velocity of the cutting tool, and the applied force loads. The disruption is a rate-sensitive process and depends on the cutting thickness. The application of numerical techniques, such as FE analysis, to cutting problems enhances understanding and promotes the further development of existing machining technologies.

  18. RNA folding kinetics using Monte Carlo and Gillespie algorithms.

    PubMed

    Clote, Peter; Bayegan, Amir H

    2018-04-01

    RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that, asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics as computed by the Monte Carlo algorithm is not equal to the folding kinetics as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
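
    The step/time relation between the two samplers can be illustrated on a toy regular network: a Metropolis-style Monte Carlo walk (uniform neighbour choice plus acceptance) versus the Gillespie algorithm with the same rates and exponential waiting times. The ring "landscape" below is invented for illustration; on a regular graph the MFPT in Monte Carlo steps should roughly equal the Gillespie MFPT times the degree:

        # Mean first passage time (MFPT) on a ring of states, estimated with a Metropolis
        # Monte Carlo walk and with the Gillespie algorithm using the same Metropolis rates.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 12                                           # states on a ring (regular graph, degree 2)
        energy = rng.uniform(0.0, 2.0, n)                # arbitrary energies in units of kT
        target = n // 2
        neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

        def rate(i, j):
            return min(1.0, np.exp(energy[i] - energy[j]))   # Metropolis rate

        def mfpt_monte_carlo(trials=2000):
            steps = []
            for _ in range(trials):
                s, k = 0, 0
                while s != target:
                    cand = neighbors[s][rng.integers(2)]     # uniform neighbour choice
                    if rng.random() < rate(s, cand):
                        s = cand
                    k += 1
                steps.append(k)
            return np.mean(steps)

        def mfpt_gillespie(trials=2000):
            times = []
            for _ in range(trials):
                s, t = 0, 0.0
                while s != target:
                    rates = np.array([rate(s, j) for j in neighbors[s]])
                    total = rates.sum()
                    t += rng.exponential(1.0 / total)        # exponential waiting time
                    s = neighbors[s][rng.choice(2, p=rates / total)]
                times.append(t)
            return np.mean(times)

        degree = 2.0                                      # every node of the ring has degree 2
        print("MC steps:", round(mfpt_monte_carlo(), 1),
              "| Gillespie time x degree:", round(mfpt_gillespie() * degree, 1))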

  19. Minimal Traffic Model with Safe Driving Conditions

    NASA Astrophysics Data System (ADS)

    Terborg, Heinrich; Pérez, Luis A.

    We have developed a new computational traffic model in which security aspects are fundamental. In this paper we show that this model reproduces many known empirical aspects of vehicular traffic such as the three states of traffic flow and the backward speed of the downstream front of a traffic jam (C), without the aid of adjustable parameters. The model is studied for both open and closed single lane traffic systems. Also, we were able to analytically compute the value of C as 15.37 km/h from a relation that only includes the human reaction time, the mean vehicle length and the effective friction coefficient during the braking process of a vehicle as its main components.

  20. Multitasking a three-dimensional Navier-Stokes algorithm on the Cray-2

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    A three-dimensional computational aerodynamics algorithm has been multitasked for efficient parallel execution on the Cray-2. It provides a means for examining the multitasking performance of a complete CFD application code. An embedded zonal multigrid scheme is used to solve the Reynolds-averaged Navier-Stokes equations for an internal flow model problem. The explicit nature of each component of the method allows a spatial partitioning of the computational domain to achieve a well-balanced task load for MIMD computers with vector-processing capability. Experiments have been conducted with both two- and three-dimensional multitasked cases. The best speedup attained by an individual task group was 3.54 on four processors of the Cray-2, while the entire solver yielded a speedup of 2.67 on four processors for the three-dimensional case. The multiprocessing efficiency of various types of computational tasks is examined, performance on two Cray-2s with different memory access speeds is compared, and extrapolation to larger problems is discussed.
