Sample records for subsequent computer processing

  1. COMPUTATIONAL TOXICOLOGY - OBJECTIVE 2: DEVELOPING APPROACHES FOR PRIORITIZING CHEMICALS FOR SUBSEQUENT SCREENING AND TESTING

    EPA Science Inventory

    One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...

  2. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
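
    The wait/awaken behavior claimed here maps naturally onto a condition-variable idiom. The sketch below is a hypothetical Python illustration, not the actual PAMI API: an advance function checks a per-context event queue, places its thread into a wait state when nothing is actionable, and is awakened when a subsequent event is posted.

    ```python
    import threading
    from collections import deque

    class Context:
        """Hypothetical stand-in for a PAMI communication context."""

        def __init__(self):
            self._events = deque()
            self._cond = threading.Condition()

        def post_event(self, event):
            # A subsequent data communications event awakens any waiting advance thread.
            with self._cond:
                self._events.append(event)
                self._cond.notify()

        def advance(self):
            # Determine whether any actionable event is pending; if not, place
            # this thread of execution into a wait state until one arrives.
            with self._cond:
                while not self._events:
                    self._cond.wait()
                event = self._events.popleft()
            # Process the subsequent event now pending for the context.
            print("processing", event)

    ctx = Context()
    threading.Thread(target=ctx.advance).start()
    ctx.post_event("recv-complete")
    ```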

  3. A computer-controlled scintiscanning system and associated computer graphic techniques for study of regional distribution of blood flow.

    NASA Technical Reports Server (NTRS)

    Coulam, C. M.; Dunnette, W. H.; Wood, E. H.

    1970-01-01

    Two methods whereby a digital computer may be used to regulate a scintiscanning process are discussed from the viewpoint of computer input-output software. The computer's function, in this case, is to govern the data acquisition and storage, and to display the results to the investigator in a meaningful manner, both during and subsequent to the scanning process. Several methods (such as three-dimensional maps, contour plots, and wall-reflection maps) have been developed by means of which the computer can graphically display the data on-line, for real-time monitoring purposes, during the scanning procedure and subsequently for detailed analysis of the data obtained. A computer-governed method for converting scintiscan data recorded over the dorsal or ventral surfaces of the thorax into fractions of pulmonary blood flow traversing the right and left lungs is presented.

  4. Processing communications events in parallel active messaging interface by awakening thread from wait state

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.

  5. Factors Influencing Cloud-Computing Technology Adoption in Developing Countries

    ERIC Educational Resources Information Center

    Hailu, Alemayehu

    2012-01-01

    Adopting new technology involves complications in both the selection criteria and the decision-making process. Although new technology such as cloud computing provides great benefits, especially to developing countries, it has challenges that may complicate the selection decision and subsequent adoption process. This study…

  6. Processing Device for High-Speed Execution of an Xrisc Computer Program

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and control execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provide the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values is loaded into the register and the set of output values is unloaded from the register in parallel with processing of the current calculation set.
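
    The parallel load/unload described above is essentially double buffering: while the current calculation set is computed, the next input set is loaded and the previous output set is drained. A minimal software sketch of that pipelining idea, using queues as a stand-in for the register module (the names and placeholder computation are illustrative, not from the patent):

    ```python
    import queue
    import threading

    def loader(inputs, in_q):
        # External interface: stream input value sets toward the register (a queue here).
        for batch in inputs:
            in_q.put(batch)
        in_q.put(None)  # end of stream

    def computer(in_q, out_q):
        # Computation interface: process the current calculation set while the
        # loader and unloader run concurrently on the adjacent sets.
        while (batch := in_q.get()) is not None:
            out_q.put([x * 2 for x in batch])  # placeholder computation
        out_q.put(None)

    def unloader(out_q):
        # External interface: drain the output values of the previous calculation set.
        while (result := out_q.get()) is not None:
            print("output:", result)

    in_q, out_q = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
    threads = [
        threading.Thread(target=loader, args=([[1, 2], [3, 4], [5, 6]], in_q)),
        threading.Thread(target=computer, args=(in_q, out_q)),
        threading.Thread(target=unloader, args=(out_q,)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```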

  7. Low-cost digital image processing at the University of Oklahoma

    NASA Technical Reports Server (NTRS)

    Harrington, J. A., Jr.

    1981-01-01

    Computer assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house generated preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and image analysis using either of the two approaches for low-cost LANDSAT data processing are described.

  8. A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription

    ERIC Educational Resources Information Center

    Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.

    2012-01-01

    The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…

  9. Critical Success Factors for E-Learning and Institutional Change--Some Organisational Perspectives on Campus-Wide E-Learning

    ERIC Educational Resources Information Center

    White, Su

    2007-01-01

    Computer technology has been harnessed for education in UK universities ever since the first computers for research were installed at 10 selected sites in 1957. Subsequently, real costs have fallen dramatically. Processing power has increased; network and communications infrastructure has proliferated, and information has become unimaginably…

  10. Bilateral Malar Reconstruction Using Patient-Specific Polyether Ether Ketone Implants in Treacher-Collins Syndrome Patients With Absent Zygomas.

    PubMed

    Sainsbury, David C G; George, Alan; Forrest, Christopher R; Phillips, John H

    2017-03-01

    The authors performed bilateral malar reconstruction using polyether ether ketone implants in 3 patients with Treacher-Collins syndrome with absent, as opposed to hypoplastic, zygomata. These patient-specific implants were fabricated using computer-aided design software reformatted from three-dimensional bony preoperative computed tomography images. The first time the authors performed this procedure, the implant compressed the globe, resulting in temporary anisocoria that was quickly recognized intraoperatively. The implant was immediately removed and the patient made a full recovery with no ocular disturbance. The computer-aided design and manufacturing process was adjusted to include periorbital soft-tissue boundaries to aid in contouring the new implants. The same patient, and 2 further patients, subsequently underwent malar reconstruction using this soft tissue periorbital boundary fabrication process with an additional 2 mm relief removed from the implant's orbital surface. These subsequent procedures were performed without complication and with pleasing aesthetic results. The authors describe their experience and the salutary lessons learnt.

  11. A service based adaptive U-learning system using UX.

    PubMed

    Jeong, Hwa-Young; Yi, Gangman

    2014-01-01

    In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units, using services in a ubiquitous computing environment. We also investigate functions that support users' tailored materials according to their learning style. That is, we analyzed the users' data and characteristics in accordance with their user experience. We subsequently applied the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques.

  12. A Service Based Adaptive U-Learning System Using UX

    PubMed Central

    Jeong, Hwa-Young

    2014-01-01

    In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units, using services in a ubiquitous computing environment. We also investigate functions that support users' tailored materials according to their learning style. That is, we analyzed the users' data and characteristics in accordance with their user experience. We subsequently applied the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques. PMID:25147832

  13. Comparative Modeling of Proteins: A Method for Engaging Students' Interest in Bioinformatics Tools

    ERIC Educational Resources Information Center

    Badotti, Fernanda; Barbosa, Alan Sales; Reis, André Luiz Martins; do Valle, Ítalo Faria; Ambrósio, Lara; Bitar, Mainá

    2014-01-01

    The huge increase in data being produced in the genomic era has produced a need to incorporate computers into the research process. Sequence generation, its subsequent storage, interpretation, and analysis are now entirely computer-dependent tasks. Universities from all over the world have been challenged to seek a way of encouraging students to…

  14. Condensation of wet vapors in turbines

    NASA Technical Reports Server (NTRS)

    Kothman, R. E.

    1970-01-01

    Computer program predicts condensation point in wet vapor turbines and analyzes subsequent nucleation and growth processes to determine both moisture content and drop size and number distribution as a function of position. Program includes effects of molecular association on condensation and flow processes and handles both subsonic and supersonic flows.

  15. Scheme for Entering Binary Data Into a Quantum Computer

    NASA Technical Reports Server (NTRS)

    Williams, Colin

    2005-01-01

    A quantum algorithm provides for the encoding of an exponentially large number of classical data bits by use of a smaller (polynomially large) number of quantum bits (qubits). The development of this algorithm was prompted by the need, heretofore not satisfied, for a means of entering real-world binary data into a quantum computer. The data format provided by this algorithm is suitable for subsequent ultrafast quantum processing of the entered data. Potential applications lie in disciplines (e.g., genomics) in which one needs to search for matches between parts of very long sequences of data. For example, the algorithm could be used to encode the N-bit-long human genome in only log2N qubits. The resulting log2N-qubit state could then be used for subsequent quantum data processing - for example, to perform rapid comparisons of sequences.
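
    The abstract does not give the encoding itself, but the log2N scaling is characteristic of amplitude-style encodings, in which N classical values become the normalized amplitudes of a log2N-qubit state. The following NumPy snippet illustrates that general idea only; it is not the specific NASA scheme.

    ```python
    import numpy as np

    def amplitude_encode(bits):
        """Map N classical bits onto the amplitudes of a log2(N)-qubit state vector.

        Generic illustration of packing exponentially many classical values into
        polynomially many qubits; not the encoding described in the report.
        """
        n = len(bits)
        assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
        amplitudes = np.asarray(bits, dtype=float)
        norm = np.linalg.norm(amplitudes)
        if norm == 0:
            raise ValueError("an all-zero bit string cannot be normalized")
        return amplitudes / norm  # state vector over log2(n) qubits

    state = amplitude_encode([1, 0, 1, 1, 0, 0, 1, 0])  # 8 bits -> 3 qubits
    print(len(state), state)
    ```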

  16. Measuring the impact of computer resource quality on the software development process and product

    NASA Technical Reports Server (NTRS)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  17. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.

  18. When static media promote active learning: annotated illustrations versus narrated animations in multimedia instruction.

    PubMed

    Mayer, Richard E; Hegarty, Mary; Mayer, Sarah; Campbell, Julie

    2005-12-01

    In 4 experiments, students received a lesson consisting of computer-based animation and narration or a lesson consisting of paper-based static diagrams and text. The lessons used the same words and graphics in the paper-based and computer-based versions to explain the process of lightning formation (Experiment 1), how a toilet tank works (Experiment 2), how ocean waves work (Experiment 3), and how a car's braking system works (Experiment 4). On subsequent retention and transfer tests, the paper group performed significantly better than the computer group on 4 of 8 comparisons, and there was no significant difference on the rest. These results support the static media hypothesis, in which static illustrations with printed text reduce extraneous processing and promote germane processing as compared with narrated animations.

  19. Analysis of Compton continuum measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gold, R.; Olson, I. K.

    1970-01-01

    Five computer programs: COMPSCAT, FEND, GABCO, DOSE, and COMPLOT, have been developed and used for the analysis and subsequent reduction of measured energy distributions of Compton recoil electrons to continuous gamma spectra. In addition to detailed descriptions of these computer programs, the relationship amongst these codes is stressed. The manner in which these programs function is illustrated by tracing a sample measurement through a complete cycle of the data-reduction process.

  20. Method for routing events from key strokes in a multi-processing computer systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, D.A.; Rustici, E.; Carter, K.H.

    1990-01-23

    The patent describes a method of routing user input in a computer system which concurrently runs a plurality of processes. It comprises: generating keycodes representative of keys typed by a user; distinguishing generated keycodes by looking up each keycode in a routing table which assigns each possible keycode to an individual assigned process of the plurality of processes, one of which processes being a supervisory process; then, sending each keycode to its assigned process until a keycode assigned to the supervisory process is received; sending keycodes received subsequent to the keycode assigned to the supervisory process to a buffer; next, providing additional keycodes to the supervisory process from the buffer until the supervisory process has completed operation; and sending keycodes stored in the buffer to processes assigned therewith after the supervisory process has completed operation.
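
    A minimal sketch of the routing logic in Python, assuming a dictionary as the routing table and a deque as the buffer. The names are hypothetical, and buffered keycodes are simply replayed to their assigned processes after supervision ends, which simplifies the claim's two-phase drain.

    ```python
    from collections import deque

    SUPERVISOR = "supervisor"

    def route_keycodes(keycodes, routing_table, dispatch):
        """Send each keycode to its assigned process, buffering keycodes that
        arrive while the supervisory process is handling its own keycode."""
        buffer = deque()
        supervising = False
        for code in keycodes:
            target = routing_table[code]
            if supervising:
                buffer.append(code)          # hold keycodes until supervision ends
            elif target == SUPERVISOR:
                supervising = True
                dispatch(SUPERVISOR, code)   # supervisory process starts its operation
            else:
                dispatch(target, code)
        # After the supervisory process has completed, drain the buffer.
        while buffer:
            code = buffer.popleft()
            dispatch(routing_table[code], code)

    routing = {"a": "editor", "b": "shell", "F1": SUPERVISOR}
    route_keycodes(["a", "F1", "b", "a"], routing, lambda proc, key: print(proc, "<-", key))
    ```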

  1. ERIC Processing Manual. Rules and Guidelines for the Acquisition, Selection, and Technical Processing of Documents and Journal Articles by the Various Components of the ERIC Network.

    ERIC Educational Resources Information Center

    Brandhorst, Ted, Ed.; And Others

    This loose-leaf manual provides the detailed rules, guidelines, and examples to be used by the components of the Educational Resources Information Center (ERIC) Network in acquiring and selecting documents and in processing them (i.e., cataloging, indexing, abstracting) for input to the ERIC computer system and subsequent announcement in…

  2. Refractory pulse counting processes in stochastic neural computers.

    PubMed

    McNeill, Dean K; Card, Howard C

    2005-03-01

    This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise in either the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
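
    A quick way to see the effect is to simulate a Bernoulli pulse stream with a non-paralyzable dead time: the recorded count falls as the Bernoulli probability grows relative to the inverse of the dead time. This sketch only illustrates the phenomenon; it is not the authors' analysis.

    ```python
    import random

    def count_with_dead_time(p, n_slots, dead_time, seed=0):
        """Count Bernoulli(p) pulses when each recorded pulse blinds the detector
        for `dead_time` subsequent time slots (non-paralyzable dead time)."""
        rng = random.Random(seed)
        recorded, blind_until = 0, -1
        for t in range(n_slots):
            if rng.random() < p and t > blind_until:
                recorded += 1
                blind_until = t + dead_time
        return recorded

    for dead in (0, 2, 5):
        print(dead, count_with_dead_time(p=0.3, n_slots=10_000, dead_time=dead))
    ```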

  3. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
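
    The verification step amounts to hashing the same portion of the database at two moments in time and comparing the digests. A minimal sketch with Python's standard hashlib; SHA-256 is an illustrative choice, since the patent does not name a hash algorithm.

    ```python
    import hashlib

    def digest(portion: bytes) -> str:
        # SHA-256 stands in for whatever hash the processing circuitry provides.
        return hashlib.sha256(portion).hexdigest()

    def verify_unchanged(snapshot_initial: bytes, snapshot_later: bytes) -> bool:
        """Compare the hash taken at an initial moment with one taken subsequently."""
        return digest(snapshot_initial) == digest(snapshot_later)

    before = b"record-1|record-2|record-3"
    after = b"record-1|record-2|record-3"   # same portion of the dynamic database, later
    print(verify_unchanged(before, after))  # True -> portion unchanged
    ```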

  4. Computational Foundations of Natural Intelligence

    PubMed Central

    van Gerven, Marcel

    2017-01-01

    New developments in AI and neuroscience are revitalizing the quest to understand natural intelligence, offering insight into how to equip machines with human-like capabilities. This paper reviews some of the computational principles relevant for understanding natural intelligence and, ultimately, achieving strong AI. After reviewing basic principles, a variety of computational modeling approaches is discussed. Subsequently, I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. This paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence. PMID:29375355

  5. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and...was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion...the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a

  6. The detection and analysis of point processes in biological signals

    NASA Technical Reports Server (NTRS)

    Anderson, D. J.; Correia, M. J.

    1977-01-01

    A pragmatic approach to the detection and analysis of discrete events in biomedical signals is taken. Examples from both clinical and basic research are provided. Introductory sections discuss not only discrete events which are easily extracted from recordings by conventional threshold detectors but also events embedded in other information carrying signals. The primary considerations are factors governing event-time resolution and the effects limits to this resolution have on the subsequent analysis of the underlying process. The analysis portion describes tests for qualifying the records as stationary point processes and procedures for providing meaningful information about the biological signals under investigation. All of these procedures are designed to be implemented on laboratory computers of modest computational capacity.

  7. Focal-Plane Sensing-Processing: A Power-Efficient Approach for the Implementation of Privacy-Aware Networked Visual Sensors

    PubMed Central

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-01-01

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849

  8. Focal-plane sensing-processing: a power-efficient approach for the implementation of privacy-aware networked visual sensors.

    PubMed

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-08-19

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects.

  9. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
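
    The efficiency claim rests on the two-stage structure: a fast matcher proposes a short list of likely candidates, and only those are re-scored by the slower, more exact matcher. A schematic sketch of that division of labor with toy scoring functions (the maximally stable volume, bin-based polling, and SIFT details are beyond this outline):

    ```python
    def two_stage_match(query, database, fast_score, slow_score, keep=10):
        """Stage 1: rank every entry with a cheap score and keep a short list.
        Stage 2: re-score only the short list with the expensive matcher."""
        candidates = sorted(database, key=lambda item: fast_score(query, item),
                            reverse=True)[:keep]
        return max(candidates, key=lambda item: slow_score(query, item))

    # Toy scores: fast = shared-character count, slow = count of position-wise matches.
    fast = lambda q, d: len(set(q) & set(d))
    slow = lambda q, d: sum(a == b for a, b in zip(q, d))
    library = ["abcdef", "abcxyz", "zzzzzz", "abcdzz"]
    print(two_stage_match("abcdef", library, fast, slow))  # 'abcdef'
    ```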

  10. Adaptive control of anaerobic digestion processes-a pilot-scale application.

    PubMed

    Renard, P; Dochain, D; Bastin, G; Naveau, H; Nyns, E J

    1988-03-01

    A simple adaptive control algorithm, for which theoretical stability and convergence properties had been previously demonstrated, has been successfully implemented on a biomethanation pilot reactor. The methane digester, operated in the CSTR mode was submitted to a shock load, and successfully computer controlled during the subsequent transitory state.

  11. Subscriber Response System. Progress Report.

    ERIC Educational Resources Information Center

    Callais, Richard T.

    Results of preliminary tests made prior and subsequent to the installation of a two-way interactive communication system which involves a computer complex termed the Local Processing Center and subscriber terminals located in the home or business location are reported. This first phase of the overall test plan includes tests made at Theta-Com…

  12. Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes

    NASA Astrophysics Data System (ADS)

    Bachlechner, Martina E.

    2009-03-01

    Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.

  13. Computational process to study the wave propagation in a non-linear medium by quasi-linearization

    NASA Astrophysics Data System (ADS)

    Sharath Babu, K.; Venkata Brammam, J.; Baby Rani, CH

    2018-03-01

    When two objects having distinct velocities come into contact, an impact can occur. In an impact study, i.e., of the displacement of the objects after the impact, the impact force is a function of time t and behaves similarly to a compression force. The impact duration is very short, so impulses and consequently high stresses are generated. In this work we examine the wave propagation inside the object after collision and measure the object's non-linear behavior in the one-dimensional case. Wave transmission is studied by means of the material's acoustic parameter values. The objective of this paper is to present a computational study of propagating pulsation and harmonic waves in nonlinear media using quasi-linearization followed by a central difference scheme. This study focuses on longitudinal, one-dimensional wave propagation. In the finite difference scheme, the non-linear system is reduced to a linear system by applying the quasi-linearization method. The computed results exhibit good agreement for the selected non-linear wave propagation cases.
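
    For the linearized step, the standard tool is the explicit central-difference update for the one-dimensional wave equation u_tt = c^2 u_xx. The sketch below shows only that generic linear scheme; the quasi-linearization that reduces the nonlinear system to this form is specific to the paper and is not reproduced here.

    ```python
    import numpy as np

    def wave_step(u_prev, u_curr, c, dx, dt):
        """One explicit central-difference step for u_tt = c^2 u_xx
        (fixed ends; the CFL condition c*dt/dx <= 1 is assumed to hold)."""
        r2 = (c * dt / dx) ** 2
        u_next = np.zeros_like(u_curr)
        u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                        + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
        return u_next

    x = np.linspace(0.0, 1.0, 101)
    u_prev = np.exp(-200 * (x - 0.5) ** 2)   # initial displacement pulse
    u_curr = u_prev.copy()                   # start from rest
    for _ in range(100):
        u_prev, u_curr = u_curr, wave_step(u_prev, u_curr, c=1.0, dx=x[1] - x[0], dt=0.005)
    print(float(u_curr.max()))
    ```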

  14. Modeling Early-Stage Processes of U-10 Wt.%Mo Alloy Using Integrated Computational Materials Engineering Concepts

    NASA Astrophysics Data System (ADS)

    Wang, Xiaowo; Xu, Zhijie; Soulami, Ayoub; Hu, Xiaohua; Lavender, Curt; Joshi, Vineet

    2017-12-01

    Low-enriched uranium alloyed with 10 wt.% molybdenum (U-10Mo) has been identified as a promising alternative to high-enriched uranium. Manufacturing U-10Mo alloy involves multiple complex thermomechanical processes that pose challenges for computational modeling. This paper describes the application of integrated computational materials engineering (ICME) concepts to integrate three individual modeling components, viz. homogenization, microstructure-based finite element method for hot rolling, and carbide particle distribution, to simulate the early-stage processes of U-10Mo alloy manufacture. The resulting integrated model enables information to be passed between different model components and leads to improved understanding of the evolution of the microstructure. This ICME approach is then used to predict the variation in the thickness of the Zircaloy-2 barrier as a function of the degree of homogenization and to analyze the carbide distribution, which can affect the recrystallization, hardness, and fracture properties of U-10Mo in subsequent processes.

  15. Corra: Computational framework and tools for LC-MS discovery and targeted mass spectrometry-based proteomics

    PubMed Central

    Brusniak, Mi-Youn; Bodenmiller, Bernd; Campbell, David; Cooke, Kelly; Eddes, James; Garbutt, Andrew; Lau, Hollis; Letarte, Simon; Mueller, Lukas N; Sharma, Vagisha; Vitek, Olga; Zhang, Ning; Aebersold, Ruedi; Watts, Julian D

    2008-01-01

    Background Quantitative proteomics holds great promise for identifying proteins that are differentially abundant between populations representing different physiological or disease states. A range of computational tools is now available for both isotopically labeled and label-free liquid chromatography mass spectrometry (LC-MS) based quantitative proteomics. However, they are generally not comparable to each other in terms of functionality, user interfaces, information input/output, and do not readily facilitate appropriate statistical data analysis. These limitations, along with the array of choices, present a daunting prospect for biologists, and other researchers not trained in bioinformatics, who wish to use LC-MS-based quantitative proteomics. Results We have developed Corra, a computational framework and tools for discovery-based LC-MS proteomics. Corra extends and adapts existing algorithms used for LC-MS-based proteomics, and statistical algorithms, originally developed for microarray data analyses, appropriate for LC-MS data analysis. Corra also adapts software engineering technologies (e.g. Google Web Toolkit, distributed processing) so that computationally intense data processing and statistical analyses can run on a remote server, while the user controls and manages the process from their own computer via a simple web interface. Corra also allows the user to output significantly differentially abundant LC-MS-detected peptide features in a form compatible with subsequent sequence identification via tandem mass spectrometry (MS/MS). We present two case studies to illustrate the application of Corra to commonly performed LC-MS-based biological workflows: a pilot biomarker discovery study of glycoproteins isolated from human plasma samples relevant to type 2 diabetes, and a study in yeast to identify in vivo targets of the protein kinase Ark1 via phosphopeptide profiling. Conclusion The Corra computational framework leverages computational innovation to enable biologists or other researchers to process, analyze and visualize LC-MS data with what would otherwise be a complex and not user-friendly suite of tools. Corra enables appropriate statistical analyses, with controlled false-discovery rates, ultimately to inform subsequent targeted identification of differentially abundant peptides by MS/MS. For the user not trained in bioinformatics, Corra represents a complete, customizable, free and open source computational platform enabling LC-MS-based proteomic workflows, and as such, addresses an unmet need in the LC-MS proteomics field. PMID:19087345

  16. Deflagration to Detonation Transition Processes in Pulsed Detonation Engines

    DTIC Science & Technology

    2002-08-03

    which subsequently leads to DDT. The modelling approach taken here is as outlined by Arntzen et al. [9] and features a fractal based eddy-breakup...Arntzen, B.J., Hjertager, B., Lindstedt, R.P., Mercx, W.P.M. and Popat, N. "Investigations to Improve and Assess the Accuracy of Computational Fluid

  17. Why Today's Computers Don't Learn the Way People Do.

    ERIC Educational Resources Information Center

    Clancey, W. J.

    A major error in cognitive science has been to suppose that the meaning of a representation in the mind is known prior to its production. Representations are inherently perceptual--constructed by a perceptual process and given meaning by subsequent perception of them. The person perceiving the representation determines what it means. This premise…

  18. Neural Bases of Sequence Processing in Action and Language

    ERIC Educational Resources Information Center

    Carota, Francesca; Sirigu, Angela

    2008-01-01

    Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for…

  19. System and method of designing a load bearing layer of an inflatable vessel

    NASA Technical Reports Server (NTRS)

    Spexarth, Gary R. (Inventor)

    2007-01-01

    A computer-implemented method is provided for designing a restraint layer of an inflatable vessel. The restraint layer is inflatable from an initial uninflated configuration to an inflated configuration and is constructed from a plurality of interfacing longitudinal straps and hoop straps. The method involves providing computer processing means (e.g., to receive user inputs, perform calculations, and output results) and utilizing this computer processing means to implement a plurality of subsequent design steps. The computer processing means is utilized to input the load requirements of the inflated restraint layer and to specify an inflated configuration of the restraint layer. This includes specifying a desired design gap between pairs of adjacent longitudinal or hoop straps, whereby the adjacent straps interface with a plurality of transversely extending hoop or longitudinal straps at a plurality of intersections. Furthermore, an initial uninflated configuration of the restraint layer that is inflatable to achieve the specified inflated configuration is determined. This includes calculating a manufacturing gap between pairs of adjacent longitudinal or hoop straps that correspond to the specified desired gap in the inflated configuration of the restraint layer.

  20. Phonological universals constrain the processing of nonspeech stimuli.

    PubMed

    Berent, Iris; Balaban, Evan; Lennertz, Tracy; Vaknin-Nusbaum, Vered

    2010-08-01

    Domain-specific systems are hypothetically specialized with respect to the outputs they compute and the inputs they allow (Fodor, 1983). Here, we examine whether these 2 conditions for specialization are dissociable. An initial experiment suggests that English speakers could extend a putatively universal phonological restriction to inputs identified as nonspeech. A subsequent comparison of English and Russian participants indicates that the processing of nonspeech inputs is modulated by linguistic experience. Striking, qualitative differences between English and Russian participants suggest that they rely on linguistic principles, both universal and language-particular, rather than generic auditory processing strategies. Thus, the computation of idiosyncratic linguistic outputs is apparently not restricted to speech inputs. This conclusion presents various challenges to both domain-specific and domain-general accounts of cognition. 2010 APA, all rights reserved

  1. Predicting RNA-protein binding sites and motifs through combining local and global deep convolutional neural networks.

    PubMed

    Pan, Xiaoyong; Shen, Hong-Bin

    2018-05-02

    RNA-binding proteins (RBPs) account for 5-10% of the eukaryotic proteome and play key roles in many biological processes, e.g. gene regulation. Experimental detection of RBP binding sites is still time-intensive and costly. Instead, computational prediction of RBP binding sites using patterns learned from existing annotation knowledge is a fast approach. From the biological point of view, the local structure context derived from local sequences will be recognized by specific RBPs. However, in computational modeling using deep learning, to our best knowledge, only global representations of entire RNA sequences are employed. So far, the local sequence information is ignored in the deep model construction process. In this study, we present a computational method iDeepE to predict RNA-protein binding sites from RNA sequences by combining global and local convolutional neural networks (CNNs). For the global CNN, we pad the RNA sequences into the same length. For the local CNN, we split an RNA sequence into multiple overlapping fixed-length subsequences, where each subsequence is a signal channel of the whole sequence. Next, we train deep CNNs for multiple subsequences and the padded sequences to learn high-level features, respectively. Finally, the outputs from local and global CNNs are combined to improve the prediction. iDeepE demonstrates better performance over state-of-the-art methods on two large-scale datasets derived from CLIP-seq. We also find that the local CNN runs 1.8 times faster than the global CNN with comparable performance when using GPUs. Our results show that iDeepE has captured experimentally verified binding motifs. https://github.com/xypan1232/iDeepE. xypan172436@gmail.com or hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.
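
    The key preprocessing step for the local CNNs is splitting each RNA sequence into overlapping fixed-length subsequences, each treated as a channel. A minimal sketch of that windowing; the window and stride values are illustrative rather than those used by iDeepE.

    ```python
    def split_overlapping(sequence, window=101, stride=20):
        """Split an RNA sequence into overlapping fixed-length subsequences,
        padding the final window so every channel has the same length."""
        subsequences = []
        for start in range(0, max(len(sequence) - window, 0) + stride, stride):
            chunk = sequence[start:start + window]
            subsequences.append(chunk.ljust(window, "N"))  # 'N' as the padding symbol
            if start + window >= len(sequence):
                break
        return subsequences

    windows = split_overlapping("ACGU" * 60, window=101, stride=20)
    print(len(windows), len(windows[0]))  # 8 channels, each 101 symbols long
    ```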

  2. We get the algorithms of our ground truths: Designing referential databases in digital image processing

    PubMed Central

    Jaton, Florian

    2017-01-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802

  3. Attitude ground support system for the solar maximum mission spacecraft

    NASA Technical Reports Server (NTRS)

    Nair, G.

    1980-01-01

    The SMM attitude ground support system (AGSS) supports the acquisition of spacecraft roll attitude reference, performs the in-flight calibration of the attitude sensor complement, supports onboard control autonomy via onboard computer data base updates, and monitors onboard computer (OBC) performance. Initial roll attitude acquisition is accomplished by obtaining a coarse 3 axis attitude estimate from magnetometer and Sun sensor data and subsequently refining it by processing data from the fixed head star trackers. In-flight calibration of the attitude sensor complement is achieved by processing data from a series of slew maneuvers designed to maximize the observability and accuracy of the appropriate alignments and biases. To ensure autonomy of spacecraft operation, the AGSS selects guide stars and computes sensor occultation information for uplink to the OBC. The onboard attitude control performance is monitored on the ground through periodic attitude determination and processing of OBC data in downlink telemetry. In general, the control performance has met mission requirements. However, software and hardware problems have resulted in sporadic attitude reference losses.

  4. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  5. Acoustic Detection Of Loose Particles In Pressure Sensors

    NASA Technical Reports Server (NTRS)

    Kwok, Lloyd C.

    1995-01-01

    A particle-impact-noise-detector (PIND) apparatus is used in conjunction with a computer program that analyzes the output of the apparatus to detect extraneous particles trapped in pressure sensors. The PIND tester is essentially a shaker equipped with a microphone that measures noise in the pressure sensor or other object being shaken. The shaker applies controlled vibration. The output of the microphone is recorded and expressed in terms of voltage, yielding a noise history that is subsequently processed by the computer program. Data are taken at a sampling rate sufficiently high to enable identification of all impacts of particles on the sensor diaphragm and on the inner surfaces of the sensor cavities.
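
    The post-processing step, identifying individual particle impacts in the recorded microphone voltage, can be approximated by a threshold-crossing pass with a short hold-off. The threshold and hold-off below are illustrative values, not parameters of the NASA program.

    ```python
    def detect_impacts(samples, threshold, hold_off):
        """Return the sample indices where |voltage| exceeds the threshold,
        ignoring re-crossings within `hold_off` samples of the last detection."""
        impacts, last = [], -hold_off
        for i, v in enumerate(samples):
            if abs(v) > threshold and i - last >= hold_off:
                impacts.append(i)
                last = i
        return impacts

    noise_history = [0.01, 0.02, 0.9, 0.4, 0.02, 0.01, 1.1, 0.05, 0.02]
    print(detect_impacts(noise_history, threshold=0.5, hold_off=3))  # [2, 6]
    ```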

  6. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of a large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA achieved computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
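
    The computational gain comes from decomposing the population into postal code subregions that can be simulated largely independently and in parallel. A schematic sketch of that decomposition with Python's multiprocessing; the region names and the per-agent rule are placeholders, not the authors' hepatitis C model.

    ```python
    from multiprocessing import Pool

    def simulate_region(region):
        """Placeholder for running the agent-based model on one postal code subregion."""
        name, agents = region
        infected = sum(1 for a in agents if a % 7 == 0)  # stand-in 'epidemic' rule
        return name, infected

    def simulate_in_parallel(regions, workers=4):
        with Pool(workers) as pool:
            return dict(pool.map(simulate_region, regions))

    if __name__ == "__main__":
        regions = [(f"S7K-{i}", list(range(i * 100, i * 100 + 100))) for i in range(8)]
        print(simulate_in_parallel(regions))
    ```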

  7. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures that are based on the dynamical evolution of quantum states also require the processing of classical information obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved in a very fast manner by employing electro-optical modulators, which change the polarization of a traversing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single photon level and we present their application in the context of photonic quantum computing. This includes the working principles and the characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  8. Two Anatomically and Computationally Distinct Learning Signals Predict Changes to Stimulus-Outcome Associations in Hippocampus.

    PubMed

    Boorman, Erie D; Rajendran, Vani G; O'Reilly, Jill X; Behrens, Tim E

    2016-03-16

    Complex cognitive processes require sophisticated local processing but also interactions between distant brain regions. It is therefore critical to be able to study distant interactions between local computations and the neural representations they act on. Here we report two anatomically and computationally distinct learning signals in lateral orbitofrontal cortex (lOFC) and the dopaminergic ventral midbrain (VM) that predict trial-by-trial changes to a basic internal model in hippocampus. To measure local computations during learning and their interaction with neural representations, we coupled computational fMRI with trial-by-trial fMRI suppression. We find that suppression in a medial temporal lobe network changes trial-by-trial in proportion to stimulus-outcome associations. During interleaved choice trials, we identify learning signals that relate to outcome type in lOFC and to reward value in VM. These intervening choice feedback signals predicted the subsequent change to hippocampal suppression, suggesting a convergence of signals that update the flexible representation of stimulus-outcome associations. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Parallel Computations in Insect and Mammalian Visual Motion Processing

    PubMed Central

    Clark, Damon A.; Demb, Jonathan B.

    2016-01-01

    Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world. PMID:27780048

  10. Parallel Computations in Insect and Mammalian Visual Motion Processing.

    PubMed

    Clark, Damon A; Demb, Jonathan B

    2016-10-24

    Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Development and implementation of a low cost micro computer system for LANDSAT analysis and geographic data base applications

    NASA Technical Reports Server (NTRS)

    Faust, N.; Jordon, L.

    1981-01-01

    Since the implementation of the GRID and IMGRID computer programs for multivariate spatial analysis in the early 1970's, geographic data analysis subsequently moved from large computers to minicomputers and now to microcomputers with radical reduction in the costs associated with planning analyses. Programs designed to process LANDSAT data to be used as one element in a geographic data base were used once NIMGRID (new IMGRID), a raster oriented geographic information system, was implemented on the microcomputer. Programs for training field selection, supervised and unsupervised classification, and image enhancement were added. Enhancements to the color graphics capabilities of the microsystem allow display of three channels of LANDSAT data in color infrared format. The basic microcomputer hardware needed to perform NIMGRID and most LANDSAT analyses is listed as well as the software available for LANDSAT processing.

  12. A combined approach of self-referencing and Principle Component Thermography for transient, steady, and selective heating scenarios

    NASA Astrophysics Data System (ADS)

    Omar, M. A.; Parvataneni, R.; Zhou, Y.

    2010-09-01

    The proposed manuscript describes the implementation of a two-step processing procedure composed of self-referencing and Principle Component Thermography (PCT). The combined approach enables the processing of thermograms from transient (flash), steady (halogen) and selective (induction) thermal perturbations. First, the research discusses the three basic processing schemes typically applied in thermography: mathematical transformation based processing, curve-fitting processing, and direct contrast based calculations. The proposed algorithm utilizes the self-referencing scheme to create a sub-sequence that contains the maximum contrast information and also to compute the anomalies' depth values. The Principle Component Thermography then operates on the sub-sequence frames by re-arranging their data content (pixel values) spatially and temporally and highlighting the data variance. The PCT is mainly used as a mathematical means to enhance the defects' contrast, thus enabling retrieval of their shape and size. The results show that the proposed combined scheme is effective in processing multiple-size defects in a sandwich steel structure in real time (<30 Hz) and with full spatial coverage, without the need for an a priori defect-free area.
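
    The PCT stage itself is a standard computation: flatten each thermogram of the sub-sequence into a column, remove the per-pixel mean, and take an SVD so that the leading components capture the contrast variance. The sketch below shows that generic step on synthetic data; it is not the authors' full two-step self-referencing pipeline.

    ```python
    import numpy as np

    def principal_component_thermography(frames, n_components=3):
        """frames: array of shape (n_frames, height, width) from the sub-sequence.
        Returns the leading empirical orthogonal functions reshaped to image size."""
        n, h, w = frames.shape
        data = frames.reshape(n, h * w).T               # pixels as rows, time as columns
        data = data - data.mean(axis=1, keepdims=True)  # remove the per-pixel mean
        u, s, vt = np.linalg.svd(data, full_matrices=False)
        return u[:, :n_components].T.reshape(n_components, h, w)

    rng = np.random.default_rng(0)
    sub_sequence = rng.normal(size=(30, 64, 64)).cumsum(axis=0)  # synthetic thermograms
    eofs = principal_component_thermography(sub_sequence)
    print(eofs.shape)  # (3, 64, 64)
    ```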

  13. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

    Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for the infrared dim target track-before-detect application is proposed. It uses nonlinear theory to construct the state and observation models and uses the spectral separation scheme of the Wiener chaos expansion method to solve the stochastic differential equation of the constructed models. In order to improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are carried out in advance of the observation stage; the remaining, observation-dependent computations are fast and are performed subsequently. Simulation results show that the algorithm possesses excellent detection performance and is well suited to real-time processing.

  14. Toward the Computational Representation of Individual Cultural, Cognitive, and Physiological State: The Sensor Shooter Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RAYBOURN,ELAINE M.; FORSYTHE,JAMES C.

    2001-08-01

    This report documents an exploratory FY 00 LDRD project that sought to demonstrate the first steps toward a realistic computational representation of the variability encountered in individual human behavior. Realism, as conceptualized in this project, required that the human representation address the underlying psychological, cultural, physiological, and environmental stressors. The present report outlines the researchers' approach to representing cognitive, cultural, and physiological variability of an individual in an ambiguous situation while faced with a high-consequence decision that would greatly impact subsequent events. The present project was framed around a sensor-shooter scenario as a soldier interacts with an unexpected target (two young Iraqi girls). A software model of the "Sensor Shooter" scenario from Desert Storm was developed in which the framework consisted of a computational instantiation of Recognition Primed Decision Making in the context of a Naturalistic Decision Making model [1]. Recognition Primed Decision Making was augmented with an underlying foundation based on our current understanding of human neurophysiology and its relationship to human cognitive processes. While the Gulf War scenario that constitutes the framework for the Sensor Shooter prototype is highly specific, the human decision architecture and the subsequent simulation are applicable to other problems similar in concept, intensity, and degree of uncertainty. The goal was to provide initial steps toward a computational representation of human variability in cultural, cognitive, and physiological state in order to attain a better understanding of the full depth of human decision-making processes in the context of ambiguity, novelty, and heightened arousal.

  15. Theoretical Modeling of Molecular and Electron Kinetic Processes. Volume I. Theoretical Formulation of Analysis and Description of Computer Program.

    DTIC Science & Technology

    1979-01-01

    synthesis proceeds by ignoring unacceptable syntax or other errors, protection against subsequent execution of a faulty reaction scheme can be...resulting TAPE9. During subroutine synthesis and reaction processing, a search is made (for each secondary electron collision encountered) to...program library, which can be catalogued and saved if any future specialized modifications (beyond the scope of the synthesis capability of LASER

  16. Modeling the filament winding process

    NASA Technical Reports Server (NTRS)

    Calius, E. P.; Springer, G. S.

    1985-01-01

    A model is presented which can be used to determine the appropriate values of the process variables for filament winding a cylinder. The model provides the cylinder temperature, viscosity, degree of cure, fiber position and fiber tension as functions of position and time during the filament winding and subsequent cure, and the residual stresses and strains within the cylinder during and after the cure. A computer code was developed to obtain quantitative results. Sample results are given which illustrate the information that can be generated with this code.
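    To make the cure step concrete, the following is a minimal sketch of an nth-order cure kinetics law of the kind often used in such process models; the rate constants A and E, the exponent n, and the constant 450 K cure cycle are illustrative assumptions, not the parameters of the cited model.

```python
# Minimal sketch of an nth-order cure kinetics model of the kind often used
# in filament-winding/cure simulations. The rate constants A, E and the
# exponent n below are placeholders, not the values used by the cited model.
import math

A = 1.0e5      # pre-exponential factor, 1/s (assumed)
E = 60.0e3     # activation energy, J/mol (assumed)
R = 8.314      # gas constant, J/(mol K)
n = 1.5        # reaction order (assumed)

def cure_history(temperature_of_t, t_end, dt=1.0):
    """Integrate d(alpha)/dt = A*exp(-E/(R*T))*(1-alpha)**n with explicit Euler."""
    alpha, t, history = 0.0, 0.0, []
    while t < t_end:
        T = temperature_of_t(t)                      # cure temperature in kelvin
        rate = A * math.exp(-E / (R * T)) * (1.0 - alpha) ** n
        alpha = min(1.0, alpha + rate * dt)
        history.append((t, alpha))
        t += dt
    return history

# Example: a constant 450 K cure cycle held for one hour
profile = cure_history(lambda t: 450.0, t_end=3600.0)
print(f"final degree of cure: {profile[-1][1]:.3f}")
```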

  17. ICME — A Mere Coupling of Models or a Discipline of Its Own?

    NASA Astrophysics Data System (ADS)

    Bambach, Markus; Schmitz, Georg J.; Prahl, Ulrich

    Technically, ICME — Integrated computational materials engineering — is an approach for solving advanced engineering problems related to the design of new materials and processes by combining individual materials and process models. To date, the combination of models is mainly achieved by manual transformation of the output of one simulation to form the input of a subsequent one, which is either performed at a different length scale or constitutes a subsequent step along the process chain. Is ICME thus just a synonym for the coupling of simulations? In fact, most ICME publications up to now are examples of the joint application of selected models and software codes to a specific problem. However, from a systems point of view, the coupling of individual models and/or software codes across length scales and along material processing chains leads to highly complex meta-models. Their viability has to be ensured by joint efforts from science, industry, software developers and independent organizations. This paper identifies some developments that seem necessary to make future ICME simulations viable, sustainable and broadly accessible and accepted. The main conclusion is that ICME is not merely a multi-disciplinary subject but a discipline of its own, for which a generic structural framework has to be elaborated and established.

  18. Comparing Learners' State Anxiety during Task-Based Interaction in Computer-Mediated and Face-to-Face Communication

    ERIC Educational Resources Information Center

    Baralt, Melissa; Gurzynski-Weiss, Laura

    2011-01-01

    The construct of anxiety is often believed to be the affective factor with the greatest potential to pervasively affect the learning process (Horwitz, 2001), and recent research has demonstrated that anxiety can mediate whether learners are able to notice feedback and subsequently produce output (Sheen, 2008). In order to reduce the negative…

  19. A tristate optical logic system

    NASA Astrophysics Data System (ADS)

    Basuray, A.; Mukhopadhyay, S.; Kumar Ghosh, Hirak; Datta, A. K.

    1991-09-01

    A method is described for representing data in a tristate logic system, with the data subsequently encoded as Modified Trinary Numbers (MTN). This system is advantageous in parallel processing, as carry- and borrow-free operations in arithmetic computation are possible. The logical operations are also modified according to the three states available. A possible practical application of the same using polarized light is also suggested.

  20. The Use of Computer-Mediated Communication To Enhance Subsequent Face-to-Face Discussions.

    ERIC Educational Resources Information Center

    Dietz-Uhler, Beth; Bishop-Clark, Cathy

    2001-01-01

    Describes a study of undergraduate students that assessed the effects of synchronous (Internet chat) and asynchronous (Internet discussion board) computer-mediated communication on subsequent face-to-face discussions. Results showed that face-to-face discussions preceded by computer-mediated communication were perceived to be more enjoyable.…

  1. A virtual surgical training system that simulates cutting of soft tissue using a modified pre-computed elastic model.

    PubMed

    Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen

    2015-08-01

    This work presents a surgical training system that incorporates a cutting operation on soft tissue, simulated with a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A pre-computed linear elastic model used for the simulation of soft tissue deformation involves computing the compliance matrix a priori based on the topological information of the mesh. While this process may require a few minutes to several hours, depending on the number of vertices in the mesh, it needs only to be computed once and allows real-time computation of the subsequent soft tissue deformation. However, as the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix, which is computationally expensive.
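    As an illustration of the precompute/update split described above, here is a minimal sketch in which a compliance matrix is built once by inverting a small stiffness matrix and each subsequent deformation is a single matrix-vector product; the 3-DOF spring chain is a toy stand-in, and the paper's connectivity-correction step for cutting is not reproduced.

```python
# Minimal sketch of the pre-computed compliance idea: the compliance matrix C
# (inverse of the stiffness matrix K) is built once from the mesh, after which
# each deformation update is a single matrix-vector product. The small K used
# here is illustrative only.
import numpy as np

def build_compliance(K):
    """Expensive one-off step: invert the stiffness matrix."""
    return np.linalg.inv(K)

def deform(C, forces):
    """Cheap per-frame step: displacements = compliance @ nodal forces."""
    return C @ forces

# Toy 3-DOF "mesh": a chain of unit springs.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
C = build_compliance(K)                    # done once, before simulation
u = deform(C, np.array([0.0, 0.0, 1.0]))   # real-time update for a tip load
print(u)
```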

  2. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed; the computer then resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
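    A minimal sketch of a weighted least-squares adjustment of this general form is shown below: observation equations with a diagonal weight matrix are reduced to normal equations and solved for the station shifts. The matrices and weights are illustrative, not survey data or the program's actual formulation.

```python
# Minimal sketch of a weighted least-squares adjustment: observation equations
# A x = b with a diagonal weight matrix W are reduced to normal equations
# (A^T W A) x = A^T W b and solved for the coordinate shifts.
import numpy as np

A = np.array([[1.0, 0.0],     # each row: one observation equation
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.01, 2.99, 5.03])        # observed-minus-computed values
W = np.diag([1.0, 1.0, 4.0])            # weights (e.g. inverse variances)

N = A.T @ W @ A                         # normal equations
t = A.T @ W @ b
shifts = np.linalg.solve(N, t)          # shifts applied at adjustable stations
residuals = b - A @ shifts
print(shifts, residuals)
```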

  3. Association between background parenchymal enhancement of breast MRI and BIRADS rating change in the subsequent screening

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Stoug, Rebecca G.; Pearce, Melanie; Liu, Hong; Zheng, Bin

    2018-03-01

    Although breast magnetic resonance imaging (MRI) has been used as a breast cancer screening modality for high-risk women, its cancer detection yield remains low (i.e., <= 3%). Thus, increasing breast MRI screening efficacy and cancer detection yield is an important clinical issue in breast cancer screening. In this study, we investigated the association between the background parenchymal enhancement (BPE) of breast MRI and the change of diagnostic (BIRADS) status in the subsequent breast MRI screening. A dataset with 65 breast MRI screening cases was retrospectively assembled. All cases were rated BIRADS-2 (benign findings). In the subsequent screening, 4 cases were malignant (BIRADS-6), 48 remained BIRADS-2 and 13 were downgraded to negative (BIRADS-1). A computer-aided detection scheme was applied to process images of the first set of breast MRI screenings. A total of 33 features was computed, including texture features and global BPE features. Texture features were computed from either a gray-level co-occurrence matrix or a gray-level run length matrix. Ten global BPE features were also initially computed from two breast regions and the bilateral difference between the left and right breasts. Box-plot based analysis shows a positive association between texture features and BIRADS rating levels in the second screening. Furthermore, a logistic regression model was built using optimal features selected by a CFS based feature selection method. Using a leave-one-case-out cross-validation method, classification yielded an overall 75% accuracy in predicting the improvement (or downgrade) of diagnostic status (to BIRADS-1) in the subsequent breast MRI screening. This study demonstrated the potential of developing a new quantitative imaging marker to predict diagnostic status change in the short term, which may help eliminate a high fraction of unnecessary repeated breast MRI screenings and increase the cancer detection yield.
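    The classification step lends itself to a short sketch: a logistic regression model scored with leave-one-case-out cross-validation, as described above. The random features below merely stand in for the selected texture/BPE features; only the case count (65) is taken from the abstract.

```python
# Minimal sketch of the classification step: a logistic regression model
# evaluated with leave-one-case-out cross-validation. The random features
# stand in for the MRI texture/BPE features; accuracy on real data was
# reported as 75%.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(65, 5))        # 65 cases, 5 selected features (placeholder)
y = rng.integers(0, 2, size=65)     # 1 = diagnostic status changed, 0 = unchanged

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"leave-one-case-out accuracy: {scores.mean():.2f}")
```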

  4. Geomechanical Analysis of Underground Coal Gasification Reactor Cool Down for Subsequent CO2 Storage

    NASA Astrophysics Data System (ADS)

    Sarhosis, Vasilis; Yang, Dongmin; Kempka, Thomas; Sheng, Yong

    2013-04-01

    Underground coal gasification (UCG) is an efficient method for the conversion of conventionally unmineable coal resources into energy and feedstock. If the UCG process is combined with the subsequent storage of process CO2 in the former UCG reactors, a near-zero carbon emission energy source can be realised. This study presents the development of a computational model to simulate the cooling process of UCG reactors after abandonment, to decrease the initial high temperature of more than 400 °C to a level at which extensive CO2 volume expansion due to temperature changes can be significantly reduced during CO2 injection. Furthermore, we predict the cool-down temperature conditions with and without water flushing. A state-of-the-art coupled thermal-mechanical model was developed using the finite element software ABAQUS to predict the cavity growth and the resulting surface subsidence. In addition, the multi-physics computational software COMSOL was employed to simulate the cavity cool-down process, which is of utmost relevance for CO2 storage in the former UCG reactors. For that purpose, we simulated fluid flow, thermal conduction and thermal convection processes between the fluid (water and CO2) and the solid represented by coal and surrounding rocks. Material properties for rocks and coal were obtained from the literature and from geomechanical tests carried out on samples derived from a prospective demonstration site in Bulgaria. The analysis of results showed that the numerical models developed allowed for the determination of the UCG reactor growth, roof spalling, surface subsidence and heat propagation during the UCG process and the subsequent CO2 storage. It is anticipated that the results of this study can support optimisation of the preparation procedure for CO2 storage in former UCG reactors. The proposed scheme has been discussed previously but not validated by a coupled numerical analysis; if proven applicable, it could provide a significant optimisation of the UCG process by improving CO2 storage efficiency. The proposed coupled UCG-CCS scheme helps meet EU targets for greenhouse gas emissions and increases the yield from coal that would otherwise be impossible to exploit.

  5. Emphasizing the only character: emphasis, attention and contrast.

    PubMed

    Chen, Lijing; Yang, Yufang

    2015-03-01

    In conversations, pragmatic information such as emphasis is important for identifying the speaker's/writer's intention. The present research examines the cognitive processes involved in emphasis processing. Participants read short discourses that introduced one or two character(s), with the character being emphasized or non-emphasized in subsequent texts. Eye movements showed that: (1) early processing of the emphasized word was facilitated, which may have been due to increased attention allocation, whereas (2) late integration of the emphasized character was inhibited when the discourse involved only this character. These results indicate that it is necessary to include other characters as contrastive characters to facilitate the integration of an emphasized character, and support the existence of a relationship between Emphasis and Contrast computation. Taken together, our findings indicate that both attention allocation and contrast computation are involved in emphasis processing, and support the incremental nature of sentence processing and the importance of contrast in discourse comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Data on the impact of increasing the W amount on the mass density and compressive properties of Ni-W alloys processed by spark plasma sintering.

    PubMed

    Sadat, T; Hocini, A; Lilensten, L; Faurie, D; Tingaud, D; Dirras, G

    2016-06-01

    Bulk Ni-W alloys having composite-like microstructures are processed by a spark plasma sintering (SPS) route from Ni and W powder blends, as reported in a recent study of Sadat et al. (2016) (DOI of original article: doi:10.1016/j.matdes.2015.10.083) [1]. The present dataset deals with the determination of mass density and the evaluation of room temperature compressive mechanical properties as a function of the amount of W (wt.% basis). The presented data concern: (i) measurement of the mass of each investigated Ni-W alloy, which is subsequently used to compute the mass density of the alloy, and (ii) the raw stress (MPa) and strain (ΔL/L0) data, which can subsequently be used for stress/strain plots.

  7. Data on the impact of increasing the W amount on the mass density and compressive properties of Ni–W alloys processed by spark plasma sintering

    PubMed Central

    Sadat, T.; Hocini, A.; Lilensten, L.; Faurie, D.; Tingaud, D.; Dirras, G.

    2016-01-01

    Bulk Ni–W alloys having composite-like microstructures are processed by a spark plasma sintering (SPS) route from Ni and W powder blends, as reported in a recent study of Sadat et al. (2016) (DOI of original article: doi:10.1016/j.matdes.2015.10.083) [1]. The present dataset deals with the determination of mass density and the evaluation of room temperature compressive mechanical properties as a function of the amount of W (wt.% basis). The presented data concern: (i) measurement of the mass of each investigated Ni–W alloy, which is subsequently used to compute the mass density of the alloy, and (ii) the raw stress (MPa) and strain (ΔL/L0) data, which can subsequently be used for stress/strain plots. PMID:27158658
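    Both versions of this data article describe the same two reductions, which a short sketch can make explicit: mass density from measured mass and specimen volume, and engineering stress/strain from raw force-displacement readings. All dimensions and readings below are placeholders, not values from the dataset.

```python
# Minimal sketch of how the raw data could be reduced: mass density from
# measured mass and specimen volume, and engineering stress/strain from the
# raw force-displacement record. Dimensions and readings are placeholders.
import numpy as np

mass = 8.95e-3                 # kg, measured specimen mass (assumed)
volume = 1.0e-6                # m^3, specimen volume (assumed)
density = mass / volume        # kg/m^3

L0 = 6.0e-3                    # m, initial gauge length (assumed)
A0 = 2.8e-5                    # m^2, initial cross-section (assumed)
force = np.array([0.0, 500.0, 1000.0, 1500.0])        # N
displacement = np.array([0.0, 1e-5, 2.5e-5, 6e-5])    # m

stress = force / A0 / 1e6      # MPa
strain = displacement / L0     # dimensionless, Delta L / L0
print(density, list(zip(strain, stress)))
```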

  8. Geometry program for aerodynamic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1973-01-01

    A computer program that provides the geometry and boundary conditions appropriate for an analysis of a lifting, thin wing with control surfaces in linearized, subsonic, steady flow is presented. The kernel function method of lifting surface theory is applied. The data generated by the program are stored on disk files or tapes for later use by programs which calculate an influence matrix, plot the wing planform, and evaluate the loads on the wing. In addition to processing data for subsequent use in a lifting surface analysis, the program is useful for computing the area and mean geometric chords of the wing and control surfaces.
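    The area and mean geometric chord computation mentioned above reduces to simple quadrature over a spanwise chord distribution; the sketch below uses an assumed linear taper purely for illustration.

```python
# Minimal sketch of the geometric quantities mentioned above: planform area
# and mean geometric chord from a spanwise chord distribution c(y). The
# linear taper used here is illustrative only.
import numpy as np

y = np.linspace(0.0, 5.0, 101)          # semi-span stations, m
c = 2.0 - 0.2 * y                       # chord distribution, m (assumed taper)

semi_area = np.trapz(c, y)              # area of one wing half
area = 2.0 * semi_area                  # full planform area
span = 2.0 * y[-1]
mean_geometric_chord = area / span      # S / b

print(area, mean_geometric_chord)
```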

  9. Computational neural learning formalisms for manipulator inverse kinematics

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Barhen, Jacob; Iyengar, S. Sitharama

    1989-01-01

    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors - a new class of mathematical constructs which provide unique information processing capabilities to artificial neural systems. For robotic applications, synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, joint-space configurations, required to follow arbitrary end-effector trajectories, can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematics and environmental constraints.

  10. Inlet Development for a Rocket Based Combined Cycle, Single Stage to Orbit Vehicle Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    DeBonis, J. R.; Trefny, C. J.; Steffen, C. J., Jr.

    1999-01-01

    Design and analysis of the inlet for a rocket based combined cycle engine is discussed. Computational fluid dynamics was used in both the design and the subsequent analysis. Reynolds-averaged Navier-Stokes simulations were performed using both perfect gas and real gas assumptions. An inlet design that operates over the required Mach number range from 0 to 12 was produced. Performance data for cycle analysis were post-processed using a stream thrust averaging technique. A detailed performance database for cycle analysis is presented. The effect of vehicle forebody compression on air capture is also examined.

  11. Morphology control in polymer blend fibers—a high throughput computing approach

    NASA Astrophysics Data System (ADS)

    Sesha Sarath Pokuri, Balaji; Ganapathysubramanian, Baskar

    2016-08-01

    Fibers made from polymer blends have conventionally enjoyed wide use, particularly in textiles. This wide applicability is primarily aided by the ease of manufacturing such fibers. More recently, the ability to tailor the internal morphology of polymer blend fibers by carefully designing processing conditions has enabled such fibers to be used in technologically relevant applications. Some examples include anisotropic insulating properties for heat, anisotropic wicking of moisture, coaxial morphologies for optical applications, as well as fibers with high internal surface area for filtration and catalysis applications. However, identifying the appropriate processing conditions from the large space of possibilities using conventional trial-and-error approaches is a tedious and resource-intensive process. Here, we illustrate a high throughput computational approach to rapidly explore and characterize how processing conditions (specifically blend ratio and evaporation rates) affect the internal morphology of polymer blends during solvent based fabrication. We focus on a PS:PMMA system and identify two distinct classes of morphologies formed due to variations in the processing conditions. We subsequently map the processing conditions to the morphology class, thus constructing a 'phase diagram' that enables rapid identification of processing parameters for a specific morphology class. We finally demonstrate the potential for time dependent processing conditions to obtain desired features of the morphology. This opens up the possibility of rational stage-wise design of processing pathways for tailored fiber morphology using high throughput computing.

  12. Photophysical and photochemical insights into the photodegradation of sulfapyridine in water: A joint experimental and theoretical study.

    PubMed

    Zhang, Heming; Wei, Xiaoxuan; Song, Xuedan; Shah, Shaheen; Chen, Jingwen; Liu, Jianhui; Hao, Ce; Chen, Zhongfang

    2018-01-01

    For organic pollutants, photodegradation, as a major abiotic elimination process and of great importance to the environmental fate and risk, involves rather complicated physical and chemical processes of excited molecules. Herein, we systematically studied the photophysical and photochemical processes of a widely used antibiotic, namely sulfapyridine. By means of density functional theory (DFT) computations, we examined the rate constants and the competition of both photophysical and photochemical processes, elucidated the photochemical reaction mechanism, calculated the reaction quantum yield (Φ) based on both photophysical and photochemical processes, and subsequently estimated the photodegradation rate constant. We further conducted photolysis experiments to measure the photodegradation rate constant of sulfapyridine. Our computations showed that sulfapyridine at the lowest excited singlet state (S1) mainly undergoes internal conversion to its ground state, and is difficult to transfer to the lowest excited triplet state (T1) via intersystem crossing (ISC) or to emit fluorescence. In the T1 state, compared with phosphorescence emission and ISC, chemical reaction is much easier to initiate. Encouragingly, the theoretically predicted photodegradation rate constant is close to the experimentally observed value, indicating that quantum chemistry computation is powerful enough to study photodegradation involving ultra-fast photophysical and photochemical processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. 3D finite element modelling of sheet metal blanking process

    NASA Astrophysics Data System (ADS)

    Bohdal, Lukasz; Kukielka, Leon; Chodor, Jaroslaw; Kulakowska, Agnieszka; Patyk, Radoslaw; Kaldunski, Pawel

    2018-05-01

    The shearing process, such as the blanking of sheet metals, has often been used to prepare workpieces for subsequent forming operations. The use of FEM simulation is increasing for investigating and optimizing the blanking process. In the current literature, owing to the limited capability and large computational cost of three-dimensional (3D) analysis, blanking FEM simulations have been largely limited to two-dimensional (2D) plane and axisymmetric problems. However, significant progress in modelling which takes into account the influence of the real material (e.g. microstructure of the material) and the physical and technological conditions can be obtained by using 3D numerical analysis methods in this area. The objective of this paper is to present a 3D finite element analysis of the ductile fracture, strain distribution and stress in the blanking process with the assumption of geometrical and physical nonlinearities. The physical, mathematical and computer models of the process are elaborated. Dynamic effects, mechanical coupling, a constitutive damage law and contact friction are taken into account. An application in the ANSYS/LS-DYNA program is elaborated. The effect of the main process parameter, the blanking clearance, on the deformation of 1018 steel and the quality of the blank's sheared edge is analyzed. The results of the computer simulations can be used to forecast the quality of the final parts and to optimize the process.

  14. Heat exchanger expert system logic

    NASA Technical Reports Server (NTRS)

    Cormier, R.

    1988-01-01

    The reduction of the operation and fault diagnostics of a Deep Space Network heat exchanger to a rule base, by the application of propositional calculus to a set of logic statements, is described. The value of this approach lies in the ease of converting the logic and subsequently implementing it on a computer as an expert system. The rule base was written in Process Intelligent Control software.

  15. Comparison of magnetic resonance imaging and computed tomography in suspected lesions in the posterior cranial fossa.

    PubMed Central

    Teasdale, G. M.; Hadley, D. M.; Lawrence, A.; Bone, I.; Burton, H.; Grant, R.; Condon, B.; Macpherson, P.; Rowan, J.

    1989-01-01

    OBJECTIVE--To compare computed tomography and magnetic resonance imaging in investigating patients suspected of having a lesion in the posterior cranial fossa. DESIGN--Randomised allocation of newly referred patients to undergo either computed tomography or magnetic resonance imaging; the alternative investigation was performed subsequently only in response to a request from the referring doctor. SETTING--A regional neuroscience centre serving 2.7 million. PATIENTS--1020 Patients recruited between April 1986 and December 1987, all suspected by neurologists, neurosurgeons, or other specialists of having a lesion in the posterior fossa and referred for neuroradiology. The groups allocated to undergo computed tomography or magnetic resonance imaging were well matched in distributions of age, sex, specialty of referring doctor, investigation as an inpatient or an outpatient, suspected site of lesion, and presumed disease process; the referring doctor's confidence in the initial clinical diagnosis was also similar. INTERVENTIONS--After the patients had been imaged by either computed tomography or magnetic resonance (using a resistive magnet of 0.15 T) doctors were given the radiologist's report and a form asking if they considered that imaging with the alternative technique was necessary and, if so, why; it also asked for their current diagnoses and their confidence in them. MAIN OUTCOME MEASURES--Number of requests for the alternative method of investigation. Assessment of characteristics of patients for whom further imaging was requested and lesions that were suspected initially and how the results of the second imaging affected clinicians' and radiologists' opinions. RESULTS--Ninety three of the 501 patients who initially underwent computed tomography were referred subsequently for magnetic resonance imaging whereas only 28 of the 493 patients who initially underwent magnetic resonance imaging were referred subsequently for computed tomography. Over the study the number of patients referred for magnetic resonance imaging after computed tomography increased but requests for computed tomography after magnetic resonance imaging decreased. The reason that clinicians gave most commonly for requesting further imaging by magnetic resonance was that the results of the initial computed tomography failed to exclude their suspected diagnosis (64 patients). This was less common in patients investigated initially by magnetic resonance imaging (eight patients). Management of 28 patients (6%) imaged initially with computed tomography and 12 patients (2%) imaged initially with magnetic resonance was changed on the basis of the results of the alternative imaging. CONCLUSIONS--Magnetic resonance imaging provided doctors with the information required to manage patients suspected of having a lesion in the posterior fossa more commonly than computed tomography, but computed tomography alone was satisfactory in 80% of cases... PMID:2506965

  16. Modeling cation/anion-water interactions in functional aluminosilicate structures.

    PubMed

    Richards, A J; Barnes, P; Collins, D R; Christodoulos, F; Clark, S M

    1995-02-01

    A need for the computer simulation of hydration/dehydration processes in functional aluminosilicate structures has been noted. Full and realistic simulations of these systems can be somewhat ambitious and require the aid of interactive computer graphics to identify key structural/chemical units, both in the devising of suitable water-ion simulation potentials and in the analysis of hydrogen-bonding schemes in the subsequent simulation studies. In this article, the former is demonstrated by the assembling of a range of essential water-ion potentials. These span the range of formal charges from +4e to -2e, and are evaluated in the context of three types of structure: a porous zeolite, calcium silicate cement, and layered clay. As an example of the latter, the computer graphics output from Monte Carlo computer simulation studies of hydration/dehydration in calcium-zeolite A is presented.

  17. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source software process code developed on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory and processor enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
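    Two of the band-math products named above, NDVI and NDMI, are simple enough to sketch directly; the random arrays below stand in for calibrated reflectance bands, and a real pipeline would of course read them from the imagery staged on the cloud platform.

```python
# Minimal sketch of two of the band-math products named above. The input
# arrays are placeholders; real pipelines would read the bands from the
# imagery delivered to the cloud platform.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def ndmi(nir, swir):
    """Normalized Difference Moisture Index."""
    return (nir - swir) / (nir + swir + 1e-9)

nir = np.random.rand(256, 256)      # stand-ins for calibrated reflectance bands
red = np.random.rand(256, 256)
swir = np.random.rand(256, 256)

print(ndvi(nir, red).mean(), ndmi(nir, swir).mean())
```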

  18. How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narkawicz, Anthony; Dowek, Gilles

    2010-01-01

    In this paper we describe a process of algorithmic discovery that was driven by our goal of achieving complete, mechanically verified algorithms that compute conflict prevention bands for use in en route air traffic management. The algorithms were originally defined in the PVS specification language and subsequently have been implemented in Java and C++. We do not present the proofs in this paper: instead, we describe the process of discovery and the key ideas that enabled the final formal proof of correctness

  19. New atmospheric sensor analysis study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1989-01-01

    The functional capabilities of the ESAD Research Computing Facility are discussed. The system is used in processing atmospheric measurements which are used in the evaluation of sensor performance, conducting design-concept simulation studies, and also in modeling the physical and dynamical nature of atmospheric processes. The results may then be evaluated to furnish inputs into the final design specifications for new space sensors intended for future Spacelab, Space Station, and free-flying missions. In addition, data gathered from these missions may subsequently be analyzed to provide better understanding of requirements for numerical modeling of atmospheric phenomena.

  20. Application of programmable logic controllers to space simulation

    NASA Technical Reports Server (NTRS)

    Sushon, Janet

    1992-01-01

    Incorporating a state-of-the-art process control and instrumentation system into a complex system for thermal vacuum testing is discussed. The challenge was to connect several independent control systems provided by various vendors to a supervisory computer. This combination will sequentially control and monitor the process, collect the data, and transmit it to a color graphics system for subsequent manipulation. The vacuum system upgrade included: replacement of seventeen diffusion pumps with eight cryogenic pumps and one turbomolecular pump, replacement of a relay based control system, replacement of vacuum instrumentation, and upgrading of the data acquisition system.

  1. Recurrent V1-V2 interaction in early visual boundary processing.

    PubMed

    Neumann, H; Sepp, W

    1999-11-01

    A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.
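    A minimal sketch of the modulatory feedback idea, assuming a single collinear contour template and a simple divisive normalization, is given below; it illustrates the general enhance-what-matches gain-control loop rather than the cited model's full V1-V2 circuit.

```python
# Minimal sketch of context-dependent gain control: a higher stage's
# prediction re-enters the earlier stage as a multiplicative gain so that only
# feedforward activity consistent with the top-down pattern is enhanced,
# followed by divisive normalization. Numbers and the single-template matching
# step are illustrative assumptions.
import numpy as np

def recurrent_step(feedforward, template, lam=2.0, eps=0.01):
    feedback = np.maximum(0.0, np.correlate(feedforward, template, mode="same"))
    modulated = feedforward * (1.0 + lam * feedback)     # enhance consistent input
    return modulated / (eps + modulated.sum())           # divisive normalization

contrast = np.array([0.1, 0.1, 0.8, 0.9, 0.8, 0.1, 0.1])  # local contrast responses
template = np.array([1.0, 1.0, 1.0])                       # collinear contour pattern
activity = contrast.copy()
for _ in range(5):                                         # recurrent feedback loop
    activity = recurrent_step(activity, template)
print(activity.round(3))
```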

  2. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three-dimensions are demonstrated.
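    For readers who want to try the descriptor, the following is a minimal sketch using the histogram-of-oriented-gradients implementation in scikit-image on a synthetic image tile; the cell and block sizes are illustrative choices, not the parameters used in the paper.

```python
# Minimal sketch of the segmentation-free descriptor named above, using the
# histogram-of-oriented-gradients implementation in scikit-image on a
# synthetic micrograph-like image; parameters are illustrative.
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 128)            # stand-in for a 2D micrograph tile
features = hog(image,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               feature_vector=True)

print(features.shape)                       # one descriptor per image tile
```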

  3. From the CMS Computing Experience in the WLCG STEP'09 Challenge to the First Data Taking of the LHC Era

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Gutsche, O.

    The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of the LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous test of the computing systems of all 4 LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 work-flows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we will report on the different tests performed and present their post-mortem analysis.

  4. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, various complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.

  5. Dispensing Processes Impact Apparent Biological Activity as Determined by Computational and Statistical Analyses

    PubMed Central

    Ekins, Sean; Olechno, Joe; Williams, Antony J.

    2013-01-01

    Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research. PMID:23658723

  6. Automatic scanning and measuring using POLLY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, T.

    1993-07-01

    The HPD and PEPR automatic measuring systems, which have been described by B. Powell and I. Pless at this conference, were developed in the 1960's to be used for what would now be called "batch processing." That is, an entire reel of bubble chamber film containing interesting events whose tracks had been rough-digitized would be processed in an extended run by a dedicated computer/precision digitizer hardware system, with no human intervention. Then, at a later time, events for which the precision measurement did not appear to be successful would be handled with some type of "fixup" station or process. By contrast, the POLLY system included from the start not only a computer and a precision CRT measuring device, but also a human operator who could have convenient two-way interactions with the computer and could also view the picture directly. Inclusion of a human as a key part of the system had some important beneficial effects, as has been described in the original papers. In this note the author summarizes those effects, and also points out connections between the POLLY system philosophy and subsequent developments in both high energy physics data analysis and computing systems.

  7. Calculator Use Need Not Undermine Direct-Access Ability: The Roles of Retrieval, Calculation, and Calculator Use in the Acquisition of Arithmetic Facts

    ERIC Educational Resources Information Center

    Pyke, Aryn A.; LeFevre, Jo-Anne

    2011-01-01

    Why is subsequent recall sometimes better for self-generated answers than for answers obtained from an external source (e.g., calculator)? In this study, we explore the relative contribution of 2 processes, recall attempts and self-computation, to this "generation effect" (i.e., enhanced answer recall relative to when problems are practiced with a…

  8. Interactive boundary delineation of agricultural lands using graphics workstations

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1992-01-01

    A review is presented of the computer-assisted stratification and sampling (CASS) system developed to delineate the boundaries of sample units for survey procedures. CASS stratifies the sampling units by land-cover and land-use type, employing image-processing software and hardware. This procedure generates coverage areas and the boundaries of stratified sampling units that are utilized for subsequent sampling procedures from which agricultural statistics are developed.

  9. Transferring data oscilloscope to an IBM using an Apple II+

    NASA Technical Reports Server (NTRS)

    Miller, D. L.; Frenklach, M. Y.; Laughlin, P. J.; Clary, D. W.

    1984-01-01

    A set of PASCAL programs permitting the use of a laboratory microcomputer to facilitate and control the transfer of data from a digital oscilloscope (used with photomultipliers in experiments on soot formation in hydrocarbon combustion) to a mainframe computer and the subsequent mainframe processing of these data is presented. Advantages of this approach include the possibility of on-line computations, transmission flexibility, automatic transfer and selection, increased capacity and analysis options (such as smoothing, averaging, Fourier transformation, and high-quality plotting), and more rapid availability of results. The hardware and software are briefly characterized, the programs are discussed, and printouts of the listings are provided.

  10. Planetary radar studies

    NASA Technical Reports Server (NTRS)

    Thompson, T. W.; Cutts, J. A.

    1981-01-01

    A catalog of lunar radar anomalies was generated to provide a base for comparison with Venusian radar signatures. The relationships between lunar radar anomalies and regolith processes were investigated, and a consortium was formed to compare lunar and Venusian radar images of craters. Time was scheduled at the Arecibo Observatory to use the 430 MHz radar to obtain high resolution radar maps of six areas of the lunar surface. Data from 1978 observations of Mare Serenitatis and Plato are being analyzed on a PDP 11/70 computer to construct the computer program library necessary for the eventual reduction of the May 1981 and subsequent data acquisitions. Papers accepted for publication are presented.

  11. Analysis of pressure-flow data in terms of computer-derived urethral resistance parameters.

    PubMed

    van Mastrigt, R; Kranse, M

    1995-01-01

    The simultaneous measurement of detrusor pressure and flow rate during voiding is at present the only way to measure or grade infravesical obstruction objectively. Numerous methods have been introduced to analyze the resulting data. These methods differ in aim (measurement of urethral resistance and/or diagnosis of obstruction), method (manual versus computerized data processing), theory or model used, and resolution (continuously variable parameters or a limited number of classes, the so-called monogram). In this paper, some aspects of these fundamental differences are discussed and illustrated. Subsequently, the properties and clinical performance of two computer-based methods for deriving continuous urethral resistance parameters are treated.
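    One widely used continuous pressure-flow parameter, the bladder outlet obstruction index BOOI = PdetQmax - 2*Qmax with the conventional 40/20 cut-offs, can be sketched in a few lines; it is given here only as an example of a computer-derived resistance measure and is not necessarily one of the parameters evaluated in this paper.

```python
# Minimal sketch of one widely used pressure-flow index, the bladder outlet
# obstruction index BOOI = PdetQmax - 2*Qmax (not necessarily one of the
# parameters evaluated in the cited paper). Cut-offs of 40 and 20 are the
# conventional obstructed/unobstructed boundaries.
def booi(pdet_qmax_cmH2O, qmax_ml_s):
    return pdet_qmax_cmH2O - 2.0 * qmax_ml_s

def grade(index):
    if index > 40:
        return "obstructed"
    if index < 20:
        return "unobstructed"
    return "equivocal"

value = booi(pdet_qmax_cmH2O=75.0, qmax_ml_s=8.0)   # example voiding study
print(value, grade(value))
```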

  12. Remembered or Forgotten?—An EEG-Based Computational Prediction Approach

    PubMed Central

    Sun, Xuyun; Qian, Cunle; Chen, Zhongqin; Wu, Zhaohui; Luo, Benyan; Pan, Gang

    2016-01-01

    Prediction of memory performance (remembered or forgotten) has various potential applications not only for knowledge learning but also for disease diagnosis. Recently, subsequent memory effects (SMEs)—the statistical differences in electroencephalography (EEG) signals before or during learning between subsequently remembered and forgotten events—have been found. This finding indicates that EEG signals convey the information relevant to memory performance. In this paper, based on SMEs we propose a computational approach to predict memory performance of an event from EEG signals. We devise a convolutional neural network for EEG, called ConvEEGNN, to predict subsequently remembered and forgotten events from EEG recorded during memory process. With the ConvEEGNN, prediction of memory performance can be achieved by integrating two main stages: feature extraction and classification. To verify the proposed approach, we employ an auditory memory task to collect EEG signals from scalp electrodes. For ConvEEGNN, the average prediction accuracy was 72.07% by using EEG data from pre-stimulus and during-stimulus periods, outperforming other approaches. It was observed that signals from pre-stimulus period and those from during-stimulus period had comparable contributions to memory performance. Furthermore, the connection weights of ConvEEGNN network can reveal prominent channels, which are consistent with the distribution of SME studied previously. PMID:27973531
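    A minimal sketch of a ConvEEGNN-style classifier is given below: 1D convolutions over multi-channel EEG epochs followed by a linear read-out that scores "remembered" versus "forgotten". The channel count, epoch length and layer sizes are assumptions for illustration, not the published architecture.

```python
# Minimal sketch of a ConvEEGNN-style binary classifier: 1D convolutions over
# multi-channel EEG followed by a fully connected read-out predicting
# "remembered" vs "forgotten". Layer sizes are assumptions.
import torch
import torch.nn as nn

class ConvEEG(nn.Module):
    def __init__(self, n_channels=32, n_samples=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), 2)

    def forward(self, x):                 # x: (batch, channels, samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ConvEEG()
eeg = torch.randn(8, 32, 256)             # a batch of pre/during-stimulus epochs
logits = model(eeg)
print(logits.shape)                       # (8, 2): remembered vs forgotten scores
```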

  13. LETTER TO THE EDITOR: Phase transition in a random fragmentation problem with applications to computer science

    NASA Astrophysics Data System (ADS)

    Dean, David S.; Majumdar, Satya N.

    2002-08-01

    We study a fragmentation problem where an initial object of size x is broken into m random pieces provided x > x0, where x0 is an atomic cut-off. Subsequently, the fragmentation process continues for each of those daughter pieces whose sizes are bigger than x0. The process stops when all the fragments have sizes smaller than x0. We show that the fluctuation of the total number of splitting events, characterized by the variance, generically undergoes a nontrivial phase transition as one tunes the branching number m through a critical value m = mc. For m < mc, the fluctuations are Gaussian whereas for m > mc they are anomalously large and non-Gaussian. We apply this general result to analyse two different search algorithms in computer science.
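    The process is easy to simulate directly, which is one way to see the anomalous fluctuations; the following Monte Carlo sketch estimates the mean and variance of the number of splitting events for several branching numbers m, with a uniformly random composition of pieces assumed for illustration.

```python
# Minimal sketch of the fragmentation process: an object of size x splits into
# m uniformly random pieces whenever x > x0, and the total number of splitting
# events is recorded. Monte Carlo estimates of its mean and variance let one
# probe the transition as m is tuned.
import random

def split_count(x, m, x0):
    """Number of splitting events for an initial fragment of size x."""
    if x <= x0:
        return 0
    # break x into m pieces with a uniformly random composition
    cuts = sorted(random.random() for _ in range(m - 1))
    bounds = [0.0] + cuts + [1.0]
    pieces = [x * (b - a) for a, b in zip(bounds[:-1], bounds[1:])]
    return 1 + sum(split_count(p, m, x0) for p in pieces)

def moments(m, x0=0.01, trials=1000):
    counts = [split_count(1.0, m, x0) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return mean, var

for m in (2, 3, 4, 5):
    print(m, moments(m))
```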

  14. A workload model and measures for computer performance evaluation

    NASA Technical Reports Server (NTRS)

    Kerner, H.; Kuemmerle, K.

    1972-01-01

    A generalized workload definition is presented which constructs measurable workloads of unit size from workload elements, called elementary processes. An elementary process makes almost exclusive use of one of the processors, CPU, I/O processor, etc., and is measured by the cost of its execution. Various kinds of user programs can be simulated by quantitative composition of elementary processes into a type. The character of the type is defined by the weights of its elementary processes and its structure by the amount and sequence of transitions between its elementary processes. A set of types is batched to a mix. Mixes of identical cost are considered as equivalent amounts of workload. These formalized descriptions of workloads allow investigators to compare the results of different studies quantitatively. Since workloads of different composition are assigned a unit of cost, these descriptions enable determination of cost effectiveness of different workloads on a machine. Subsequently performance parameters such as throughput rate, gain factor, internal and external delay factors are defined and used to demonstrate the effects of various workload attributes on the performance of a selected large scale computer system.
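    A minimal sketch of the workload formalism is given below: elementary processes measured by execution cost, weighted into types, and batched into a mix normalized to unit cost. The process names, costs and weights are invented for illustration, not the paper's calibration.

```python
# Minimal sketch of the workload formalism: elementary processes measured by
# their execution cost, composed with weights into types, and types batched
# into a mix of unit cost. Names and numbers are illustrative assumptions.
ELEMENTARY_COST = {"cpu": 1.0, "io": 2.5, "channel": 0.8}   # cost per invocation

def type_cost(weights):
    """Cost of one execution of a type defined by elementary-process weights."""
    return sum(weights[p] * ELEMENTARY_COST[p] for p in weights)

compute_bound = {"cpu": 8, "io": 1, "channel": 1}
io_bound      = {"cpu": 2, "io": 4, "channel": 2}

def normalize_mix(counts):
    """Scale a batch of types so the whole mix represents one unit of workload."""
    total = sum(n * type_cost(w) for w, n in counts)
    return [(w, n / total) for w, n in counts]

mix = normalize_mix([(compute_bound, 3), (io_bound, 2)])
print([round(n, 4) for _, n in mix])
```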

  15. Coarse-grained models of key self-assembly processes in HIV-1

    NASA Astrophysics Data System (ADS)

    Grime, John

    Computational molecular simulations can elucidate microscopic information that is inaccessible to conventional experimental techniques. However, many processes occur over time and length scales that are beyond the current capabilities of atomic-resolution molecular dynamics (MD). One such process is the self-assembly of the HIV-1 viral capsid, a biological structure that is crucial to viral infectivity. The nucleation and growth of capsid structures requires the interaction of large numbers of capsid proteins within a complicated molecular environment. Coarse-grained (CG) models, where degrees of freedom are removed to produce more computationally efficient models, can in principle access large-scale phenomena such as the nucleation and growth of HIV-1 capsid lattice. We report here studies of the self-assembly behaviors of a CG model of HIV-1 capsid protein, including the influence of the local molecular environment on nucleation and growth processes. Our results suggest a multi-stage process, involving several characteristic structures, eventually producing metastable capsid lattice morphologies that are amenable to subsequent capsid dissociation in order to transmit the viral infection.

  16. Approximate Computing Techniques for Iterative Graph Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also the availability of large scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both the algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
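    As an illustration of one of the heuristics named above, the sketch below applies loop perforation to a plain PageRank iteration by updating only a sampled fraction of the vertices in each pass; the tiny graph, damping factor and sampling rate are illustrative assumptions, not the paper's benchmark setup.

```python
# Minimal sketch of loop perforation applied to PageRank: in each iteration
# only a sampled fraction of vertices is updated, trading accuracy for speed.
import random

def pagerank_perforated(adj, d=0.85, iters=50, keep=0.7):
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        updated = dict(rank)
        # loop perforation: skip a (1 - keep) fraction of the update loop
        for v in random.sample(list(adj), int(keep * n)):
            incoming = sum(rank[u] / len(adj[u]) for u in adj if v in adj[u])
            updated[v] = (1.0 - d) / n + d * incoming
        rank = updated
    return rank

graph = {0: {1, 2}, 1: {2}, 2: {0}, 3: {0, 2}}
print(pagerank_perforated(graph))
```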

  17. Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.

    2001-01-01

    Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.

  18. A study of process parameters on workpiece anisotropy in the laser engineered net shaping (LENSTM) process

    NASA Astrophysics Data System (ADS)

    Chandra, Shubham; Rao, Balkrishna C.

    2017-06-01

    The process of laser engineered net shaping (LENSTM) is an additive manufacturing technique that employs the coaxial flow of metallic powders with a high-power laser to form a melt pool and the subsequent deposition of the specimen on a substrate. Although research done over the past decade on the LENSTM processing of alloys of steel, titanium, nickel and other metallic materials typically reports superior mechanical properties in as-deposited specimens, when compared to the bulk material, there is anisotropy in the mechanical properties of the melt deposit. The current study involves the development of a numerical model of the LENSTM process, using the principles of computational fluid dynamics (CFD), and the subsequent prediction of the volume fraction of equiaxed grains to predict process parameters required for the deposition of workpieces with isotropy in their properties. The numerical simulation is carried out on ANSYS-Fluent, whose data on thermal gradient are used to determine the volume fraction of the equiaxed grains present in the deposited specimen. This study has been validated against earlier efforts on the experimental studies of LENSTM for alloys of nickel. Besides being applicable to the wider family of metals and alloys, the results of this study will also facilitate effective process design to improve both product quality and productivity.

  19. Partitioning in parallel processing of production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oflazer, K.

    1987-01-01

    This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
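
    The partitioning problem sketched above can be approximated with a simple greedy heuristic: assign each rule, in decreasing order of estimated match cost, to the currently least-loaded processor. The cost model and data structures below are hypothetical stand-ins, not the formulation or algorithm from the thesis.

```python
# Hedged sketch of a greedy heuristic for partitioning production rules
# across processors by estimated match cost; the cost model is assumed,
# not the algorithm developed in the thesis.
import heapq

def partition_rules(rule_costs, num_processors):
    """rule_costs: dict mapping rule name -> estimated match cost."""
    # Min-heap of (current load, processor id); assign the heaviest rules first.
    heap = [(0.0, p) for p in range(num_processors)]
    heapq.heapify(heap)
    assignment = {p: [] for p in range(num_processors)}
    for rule, cost in sorted(rule_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(rule)
        heapq.heappush(heap, (load + cost, p))
    return assignment

if __name__ == "__main__":
    costs = {"r1": 5.0, "r2": 3.0, "r3": 2.5, "r4": 1.0, "r5": 0.5}
    print(partition_rules(costs, 3))
```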

  20. Optical Interconnections for VLSI Computational Systems Using Computer-Generated Holography.

    NASA Astrophysics Data System (ADS)

    Feldman, Michael Robert

    Optical interconnects for VLSI computational systems using computer generated holograms are evaluated in theory and experiment. It is shown that by replacing particular electronic connections with free-space optical communication paths, connection of devices on a single chip or wafer and between chips or modules can be improved. Optical and electrical interconnects are compared in terms of power dissipation, communication bandwidth, and connection density. Conditions are determined for which optical interconnects are advantageous. Based on this analysis, it is shown that by applying computer generated holographic optical interconnects to wafer scale fine grain parallel processing systems, dramatic increases in system performance can be expected. Some new interconnection networks, designed to take full advantage of optical interconnect technology, have been developed. Experimental Computer Generated Holograms (CGH's) have been designed, fabricated and subsequently tested in prototype optical interconnected computational systems. Several new CGH encoding methods have been developed to provide efficient high performance CGH's. One CGH was used to decrease the access time of a 1 kilobit CMOS RAM chip. Another was produced to implement the inter-processor communication paths in a shared memory SIMD parallel processor array.

  1. Real-time data-intensive computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander

    2016-07-27

    Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.

  2. Computer simulation of a geomagnetic substorm

    NASA Technical Reports Server (NTRS)

    Lyon, J. G.; Brecht, S. H.; Huba, J. D.; Fedder, J. A.; Palmadesso, P. J.

    1981-01-01

    A global two-dimensional simulation of a substormlike process occurring in earth's magnetosphere is presented. The results are consistent with an empirical substorm model - the neutral-line model. Specifically, the introduction of a southward interplanetary magnetic field forms an open magnetosphere. Subsequently, a substorm neutral line forms at about 15 earth radii or closer in the magnetotail, and plasma sheet thinning and plasma acceleration occur. Eventually the substorm neutral line moves tailward toward its presubstorm position.

  3. Intelligent Array System

    DTIC Science & Technology

    1990-10-01

    While all three of these statistics are computed and archived in the IIMS, only the variance is used in subsequent steps in the processing.

  4. Process Integrated Mechanism for Human-Computer Collaboration and Coordination

    DTIC Science & Technology

    2012-09-12

    system we implemented the TAFLib library that provides the communication with TAF. The data received from the TAF server is collected in a data structure...send new commands and flight plans for the UAVs to the TAF server. Test scenarios: Several scenarios have been implemented to test and prove our...areas. Shooting Enemies: The basic scenario proved the successful integration of PIM and the TAF simulation environment. Subsequently we improved the CP

  5. Java Tool Framework for Automation of Hardware Commissioning and Maintenance Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, J C; Fisher, J M; Gordon, J B

    2007-10-02

    The National Ignition Facility (NIF) is a 192-beam laser system designed to study high energy density physics. Each beam line contains a variety of line replaceable units (LRUs) that contain optics, stepping motors, sensors and other devices to control and diagnose the laser. During commissioning and subsequent maintenance of the laser, LRUs undergo a qualification process using the Integrated Computer Control System (ICCS) to verify and calibrate the equipment. The commissioning processes are both repetitive and tedious when we use remote manual computer controls, making them ideal candidates for software automation. Maintenance and Commissioning Tool (MCT) software was developed to improve the efficiency of the qualification process. The tools are implemented in Java, leveraging ICCS services and CORBA to communicate with the control devices. The framework provides easy-to-use mechanisms for handling configuration data, task execution, task progress reporting, and generation of commissioning test reports. The tool framework design and application examples will be discussed.

  6. Spacesuit glove manufacturing enhancements through the use of advanced technologies

    NASA Astrophysics Data System (ADS)

    Cadogan, David; Bradley, David; Kosmo, Joseph

    The success of astronauts performing extravehicular activity (EVA) on orbit is highly dependent upon the performance of their spacesuit gloves. A study has recently been conducted to advance the development and manufacture of spacesuit gloves. The process replaces the manual techniques of spacesuit glove manufacture by utilizing emerging technologies such as laser scanning, Computer Aided Design (CAD), computer-generated two-dimensional patterns from three-dimensional surfaces, rapid prototyping technology, and laser cutting of materials, to manufacture the new gloves. Results of the program indicate that the baseline process will not increase the cost of the gloves as compared to the existing styles and, in production, may reduce the cost of the gloves. Perhaps the most important outcome of the Laserscan process is that greater accuracy and design control can be realized. Greater accuracy was achieved in the baseline anthropometric measurement and CAD data measurement, which subsequently improved the design features. This effectively enhances glove performance through better fit and comfort.

  7. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  8. Simulating coupled dynamics of a rigid-flexible multibody system and compressible fluid

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Tian, Qiang; Hu, HaiYan

    2018-04-01

    As a follow-up to the authors' previous studies, a new parallel computation approach is proposed to simulate the coupled dynamics of a rigid-flexible multibody system and a compressible fluid. In this approach, the smoothed particle hydrodynamics (SPH) method is used to model the compressible fluid, while the natural coordinate formulation (NCF) and absolute nodal coordinate formulation (ANCF) are used to model the rigid and flexible bodies, respectively. In order to model the compressible fluid properly and efficiently via the SPH method, three measures are taken. The first is to use a Riemann solver to cope with the fluid compressibility, the second is to define virtual SPH particles to model the dynamic interaction between the fluid and the multibody system, and the third is to impose periodic inflow and outflow boundary conditions to reduce the number of SPH particles involved in the computation. Afterwards, a parallel computation strategy based on the graphics processing unit (GPU) is proposed to detect neighboring SPH particles and to solve the dynamic equations of the SPH particles in order to improve computational efficiency. Meanwhile, the generalized-alpha algorithm is used to solve the dynamic equations of the multibody system. Finally, four case studies are given to validate the proposed parallel computation approach.
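
    To give a flavour of the neighbor-detection step that the paper offloads to the GPU, the sketch below builds a uniform cell list on the CPU and gathers, for each SPH particle, the neighbors within one smoothing length. The cell size, smoothing length, and array layout are assumptions for illustration; the authors' GPU kernels are not reproduced.

```python
# CPU sketch of cell-list neighbor detection for SPH particles; the GPU
# kernels used in the paper are not reproduced here, and the smoothing
# length and data layout are illustrative assumptions.
import numpy as np
from collections import defaultdict

def find_neighbors(positions, h):
    """Return {i: [j, ...]} of particle pairs closer than smoothing length h."""
    cell_size = h
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        cells[tuple((p // cell_size).astype(int))].append(i)
    neighbors = defaultdict(list)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    for i, p in enumerate(positions):
        c = tuple((p // cell_size).astype(int))
        for off in offsets:                      # scan the 27 surrounding cells
            for j in cells.get(tuple(np.add(c, off)), []):
                if j != i and np.linalg.norm(positions[j] - p) < h:
                    neighbors[i].append(j)
    return neighbors

if __name__ == "__main__":
    pts = np.random.rand(500, 3)
    nbrs = find_neighbors(pts, h=0.1)
    print("average neighbor count:", sum(len(v) for v in nbrs.values()) / len(pts))
```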

  9. Descriptive and Criterion-Referenced Self-Assessment with L2 Readers

    ERIC Educational Resources Information Center

    Brantmeier, Cindy; Vanderplank, Robert

    2008-01-01

    Brantmeier [Brantmeier, C., 2006. "Advanced L2 learners and reading placement: self-assessment, computer-based testing, and subsequent performance." System 34(1), 15-35] found that self-assessment (SA) of second language (L2) reading ability is not an accurate predictor for computer-based testing or subsequent classroom performance. With 359…

  10. The role of water molecules in computational drug design.

    PubMed

    de Beer, Stephanie B A; Vermeulen, Nico P E; Oostenbrink, Chris

    2010-01-01

    Although water molecules are small and only consist of two different atom types, they play various roles in cellular systems. This review discusses their influence on the binding process between biomacromolecular targets and small molecule ligands and how this influence can be modeled in computational drug design approaches. Both the structure and the thermodynamics of active site waters will be discussed as these influence the binding process significantly. Structurally conserved waters cannot always be determined experimentally and if observed, it is not clear if they will be replaced upon ligand binding, even if sufficient space is available. Methods to predict the presence of water in protein-ligand complexes will be reviewed. Subsequently, we will discuss methods to include water in computational drug research. Either as an additional factor in automated docking experiments, or explicitly in detailed molecular dynamics simulations, the effect of water on the quality of the simulations is significant, but not easily predicted. The most detailed calculations involve estimates of the free energy contribution of water molecules to protein-ligand complexes. These calculations are computationally demanding, but give insight in the versatility and importance of water in ligand binding.

  11. Techniques and potential capabilities of multi-resolutional information (knowledge) processing

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes, irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed, satisfaction of which is required for organization and processing of redundant information (knowledge) in the multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.

  12. Computational and experimental analysis of DNA shuffling

    PubMed Central

    Maheshri, Narendra; Schaffer, David V.

    2003-01-01

    We describe a computational model of DNA shuffling based on the thermodynamics and kinetics of this process. The model independently tracks a representative ensemble of DNA molecules and records their states at every stage of a shuffling reaction. These data can subsequently be analyzed to yield information on any relevant metric, including reassembly efficiency, crossover number, type and distribution, and DNA sequence length distributions. The predictive ability of the model was validated by comparison to three independent sets of experimental data, and analysis of the simulation results led to several unique insights into the DNA shuffling process. We examine a tradeoff between crossover frequency and reassembly efficiency and illustrate the effects of experimental parameters on this relationship. Furthermore, we discuss conditions that promote the formation of useless “junk” DNA sequences or multimeric sequences containing multiple copies of the reassembled product. This model will therefore aid in the design of optimal shuffling reaction conditions. PMID:12626764

  13. NBS computerized carpool matching system: users' guide. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilsinn, J.F.; Landau, S.

    1974-12-01

    The report includes flowcharts, input/output formats, and program listings for the programs, plus details of the manual process for coordinate coding. The matching program produces, for each person desiring it, a list of others residing within a pre-specified distance of him, and is thus applicable to a single work destination having primarily one work schedule. The system is currently operational on the National Bureau of Standards' UNIVAC 1108 computer and was run in March of 1974, producing lists for about 950 employees in less than four minutes computer time. Subsequent maintenance of the system will be carried out by the NBS Management and Organization Division. (GRA)
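
    The matching step itself is easy to restate in modern terms: for each participant, list all others whose home coordinates fall within the pre-specified distance. The planar coordinates and threshold below are assumptions used only to illustrate the idea, not the internals of the original UNIVAC program.

```python
# Hedged sketch of the carpool matching step: for each employee, list all
# others living within a given radius.  Planar coordinates and the distance
# threshold are assumptions; the original UNIVAC program is not reproduced.
import math

def match_carpools(coords, max_dist):
    """coords: dict name -> (x, y) in consistent distance units."""
    matches = {}
    for name, (x, y) in coords.items():
        matches[name] = [
            other for other, (ox, oy) in coords.items()
            if other != name and math.hypot(x - ox, y - oy) <= max_dist
        ]
    return matches

if __name__ == "__main__":
    employees = {"Ann": (0.0, 0.0), "Bob": (1.0, 1.0), "Cara": (5.0, 5.0)}
    print(match_carpools(employees, max_dist=2.0))
```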

  14. Strategy and gaps for modeling, simulation, and control of hybrid systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabiti, Cristian; Garcia, Humberto E.; Hovsapian, Rob

    2015-04-01

    The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation are necessary to design, evaluate, and optimize the system's technical and economic performance. Accordingly, this report first establishes the simulation requirements for analyzing candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and output parameters needed to evaluate case-specific figures of merit. The associated computational and co-simulation resources needed are then established, including physical models when needed, code assembly and integrated solution platforms, mathematical solvers, and data processing. The report first attempts to describe the figures of merit, system requirements, and constraints that are necessary and sufficient to characterize grid and hybrid system behavior and market interactions. Loss of Load Probability (LOLP) and Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of the technical and economic performance are subsequently discussed. This report further defines modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize: (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical or chemical energy generation, conversion, and transport), and (c) system control modules. Co-simulation of complex, tightly coupled, dynamic energy systems requires multiple simulation tools, potentially developed in several programming languages and resolved on separate time scales. Whereas further investigation and development of hybrid concepts will provide a more complete understanding of the joint computational and physical modeling needs, this report highlights areas in which co-simulation capabilities are warranted. The current development status, quality assurance, availability and maintainability of simulation tools that are currently available for hybrid systems modeling are presented. Existing gaps in the modeling and simulation toolsets and development needs are subsequently discussed. This effort will feed into a broader Roadmap activity for designing, developing, and demonstrating hybrid energy systems.

  15. 3-D laser patterning process utilizing horizontal and vertical patterning

    DOEpatents

    Malba, Vincent; Bernhardt, Anthony F.

    2000-01-01

    A process which vastly improves the 3-D patterning capability of laser pantography (computer controlled laser direct-write patterning). The process uses commercially available electrodeposited photoresist (EDPR) to pattern 3-D surfaces. The EDPR covers the surface of a metal layer conformally, coating the vertical as well as horizontal surfaces. A laser pantograph then patterns the EDPR, which is subsequently developed in a standard, commercially available developer, leaving patterned trench areas in the EDPR. The metal layer thereunder is now exposed in the trench areas and masked in others, and thereafter can be etched to form the desired pattern (subtractive process), or can be plated with metal (additive process), followed by a resist stripping, and removal of the remaining field metal (additive process). This improved laser pantograph process is simpler, faster, more manufacturable, and requires no micro-machining.

  16. Benefit from NASA

    NASA Image and Video Library

    2001-09-01

    The high-tech art of digital signal processing (DSP) was pioneered at NASA's Jet Propulsion Laboratory (JPL) in the mid-1960s for use in the Apollo Lunar Landing Program. Designed to computer enhance pictures of the Moon, this technology became the basis for the Landsat Earth resources satellites and subsequently has been incorporated into a broad range of Earthbound medical and diagnostic tools. DSP is employed in advanced body imaging techniques including Computer-Aided Tomography, also known as CT and CATScan, and Magnetic Resonance Imaging (MRI). CT images are collected by irradiating a thin slice of the body with a fan-shaped x-ray beam from a number of directions around the body's perimeter. A tomographic (slice-like) picture is reconstructed from these multiple views by a computer. MRI employs a magnetic field and radio waves, rather than x-rays, to create images.

  17. Patterns in the sky: Natural visualization of aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Campbell, James F.; Chambers, Joseph R.

    1994-01-01

    The objective of the current publication is to present the collection of flight photographs to illustrate the types of flow patterns that were visualized and to present qualitative correlations with computational and wind tunnel results. Initially in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and factors which determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed by using water-vapor relationships to determine if computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields that was visualized for a wide range of aircraft and flight maneuvers.

  18. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.

  19. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
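
    The flavour of such a screening step can be illustrated with a one-at-a-time elementary-effects pass: each parameter is perturbed in turn around a handful of base points, and parameters whose effect on the model output stays below a threshold are flagged as noninformative. The perturbation scheme, threshold, and single-pass structure below are simplifications, not the published sequential algorithm.

```python
# Simplified one-at-a-time screening sketch in the spirit of the sequential
# screening described above; the perturbation size, threshold, and single-pass
# structure are assumptions and not the published algorithm.
import numpy as np

def screen_parameters(model, lower, upper, threshold, n_base=10, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n_par = lower.size
    effects = np.zeros(n_par)
    for _ in range(n_base):
        x = lower + rng.random(n_par) * (upper - lower)   # random base point
        y0 = model(x)
        for i in range(n_par):
            xp = x.copy()
            xp[i] = min(xp[i] + step * (upper[i] - lower[i]), upper[i])
            effects[i] += abs(model(xp) - y0)             # elementary effect of parameter i
    effects /= n_base
    informative = effects > threshold
    return informative, effects                           # ~n_base * (n_par + 1) model runs

if __name__ == "__main__":
    def toy_model(p):                                     # only the first two parameters matter
        return 3.0 * p[0] + p[1] ** 2 + 1e-6 * p[2]
    mask, eff = screen_parameters(toy_model, [0, 0, 0], [1, 1, 1], threshold=0.05)
    print(mask, eff)
```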

  20. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  1. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two cases of use: First, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated the successful incorporation of it with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high throughput computation.
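
    The automated selection step can be paraphrased as follows: score every candidate vector by the ROC AUC it achieves in separating the user-marked positive regions from the negative ones, and keep the best-scoring vector. The similarity function and data layout in the sketch are placeholders; the SIVQ ring-vector matching itself is not reimplemented.

```python
# Hedged paraphrase of the automated vector-selection idea: rank candidate
# vectors by ROC AUC against labelled positive/negative regions.  The
# similarity scores are assumed inputs; SIVQ's ring-vector matching itself
# is not reimplemented here.
import numpy as np

def auc_from_scores(pos_scores, neg_scores):
    """Rank-based AUC: probability that a positive region outranks a negative one."""
    pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (pos.size * neg.size)

def select_best_vector(candidates, match_fn, pos_regions, neg_regions):
    """candidates: list of vectors; match_fn(v, region) -> similarity score."""
    best_vec, best_auc = None, -1.0
    for v in candidates:
        auc = auc_from_scores([match_fn(v, r) for r in pos_regions],
                              [match_fn(v, r) for r in neg_regions])
        if auc > best_auc:
            best_vec, best_auc = v, auc
    return best_vec, best_auc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cands = [rng.random(8) for _ in range(20)]
    toy_match = lambda v, region: float(np.dot(v, region))   # stand-in similarity measure
    pos = [rng.random(8) + 0.5 for _ in range(30)]
    neg = [rng.random(8) for _ in range(30)]
    print("best AUC:", select_best_vector(cands, toy_match, pos, neg)[1])
```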

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dang, Liem X.; Vo, Quynh N.; Nilsson, Mikael

    We report one of the first simulations using a classical rate theory approach to predict the mechanism of the exchange process between water and aqueous uranyl ions. Using our water and ion-water polarizable force fields and molecular dynamics techniques, we computed the potentials of mean force for the uranyl ion-water pair as a function of pressure at ambient temperature. Subsequently, these simulated potentials of mean force were used to calculate rate constants using transition rate theory; the time-dependent transmission coefficients were also examined using the reactive flux method and Grote-Hynes treatments of the dynamic response of the solvent. The activation volumes computed using transition rate theory and the corrected rate constants are positive; thus the mechanism of this particular water exchange is a dissociative process. We discuss our rate theory results and compare them with previous studies in which non-polarizable force fields were used. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.
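
    A minimal version of the rate-theory step can be written down directly: given a potential of mean force W(r) along the ion-water separation, a one-dimensional transition-state-theory rate is the thermal crossing flux at the barrier divided by the configurational integral over the reactant well. The analytic PMF, temperature, and reduced mass below are illustrative stand-ins for the simulated curves, and the reactive-flux and Grote-Hynes corrections are omitted.

```python
# Hedged sketch of a 1D transition-state-theory rate constant computed from a
# potential of mean force W(r): crossing flux at the barrier divided by the
# reactant-well population.  The analytic W(r), temperature, and reduced mass
# are illustrative stand-ins for the simulated PMFs described above.
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K

def tst_rate(r, pmf, mu, T, barrier_index):
    """1D TST escape rate from a PMF: r [m], pmf [J], reduced mass mu [kg]."""
    beta = 1.0 / (KB * T)
    dr = r[1] - r[0]
    well = slice(0, barrier_index)                       # reactant region up to the barrier
    population = np.sum(np.exp(-beta * pmf[well])) * dr  # configurational integral over the well
    flux = np.sqrt(KB * T / (2.0 * np.pi * mu))          # mean forward thermal speed
    return flux * np.exp(-beta * pmf[barrier_index]) / population

if __name__ == "__main__":
    r = np.linspace(2.0e-10, 8.0e-10, 800)               # ion-water separation, m
    De = 25e3 / 6.022e23                                  # ~25 kJ/mol barrier expressed in J
    a, r0 = 2.0e10, 2.4e-10
    pmf = De * (1.0 - np.exp(-a * (r - r0))) ** 2         # toy Morse-like PMF (dissociative)
    mu = 3.0e-26                                          # roughly one water molecule's mass, kg
    print("k_TST ~ %.2e 1/s" % tst_rate(r, pmf, mu, T=298.15, barrier_index=len(r) - 1))
```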

  3. Macroscopic aspects of interfacial reactions

    NASA Technical Reports Server (NTRS)

    Heckel, R. W.

    1976-01-01

    The extent of interdiffusion and formation of new phases is determined by the constitution diagram of the alloy system, the interdiffusion coefficients of the phases present, and the thermal conditions (temperature and time) associated with the bonding process and/or subsequent use of the bonded structure. In many instances, the kinetics of interdiffusion and phase formation can be predicted from known parameters using numerical methods and computer techniques. Predictions are compared with experimentally determined parameters for a variety of metallurgical alloy systems.

  4. Modeling cell adhesion and proliferation: a cellular-automata based approach.

    PubMed

    Vivas, J; Garzón-Alvarado, D; Cerrolaza, M

    Cell adhesion is a process that involves the interaction between the cell membrane and another surface, either a cell or a substrate. Unlike experimental tests, computer models can simulate processes and study the result of experiments in a shorter time and at lower cost. One of the tools used to simulate biological processes is the cellular automaton, a dynamic system that is discrete in both space and time. This work describes a computer model based on cellular automata for the adhesion process and cell proliferation to predict the behavior of a cell population in suspension and adhered to a substrate. The values of the simulated system were obtained through experimental tests on fibroblast monolayer cultures. The results allow us to estimate the cells' settling time in culture as well as the adhesion and proliferation times. The change in cell morphology as adhesion over the contact surface progresses was also observed. The formation of the initial link between the cell and the substrate was observed after 100 min, with the cell retaining its spherical morphology during the simulation. The cellular automaton model developed is, however, a simplified representation of the steps in the adhesion process and the subsequent proliferation. A combined framework of experimental and computational simulation based on cellular automata was proposed to represent fibroblast adhesion on substrates and the macro-scale changes observed in the cell during the adhesion process. The approach proved to be simple and efficient.
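
    A toy version of such a cellular automaton is sketched below on a two-dimensional substrate grid: suspended cells settle onto empty sites with one probability, and adhered cells divide into a random empty neighbor site with another. The grid size, probabilities, and update rule are assumptions for illustration, not the calibrated model described in the article.

```python
# Toy cellular-automaton sketch of adhesion followed by proliferation on a
# substrate grid.  Grid size, probabilities, and update order are assumed for
# illustration and are not the calibrated model described in the article.
import random

EMPTY, ADHERED = 0, 1

def step(grid, suspended, p_adhere=0.05, p_divide=0.02):
    n = len(grid)
    # Adhesion: each suspended cell tries a random site and may settle if it is empty.
    still_suspended = 0
    for _ in range(suspended):
        i, j = random.randrange(n), random.randrange(n)
        if grid[i][j] == EMPTY and random.random() < p_adhere:
            grid[i][j] = ADHERED
        else:
            still_suspended += 1
    # Proliferation: adhered cells may divide into an empty von Neumann neighbor (in-place sweep).
    for i in range(n):
        for j in range(n):
            if grid[i][j] == ADHERED and random.random() < p_divide:
                nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                empty = [(a, b) for a, b in nbrs if 0 <= a < n and 0 <= b < n and grid[a][b] == EMPTY]
                if empty:
                    a, b = random.choice(empty)
                    grid[a][b] = ADHERED
    return grid, still_suspended

if __name__ == "__main__":
    grid = [[EMPTY] * 50 for _ in range(50)]
    suspended = 200
    for t in range(100):
        grid, suspended = step(grid, suspended)
    print("adhered cells:", sum(map(sum, grid)), "still suspended:", suspended)
```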

  5. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
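
    The task-queue organization described above maps naturally onto a shared work queue drained by idle workers. In the sketch below, Python threads and a standard queue stand in for the array processors and the task queue server, purely to illustrate the control flow; it is not the original i860 implementation.

```python
# Illustrative work-queue skeleton for the architecture described above:
# idle workers repeatedly fetch vision subtasks from a shared queue and post
# results to shared storage.  Python threads stand in for the array
# processors; this is not the original i860 implementation.
import queue
import threading

def worker(tasks, results):
    while True:
        task = tasks.get()
        if task is None:                 # poison pill: no more work
            tasks.task_done()
            return
        name, func, data = task
        results[name] = func(data)       # post result to shared storage
        tasks.task_done()

def run_vision_pipeline(task_list, num_workers=4):
    tasks, results = queue.Queue(), {}
    threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for task in task_list:
        tasks.put(task)
    for _ in threads:
        tasks.put(None)
    tasks.join()
    return results

if __name__ == "__main__":
    blur = lambda img: [v // 2 for v in img]                   # stand-in vision operations
    edge = lambda img: [abs(a - b) for a, b in zip(img, img[1:])]
    out = run_vision_pipeline([("blur", blur, [8, 6, 4]), ("edges", edge, [1, 5, 2])])
    print(out)
```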

  6. Hand-held computer operating system program for collection of resident experience data.

    PubMed

    Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J

    2000-11-01

    To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3 Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike, than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.

  7. Creation of a computer self-efficacy measure: analysis of internal consistency, psychometric properties, and validity.

    PubMed

    Howard, Matt C

    2014-10-01

    Computer self-efficacy is an often studied construct that has been shown to be related to an array of important individual outcomes. Unfortunately, existing measures of computer self-efficacy suffer from several deficiencies, including criterion contamination, outdated wording, and/or inadequate psychometric properties. For this reason, the current article presents the creation of a new computer self-efficacy measure. In Study 1, an over-representative item list is created and subsequently reduced through exploratory factor analysis to create an initial measure, and the discriminant validity of this initial measure is tested. In Study 2, the unidimensional factor structure of the initial measure is supported through confirmatory factor analysis and further reduced into a final, 12-item measure. In Study 3, the convergent and criterion validity of the 12-item measure is tested. Overall, this three study process demonstrates that the new computer self-efficacy measure has superb psychometric properties and internal reliability, and demonstrates excellent evidence for several aspects of validity. It is hoped that the 12-item computer self-efficacy measure will be utilized in future research on computer self-efficacy, which is discussed in the current article.

  8. Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.

    PubMed

    Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine

    2017-03-22

    Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, including the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants that have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language learning. Results from this study suggest that brain responses to deviant sounds in an oddball paradigm follow a cascade of oscillatory modulations. This cascade begins with a gamma response that later emerges as a beta synchronization, which is temporally coupled with a theta modulation, and followed by a second, subsequent theta modulation. The difference in frequency and timing of the theta modulations appears to reflect a measure of surprise. These insights into the neurophysiological mechanisms of auditory discrimination provide a basis for exploring the clinical utility of the MMR TF and other auditory oddball responses.

  9. Validation of Computational Models in Biomechanics

    PubMed Central

    Henninger, Heath B.; Reese, Shawn P.; Anderson, Andrew E.; Weiss, Jeffrey A.

    2010-01-01

    The topics of verification and validation (V&V) have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. V&V are evolving techniques that, if used improperly, can lead to false conclusions about a system under study. In basic science these erroneous conclusions may lead to failure of a subsequent hypothesis, but they can have more profound effects if the model is designed to predict patient outcomes. While several authors have reviewed V&V as they pertain to traditional solid and fluid mechanics, it is the intent of this manuscript to present them in the context of computational biomechanics. Specifically, the task of model validation will be discussed with a focus on current techniques. It is hoped that this review will encourage investigators to engage and adopt the V&V process in an effort to increase peer acceptance of computational biomechanics models. PMID:20839648

  10. dropEst: pipeline for accurate estimation of molecular counts in droplet-based single-cell RNA-seq experiments.

    PubMed

    Petukhov, Viktor; Guo, Jimin; Baryawno, Ninib; Severe, Nicolas; Scadden, David T; Samsonova, Maria G; Kharchenko, Peter V

    2018-06-19

    Recent single-cell RNA-seq protocols based on droplet microfluidics use massively multiplexed barcoding to enable simultaneous measurements of transcriptomes for thousands of individual cells. The increasing complexity of such data creates challenges for subsequent computational processing and troubleshooting of these experiments, with few software options currently available. Here, we describe a flexible pipeline for processing droplet-based transcriptome data that implements barcode corrections, classification of cell quality, and diagnostic information about the droplet libraries. We introduce advanced methods for correcting composition bias and sequencing errors affecting cellular and molecular barcodes to provide more accurate estimates of molecular counts in individual cells.
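
    One ingredient of such a pipeline, correction of cell barcodes against a whitelist, can be sketched as a Hamming-distance match: exact hits pass, barcodes within one mismatch of a single whitelist entry are rescued, and ambiguous ones are discarded. This is an illustrative simplification, not dropEst's actual statistical error model.

```python
# Simplified sketch of cell-barcode correction against a whitelist: exact
# matches pass, single-mismatch barcodes are rescued only if unambiguous.
# This illustrates the idea only; dropEst's actual error model is richer.
def hamming1_candidates(barcode, whitelist):
    return [wl for wl in whitelist
            if len(wl) == len(barcode)
            and sum(a != b for a, b in zip(wl, barcode)) == 1]

def correct_barcode(barcode, whitelist):
    if barcode in whitelist:
        return barcode
    candidates = hamming1_candidates(barcode, whitelist)
    return candidates[0] if len(candidates) == 1 else None   # None = discard as ambiguous/unknown

if __name__ == "__main__":
    wl = {"ACGT", "TTAG", "GGCC"}
    for bc in ("ACGT", "ACGA", "TTAA", "AAAA"):
        print(bc, "->", correct_barcode(bc, wl))
```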

  11. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    PubMed

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.
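
    As a concrete example of the swarm-intelligence metaheuristics surveyed, the sketch below implements a bare-bones particle swarm optimizer for function minimization. The inertia and acceleration coefficients are common textbook defaults, not values recommended by the review.

```python
# Bare-bones particle swarm optimization (PSO) sketch for function
# minimization.  Coefficient values are common textbook defaults and are
# assumptions, not settings taken from the review.
import numpy as np

def pso(f, lower, upper, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = lower + rng.random((n_particles, dim)) * (upper - lower)
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))
    best, best_val = pso(sphere, [-5, -5, -5], [5, 5, 5])
    print(best, best_val)
```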

  12. Multi-agent grid system Agent-GRID with dynamic load balancing of cluster nodes

    NASA Astrophysics Data System (ADS)

    Satymbekov, M. N.; Pak, I. T.; Naizabayeva, L.; Nurzhanov, Ch. A.

    2017-12-01

    This work presents a system designed for automated load balancing of the cluster by analysing the load of compute nodes and subsequently migrating virtual machines from loaded nodes to less loaded ones. The system increases the performance of cluster nodes and helps in the timely processing of data. The grid system balances the work of cluster nodes; the relevance of the system lies in the application of multi-agent balancing to the solution of such problems.
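
    The balancing policy can be illustrated with a simple rule: when a node's load exceeds a threshold, migrate its smallest virtual machine to the currently least-loaded node. The load metric, threshold, and data structures below are assumptions; the multi-agent logic of Agent-GRID is not shown.

```python
# Illustrative load-balancing rule: if a node exceeds a load threshold,
# migrate its smallest VM to the least-loaded node.  The load metric and
# threshold are assumptions; the multi-agent Agent-GRID logic is not shown.
def rebalance(nodes, threshold=0.8):
    """nodes: dict node -> {vm_name: load_fraction}; returns list of migrations."""
    migrations = []
    loads = {n: sum(vms.values()) for n, vms in nodes.items()}
    for node, vms in nodes.items():
        while loads[node] > threshold and vms:
            target = min(loads, key=loads.get)
            if target == node:
                break
            vm = min(vms, key=vms.get)               # move the smallest VM first
            load = vms.pop(vm)
            nodes[target][vm] = load
            loads[node] -= load
            loads[target] += load
            migrations.append((vm, node, target))
    return migrations

if __name__ == "__main__":
    cluster = {"n1": {"vmA": 0.5, "vmB": 0.4, "vmC": 0.2}, "n2": {"vmD": 0.1}, "n3": {}}
    print(rebalance(cluster))
    print(cluster)
```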

  13. The Application of a Massively Parallel Computer to the Simulation of Electrical Wave Propagation Phenomena in the Heart Muscle Using Simplified Models

    NASA Technical Reports Server (NTRS)

    Karpoukhin, Mikhii G.; Kogan, Boris Y.; Karplus, Walter J.

    1995-01-01

    Simulations of heart arrhythmia and fibrillation are very important and challenging tasks. The solution of these problems using sophisticated mathematical models is beyond the capabilities of modern supercomputers. To overcome these difficulties, it is proposed to break the whole simulation problem into two tightly coupled stages: generation of the action potential using sophisticated models, and propagation of the action potential using simplified models. The well-known simplified models are compared and modified to bring the rate of depolarization and action potential duration restitution closer to reality. The modified method of lines is used to parallelize the computational process. The conditions for the appearance of 2D spiral waves after the application of a premature beat and the subsequent traveling of the spiral wave inside the simulated tissue are studied.
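
    The flavour of the simplified-model stage can be seen in a method-of-lines sketch of a two-dimensional FitzHugh-Nagumo medium: space is discretized onto a grid and the resulting ordinary differential equations are advanced with explicit Euler steps, with a premature stimulus used to break a plane wave. The model, its parameters, and the stimulation protocol are generic textbook choices, not those of the paper.

```python
# Method-of-lines sketch of a simplified excitable medium (FitzHugh-Nagumo)
# on a 2D grid with explicit Euler time stepping.  Parameters and the premature
# stimulus protocol are generic textbook choices, not the paper's model.
import numpy as np

def laplacian(u, h):
    # Five-point stencil with periodic boundaries via np.roll.
    lap = -4.0 * u
    lap += np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1) + np.roll(u, -1, 1)
    return lap / (h * h)

def simulate(n=128, steps=4000, dt=0.02, h=0.5, D=1.0, a=0.1, eps=0.02, b=0.5):
    u = np.zeros((n, n))        # fast (excitation) variable
    v = np.zeros((n, n))        # slow (recovery) variable
    u[:, :5] = 1.0              # plane-wave stimulus along one edge
    for step in range(steps):
        if step == 1500:
            u[: n // 2, :] = 0.0        # premature stimulus: reset half the tissue to break the wave
        du = D * laplacian(u, h) + u * (1.0 - u) * (u - a) - v
        dv = eps * (b * u - v)
        u, v = u + dt * du, v + dt * dv
    return u

if __name__ == "__main__":
    u_final = simulate()
    print("excited fraction:", float((u_final > 0.5).mean()))
```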

  14. Benefit from NASA

    NASA Image and Video Library

    2001-01-01

    The high-tech art of digital signal processing (DSP) was pioneered at NASA's Jet Propulsion Laboratory (JPL) in the mid-1960s for use in the Apollo Lunar Landing Program. Designed to computer enhance pictures of the Moon, this technology became the basis for the Landsat Earth resources satellites and subsequently has been incorporated into a broad range of Earthbound medical and diagnostic tools. DSP is employed in advanced body imaging techniques including Computer-Aided Tomography, also known as CT and CATScan, and Magnetic Resonance Imaging (MRI). CT images are collected by irradiating a thin slice of the body with a fan-shaped x-ray beam from a number of directions around the body's perimeter. A tomographic (slice-like) picture is reconstructed from these multiple views by a computer. MRI employs a magnetic field and radio waves, rather than x-rays, to create images. In this photograph, a patient undergoes an open MRI.

  15. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate computing analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. As per results of the evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads.

  16. Infant Statistical Learning

    PubMed Central

    Saffran, Jenny R.; Kirkham, Natasha Z.

    2017-01-01

    Perception involves making sense of a dynamic, multimodal environment. In the absence of mechanisms capable of exploiting the statistical patterns in the natural world, infants would face an insurmountable computational problem. Infant statistical learning mechanisms facilitate the detection of structure. These abilities allow the infant to compute across elements in their environmental input, extracting patterns for further processing and subsequent learning. In this selective review, we summarize findings that show that statistical learning is both a broad and flexible mechanism (supporting learning from different modalities across many different content areas) and input specific (shifting computations depending on the type of input and goal of learning). We suggest that statistical learning not only provides a framework for studying language development and object knowledge in constrained laboratory settings, but also allows researchers to tackle real-world problems, such as multilingualism, the role of ever-changing learning environments, and differential developmental trajectories. PMID:28793812

  17. Acceleration of GPU-based Krylov solvers via data transfer reduction

    DOE PAGES

    Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...

    2015-04-08

    Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well-optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels, instead of using the generic BLAS kernels provided, e.g., by NVIDIA's cuBLAS library, and by designing a graphics-processing-unit-specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics-processing-unit-accelerated Krylov subspace iterative methods.

  18. DMD: a digital light processing application to projection displays

    NASA Astrophysics Data System (ADS)

    Feather, Gary A.

    1989-01-01

    Revolutionary technologies achieve rapid product and subsequent business diffusion only when the inventors focus on technology application, maturation, and proliferation. A revolutionary technology is emerging with micro-electromechanical systems (MEMS). MEMS are being developed by leveraging mature semiconductor processing coupled with mechanical systems into complete, integrated, useful systems. The digital micromirror device (DMD), a Texas Instruments-invented MEMS, has focused on its application to projection displays. The DMD has demonstrated its application as a digital light processor, processing and producing compelling computer and video projection displays. This tutorial discusses requirements in the projection display market and the potential solutions offered by this digital light processing system. The seminar includes an evaluation of the market, system needs, design, fabrication, application, and performance results of a system using digital light processing solutions.

  19. Data Mining Citizen Science Results

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2012-12-01

    Scientific discovery from big data is enabled through multiple channels, including data mining (through the application of machine learning algorithms) and human computation (commonly implemented through citizen science tasks). We will describe the results of new data mining experiments on the results from citizen science activities. Discovering patterns, trends, and anomalies in data are among the powerful contributions of citizen science. Establishing scientific algorithms that can subsequently re-discover the same types of patterns, trends, and anomalies in automatic data processing pipelines will ultimately result from the transformation of those human algorithms into computer algorithms, which can then be applied to much larger data collections. Scientific discovery from big data is thus greatly amplified through the marriage of data mining with citizen science.

  20. Neuromorphic sensory systems.

    PubMed

    Liu, Shih-Chii; Delbruck, Tobi

    2010-06-01

    Biology provides examples of efficient machines which greatly outperform conventional technology. Designers in neuromorphic engineering aim to construct electronic systems with the same efficient style of computation. This task requires a melding of novel engineering principles with knowledge gleaned from neuroscience. We discuss recent progress in realizing neuromorphic sensory systems which mimic the biological retina and cochlea, and subsequent sensor processing. The main trends are the increasing number of sensors and sensory systems that communicate through asynchronous digital signals analogous to neural spikes; the improved performance and usability of these sensors; and novel sensory processing methods which capitalize on the timing of spikes from these sensors. Experiments using these sensors can impact how we think the brain processes sensory information.

  1. Automated Rapid Prototyping of 3D Ceramic Parts

    NASA Technical Reports Server (NTRS)

    McMillin, Scott G.; Griffin, Eugene A.; Griffin, Curtis W.; Coles, Peter W. H.; Engle, James D.

    2005-01-01

    An automated system of manufacturing equipment produces three-dimensional (3D) ceramic parts specified by computational models of the parts. The system implements an advanced, automated version of a generic rapid-prototyping process in which the fabrication of an object having a possibly complex 3D shape includes stacking of thin sheets, the outlines of which closely approximate the horizontal cross sections of the object at their respective heights. In this process, the thin sheets are made of a ceramic precursor material, and the stack is subsequently heated to transform it into a unitary ceramic object. In addition to the computer used to generate the computational model of the part to be fabricated, the equipment used in this process includes: 1) A commercially available laminated-object-manufacturing machine that was originally designed for building woodlike 3D objects from paper and was modified to accept sheets of ceramic precursor material, and 2) A machine designed specifically to feed single sheets of ceramic precursor material to the laminated-object-manufacturing machine. Like other rapid-prototyping processes that utilize stacking of thin sheets, this process begins with generation of the computational model of the part to be fabricated, followed by computational sectioning of the part into layers of predetermined thickness that collectively define the shape of the part. Information about each layer is transmitted to rapid-prototyping equipment, where the part is built layer by layer. What distinguishes this process from other rapid-prototyping processes that utilize stacking of thin sheets are the details of the machines and the actions that they perform. In this process, flexible sheets of ceramic precursor material (called "green" ceramic sheets) suitable for lamination are produced by tape casting. The binder used in the tape casting is specially formulated to enable lamination of layers with little or no applied heat or pressure. The tape is cut into individual sheets, which are stacked in the sheet-feeding machine until used. The sheet-feeding machine can hold enough sheets for about 8 hours of continuous operation.

  2. Modelling indirect interactions during failure spreading in a project activity network.

    PubMed

    Ellinas, Christos

    2018-03-12

    Spreading broadly refers to the notion of an entity propagating throughout a networked system via its interacting components. Evidence of its ubiquity and severity can be seen in a range of phenomena, from disease epidemics to financial systemic risk. In order to understand the dynamics of these critical phenomena, computational models map the probability of propagation as a function of direct exposure, typically in the form of pairwise interactions between components. By doing so, the important role of indirect interactions remains unexplored. In response, we develop a simple model that accounts for the effect of both direct and subsequent exposure, which we deploy in the novel context of failure propagation within a real-world engineering project. We show that subsequent exposure has a significant effect in key aspects, including the: (a) final spreading event size, (b) propagation rate, and (c) spreading event structure. In addition, we demonstrate the existence of 'hidden influentials' in large-scale spreading events, and evaluate the role of direct and subsequent exposure in their emergence. Given the evidence of the importance of subsequent exposure, our findings offer new insight on particular aspects that need to be included when modelling network dynamics in general, and spreading processes specifically.
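
    As a toy illustration of the distinction drawn above, the sketch below simulates failure spreading on a small, hypothetical activity network in which a task can fail through direct exposure to the initially failed task or through weaker, subsequent exposure further down the dependency chain. The network, the probabilities, and the way "subsequent exposure" is parameterized are illustrative assumptions, not the authors' model.

```python
# A minimal sketch (not the authors' exact model) of failure spreading on a
# directed activity network with separate direct and subsequent exposure.
import random

# toy activity network: task -> list of tasks that depend on it
successors = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def spread(seed, rng, p_direct=0.5, p_subsequent=0.2):
    """Return the set of failed tasks starting from a single seed failure."""
    failed = {seed}
    frontier = [seed]               # newly failed tasks whose impact propagates next
    generation = {seed: 0}          # hops from the seed at which the failure arrived
    while frontier:
        nxt = []
        for task in frontier:
            for dep in successors[task]:
                if dep in failed:
                    continue
                # direct exposure for first-hop dependants of the seed,
                # weaker "subsequent" exposure further down the chain
                p = p_direct if generation[task] == 0 else p_subsequent
                if rng.random() < p:
                    failed.add(dep)
                    generation[dep] = generation[task] + 1
                    nxt.append(dep)
        frontier = nxt
    return failed

sizes = [len(spread("A", random.Random(s))) for s in range(1000)]
print("mean spreading event size:", sum(sizes) / len(sizes))
```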

  3. Semiautomated skeletonization of the pulmonary arterial tree in micro-CT images

    NASA Astrophysics Data System (ADS)

    Hanger, Christopher C.; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.

    2001-05-01

    We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, in the unfiltered back-projection, the darkest pixel is assumed to be the center of the vessel. The computer replaces this point with the newly computer-calculated point. A second back-projection is calculated around the original point orthogonal to a vector connecting the newly-calculated first point and user-determined second point. The darkest pixel within the reconstruction is determined. The computer then replaces the second point with the XYZ coordinates of the darkest pixel within this second reconstruction. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full-width half-max algorithm. On all subsequent vessels, the process works the same way except that at each point, distances between the current point and all previously determined points along different vessels are determined. If the difference is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
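
    The centre-tracking idea described above can be sketched in two dimensions: from the current point, step along a direction given by a moving average of previously found centre points, then re-centre on the darkest pixel along a short search segment perpendicular to that direction. The real method operates on sub-volume unfiltered back-projections of 3D micro-CT data; the synthetic image, step size, and window width below are hypothetical.

```python
import numpy as np

# synthetic image: bright background and a gently curving "vessel" whose
# intensity is lowest along the centre line (mimicking the darkest pixel of
# an unfiltered back-projection at the thickest part of the vessel)
img = np.full((200, 200), 255.0)
for x in range(20, 180):
    yc = int(round(100 + 8 * np.sin((x - 20) / 50.0)))
    for dy in range(-3, 4):
        img[yc + dy, x] = 40.0 * abs(dy)

def track(img, start, step=3.0, half_width=6, n_steps=50, avg_len=5):
    """Follow the dark centre line starting from an interactively chosen point."""
    points = [np.asarray(start, dtype=float)]
    direction = np.array([0.0, 1.0])                 # (dy, dx): initially head in +x
    for _ in range(n_steps):
        if len(points) > 1:                          # moving average of recent points
            recent = points[-avg_len:]
            direction = recent[-1] - recent[0]
            direction /= np.linalg.norm(direction)
        guess = points[-1] + step * direction
        normal = np.array([-direction[1], direction[0]])   # perpendicular search line
        offsets = np.arange(-half_width, half_width + 1)
        cand = np.round(guess[None, :] + offsets[:, None] * normal[None, :]).astype(int)
        inside = ((cand[:, 0] >= 0) & (cand[:, 0] < img.shape[0]) &
                  (cand[:, 1] >= 0) & (cand[:, 1] < img.shape[1]))
        cand = cand[inside]
        if len(cand) == 0:
            break
        values = img[cand[:, 0], cand[:, 1]]
        points.append(cand[np.argmin(values)].astype(float))  # darkest pixel = centre
    return np.array(points)

centreline = track(img, start=(100, 22))
print("tracked", len(centreline), "centre points; last:", centreline[-1])
```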

  4. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  5. Two dimensional kinetic analysis of electrostatic harmonic plasma waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca-Pongutá, E. C.; Ziebell, L. F.; Gaelzer, R.

    2016-06-15

    Electrostatic harmonic Langmuir waves are virtual modes excited in weakly turbulent plasmas, first observed in early laboratory beam-plasma experiments as well as in rocket-borne active experiments in space. However, their unequivocal presence was confirmed through computer simulated experiments and subsequently theoretically explained. The peculiarity of harmonic Langmuir waves is that while their existence requires a nonlinear response, their excitation mechanism and subsequent early time evolution are governed by an essentially linear process. One of the unresolved theoretical issues regards the role of the nonlinear wave-particle interaction process over a longer evolution time period. Another outstanding issue is that existing theories for these modes are limited to one-dimensional space. The present paper carries out a two-dimensional theoretical analysis of fundamental and (first) harmonic Langmuir waves for the first time. The result shows that the harmonic Langmuir wave is essentially governed by a (quasi)linear process and that nonlinear wave-particle interaction plays no significant role in the time evolution of the wave spectrum. The numerical solutions of the two-dimensional wave spectra for fundamental and harmonic Langmuir waves are also found to be consistent with those obtained by the direct particle-in-cell simulation method reported in the literature.

  6. Process yield improvements with process control terminal for varian serial ion implanters

    NASA Astrophysics Data System (ADS)

    Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken

    Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, i.e. wrong dose or species, double implants and missed implants. Process Control Terminals (PCT) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing steps. This paper describes those misprocessing steps and their subsequent reduction with use of PCTs. Reliable and simple process control with serial process ion implanters has been in increasing demand. A well designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced and those that occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact from the capability of a process control terminal is increased productivity, ergo higher device yield.

  7. Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide

    NASA Astrophysics Data System (ADS)

    Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.

    Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

  8. Thermal Convection on an Irradiated Target

    NASA Astrophysics Data System (ADS)

    Mehmedagic, Igbal; Thangam, Siva

    2016-11-01

    The present work involves the computational modeling of metallic targets subject to steady and high intensity heat flux. The ablation and associated fluid dynamics when metallic surfaces are exposed to high intensity laser fluence at normal atmospheric conditions are modeled. The incident energy from the laser is partly absorbed and partly reflected by the surface during ablation and subsequent vaporization of the melt. Computational findings based on effective representation and prediction of the heat transfer, melting and vaporization of the target material as well as plume formation and expansion are presented and discussed in the context of various ablation mechanisms, variable thermo-physical and optical properties, plume expansion and surface geometry. The energy distribution during the process between the bulk and vapor phase strongly depends on optical and thermodynamic properties of the irradiated material, radiation wavelength, and laser intensity. The relevance of the findings to various manufacturing processes as well as for the development of protective shields is discussed. Funded in part by U. S. Army ARDEC, Picatinny Arsenal, NJ.

  9. Stability of phase transformation models for Ti-6Al-4V under cyclic thermal loading imposed during laser metal deposition

    NASA Astrophysics Data System (ADS)

    Klusemann, Benjamin; Bambach, Markus

    2018-05-01

    Processing conditions play a crucial role for the resulting microstructure and properties of the material. In particular, processing materials under non-equilibrium conditions can lead to a remarkable improvement of the final properties [1]. Additive manufacturing represents a specific process example considered in this study. Models for the prediction of residual stresses and microstructure in additive manufacturing processes, such as laser metal deposition, are being developed with huge efforts to support the development of materials and processes as well as to support process design [2-4]. Since the microstructure predicted after each heating and cooling cycle induced by the moving laser source enters the phase transformation kinetics and microstructure evolution of the subsequent heating and cooling cycle, a feed-back loop for the microstructure calculation is created. This calculation loop may become unstable so that the computed microstructure and related properties become very sensitive to small variations in the input parameters, e.g. thermal conductivity. In this paper, a model for phase transformation in Ti-6Al-4V, originally proposed by Charles Murgau et al. [5], is adopted and minimal adjustments concerning the decomposition of the martensite phase are made. This model is subsequently used to study the changes in the predictions of the different phase volume fractions during heating and cooling under the conditions of laser metal deposition with respect to slight variations in the thermal process history.

  10. PAI-OFF: A new proposal for online flood forecasting in flash flood prone catchments

    NASA Astrophysics Data System (ADS)

    Schmitz, G. H.; Cullmann, J.

    2008-10-01

    The Process Modelling and Artificial Intelligence for Online Flood Forecasting (PAI-OFF) methodology combines the reliability of physically based, hydrologic/hydraulic modelling with the operational advantages of artificial intelligence. These operational advantages are extremely low computation times and straightforward operation. The basic principle of the methodology is to portray process models by means of ANN. We propose to train ANN flood forecasting models with synthetic data that reflects the possible range of storm events. To this end, establishing PAI-OFF requires first setting up a physically based hydrologic model of the considered catchment and - optionally, if backwater effects have a significant impact on the flow regime - a hydrodynamic flood routing model of the river reach in question. Both models are subsequently used for simulating all meaningful and flood relevant storm scenarios which are obtained from a catchment specific meteorological data analysis. This provides a database of corresponding input/output vectors which is then completed by generally available hydrological and meteorological data for characterizing the catchment state prior to each storm event. This database subsequently serves for training both a polynomial neural network (PoNN) - portraying the rainfall-runoff process - and a multilayer neural network (MLFN), which mirrors the hydrodynamic flood wave propagation in the river. These two ANN models replace the hydrological and hydrodynamic model in the operational mode. After presenting the theory, we apply PAI-OFF - essentially consisting of the coupled "hydrologic" PoNN and "hydrodynamic" MLFN - to the Freiberger Mulde catchment in the Erzgebirge (Ore-mountains) in East Germany (3000 km²). Both the demonstrated computational efficiency and the prediction reliability underline the potential of the new PAI-OFF methodology for online flood forecasting.
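
    The core idea, running a process model over many synthetic storm scenarios and then training an ANN to emulate it, can be sketched as follows. The "process model" here is a deliberately simple stand-in function, and all parameter ranges and network settings are illustrative assumptions rather than the PoNN/MLFN configuration used in PAI-OFF.

```python
# A minimal sketch of the PAI-OFF idea: a stand-in, purely illustrative
# rainfall-runoff model generates an input/output database over synthetic
# storm scenarios, and a small neural network is trained to emulate it at
# negligible computational cost.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def toy_process_model(rain_depth, rain_duration, soil_moisture):
    """Stand-in for the physically based model: event runoff depth [mm]."""
    intensity = rain_depth / rain_duration
    runoff_coeff = 0.2 + 0.6 * soil_moisture          # wetter catchment -> more runoff
    return runoff_coeff * rain_depth * (1.0 + 0.3 * np.tanh(intensity / 10.0))

# 1) synthetic scenario database covering the meaningful range of storm events
n = 5000
X = np.column_stack([
    rng.uniform(5, 150, n),     # storm depth [mm]
    rng.uniform(1, 48, n),      # storm duration [h]
    rng.uniform(0, 1, n),       # antecedent soil moisture [-]
])
y = toy_process_model(X[:, 0], X[:, 1], X[:, 2])

# 2) train the ANN emulator on the simulated input/output vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                 random_state=0))
ann.fit(X_train, y_train)

# 3) operational mode: the emulator replaces the process model
print("emulator R^2 on held-out scenarios:", ann.score(X_test, y_test))
```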

  11. Processing the image gradient field using a topographic primal sketch approach.

    PubMed

    Gambaruto, A M

    2015-03-01

    The spatial derivatives of the image intensity provide topographic information that may be used to identify and segment objects. The accurate computation of the derivatives is often hampered in medical images by the presence of noise and a limited resolution. This paper focuses on accurate computation of spatial derivatives and their subsequent use to process an image gradient field directly, from which an image with improved characteristics can be reconstructed. The improvements include noise reduction, contrast enhancement, thinning object contours and the preservation of edges. Processing the gradient field directly instead of the image is shown to have numerous benefits. The approach is developed such that the steps are modular, allowing the overall method to be improved and possibly tailored to different applications. As presented, the approach relies on a topographic representation and primal sketch of an image. Comparisons with existing image processing methods on a synthetic image and different medical images show improved results and accuracy in segmentation. Here, the focus is on objects with low spatial resolution, which is often the case in medical images. The methods developed show the importance of improved accuracy in derivative calculation and the potential in processing the image gradient field directly. Copyright © 2015 John Wiley & Sons, Ltd.
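
    A generic gradient-domain pipeline of the kind alluded to above can be sketched as follows: compute finite-difference derivatives, attenuate weak gradients (mostly noise) while keeping strong edges, and reconstruct an image from the modified field by solving a Poisson equation with a simple iterative scheme. This is a common gradient-domain approach under stated assumptions, not the paper's topographic-primal-sketch method.

```python
import numpy as np

def gradient(img):
    gy, gx = np.gradient(img.astype(float))
    return gy, gx

def shrink_weak_gradients(gy, gx, threshold):
    """Noise reduction: suppress gradients below threshold, preserve edges."""
    mag = np.hypot(gy, gx)
    scale = np.where(mag < threshold, 0.2, 1.0)
    return gy * scale, gx * scale

def reconstruct(gy, gx, n_iter=2000):
    """Recover an image whose gradient matches (gy, gx): solve lap(I) = div(G)."""
    div = np.zeros_like(gy)
    div[1:-1, :] += gy[1:-1, :] - gy[:-2, :]    # d(gy)/dy, backward difference
    div[:, 1:-1] += gx[:, 1:-1] - gx[:, :-2]    # d(gx)/dx, backward difference
    out = np.zeros_like(div)
    for _ in range(n_iter):                     # Jacobi iterations on the interior
        out[1:-1, 1:-1] = 0.25 * (out[:-2, 1:-1] + out[2:, 1:-1] +
                                  out[1:-1, :-2] + out[1:-1, 2:] - div[1:-1, 1:-1])
    return out

# Usage on a noisy synthetic image with one sharp edge.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = 100.0
noisy = img + rng.normal(0, 5, img.shape)
gy, gx = gradient(noisy)
gy, gx = shrink_weak_gradients(gy, gx, threshold=10.0)
restored = reconstruct(gy, gx)
print("flat-region std before/after:", noisy[:, :30].std(), restored[:, :30].std())
```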

  12. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-) automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation with an average of 11% of differing voxels and an average surface deviation of 2mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing a fully-automated usage of our segmentation approach.

  13. Evaluating two process scale chromatography column header designs using CFD.

    PubMed

    Johnson, Chris; Natarajan, Venkatesh; Antoniou, Chris

    2014-01-01

    Chromatography is an indispensable unit operation in the downstream processing of biomolecules. Scaling of chromatographic operations typically involves a significant increase in the column diameter. At this scale, the flow distribution within a packed bed could be severely affected by the distributor design in process scale columns. Different vendors offer process scale columns with varying design features. The effect of these design features on the flow distribution in packed beds and the resultant effect on column efficiency and cleanability needs to be properly understood in order to prevent unpleasant surprises on scale-up. Computational Fluid Dynamics (CFD) provides a cost-effective means to explore the effect of various distributor designs on process scale performance. In this work, we present a CFD tool that was developed and validated against experimental dye traces and tracer injections. Subsequently, the tool was employed to compare and contrast two commercially available header designs. © 2014 American Institute of Chemical Engineers.

  14. Cancel and rethink in the Wason selection task: further evidence for the heuristic-analytic dual process theory.

    PubMed

    Wada, Kazushige; Nittono, Hiroshi

    2004-06-01

    The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. 24 students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support for the heuristic-analytic dual process theory.

  15. Optimizing the availability of a buffered industrial process

    DOEpatents

    Martz, Jr., Harry F.; Hamada, Michael S.; Koehler, Arthur J.; Berg, Eric C.

    2004-08-24

    A computer-implemented process determines optimum configuration parameters for a buffered industrial process. A population size is initialized by randomly selecting a first set of design and operation values associated with subsystems and buffers of the buffered industrial process to form a set of operating parameters for each member of the population. An availability discrete event simulation (ADES) is performed on each member of the population to determine the product-based availability of each member. A new population is formed having members with a second set of design and operation values related to the first set of design and operation values through a genetic algorithm and the product-based availability determined by the ADES. Subsequent population members are then determined by iterating the genetic algorithm with product-based availability determined by ADES to form improved design and operation values from which the configuration parameters are selected for the buffered industrial process.
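
    The outer optimization loop described in this patent abstract, evaluating a population with an availability simulation and evolving it with a genetic algorithm, can be sketched as follows. The toy fitness function below stands in for ADES, and the buffer ranges, weights, and genetic-algorithm settings are illustrative assumptions.

```python
# A minimal sketch of the loop described above: candidate design/operation
# values (here, buffer sizes) are scored by a stand-in availability
# simulation and evolved with a simple genetic algorithm.
import random

N_BUFFERS = 4
BUFFER_RANGE = (0, 20)          # allowed buffer capacity per subsystem
COST_WEIGHT = 0.004

def simulate_availability(buffers, rng):
    """Stand-in for ADES: larger buffers give diminishing availability gains."""
    base = 0.90
    gain = sum(0.02 * (1.0 - 1.0 / (1 + b)) for b in buffers)
    noise = rng.gauss(0.0, 0.002)            # simulation (Monte Carlo) noise
    return min(base + gain + noise, 0.999)

def fitness(member, rng):
    # reward availability, penalise buffer capital cost
    return simulate_availability(member, rng) - COST_WEIGHT * sum(member)

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(*BUFFER_RANGE) for _ in range(N_BUFFERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda m: fitness(m, rng), reverse=True)
        parents = scored[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_BUFFERS)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                       # mutation
                i = rng.randrange(N_BUFFERS)
                child[i] = rng.randint(*BUFFER_RANGE)
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda m: fitness(m, rng))
    return best, simulate_availability(best, rng)

best, availability = evolve()
print("best buffer configuration:", best, "availability ~", round(availability, 4))
```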

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galarraga, Haize; Warren, Robert J.; Lados, Diana A.

    Electron beam melting (EBM) is a metal powder bed fusion additive manufacturing (AM) technology that is used to fabricate three-dimensional near-net-shaped parts directly from computer models. Ti-6Al-4V is the most widely used and studied alloy for this technology and is the focus of this work in its ELI (Extra Low Interstitial) variation. The mechanisms of microstructure formation, evolution, and its subsequent influence on mechanical properties of the alloy in as-fabricated condition have been documented by various researchers. In the present work, the thermal history resulting in the formation of the as-fabricated microstructure was analyzed and studied by a thermal simulation. Subsequently, different heat treatments were performed based on three approaches in order to study the effects of heat treatments on the singular and exclusive microstructure formed during the EBM fabrication process. In the first approach, the effect of cooling rate after the solutionizing process was studied. In the second approach, the variation of α lath thickness during annealing treatment and correlation with mechanical properties was established. In the last approach, several solutionizing and aging experiments were conducted.

  17. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independence of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model-independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluations are required, which enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable, published sensitivity results.
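
    For context, the sketch below implements one of the screening methods named above, Morris elementary effects, on a toy function, together with the conventional bootstrap check of the resulting sensitivity indexes that the proposed MVA approach is designed to avoid. The test function, sample sizes, and bootstrap count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy model with one dominant, one moderate and one inactive parameter."""
    return 4.0 * x[0] + 1.0 * x[1] ** 2 + 0.0 * x[2]

def elementary_effects(model, n_params=3, n_trajectories=50, delta=0.1):
    """One-at-a-time perturbations: EE_i = (f(x + delta*e_i) - f(x)) / delta."""
    ee = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        base = rng.uniform(0, 1 - delta, n_params)
        f0 = model(base)
        for i in range(n_params):
            pert = base.copy()
            pert[i] += delta
            ee[t, i] = (model(pert) - f0) / delta
    return ee

ee = elementary_effects(model)
mu_star = np.abs(ee).mean(axis=0)          # Morris mu*: mean absolute effect
print("mu* per parameter:", mu_star.round(3))

# conventional convergence check: bootstrap the trajectories and report the
# spread of the recomputed indexes
boot = np.array([np.abs(ee[rng.integers(0, len(ee), len(ee))]).mean(axis=0)
                 for _ in range(500)])
print("bootstrap std of mu*:", boot.std(axis=0).round(3))
```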

  18. Ten-Ecosystem Study. [Grand and Weld Counties, Colorado; Warren County, Pennsylvania; St. Louis County, Minnesota; Sandoval County, New Mexico; Kershaw County, South Carolina; Fort Yukon, Alaska; Grays Harbor County, Washington; and Washington County, Missouri.

    NASA Technical Reports Server (NTRS)

    Mazade, A. V. (Principal Investigator)

    1981-01-01

    Remote sensing methodology developed for the Nationwide Forestry Applications Program utilizes computer data processing procedures for performing inventories from satellite imagery. The Ten-Ecosystem Study (TES) was developed to test the processing procedures in an intermediate-sized application study. The results of TES indicate that LANDSAT multispectral imagery and associated automatic data processing techniques can be used to distinguish softwood, hardwood, grassland, and water, and to inventory these classes with an accuracy of 70 percent or better. The technical problems encountered during the TES and the solutions and insights to these problems are discussed. The TES experience is useful in planning subsequent inventories utilizing remote sensing technology.

  19. The effect on cadaver blood DNA identification by the use of targeted and whole body post-mortem computed tomography angiography.

    PubMed

    Rutty, Guy N; Barber, Jade; Amoroso, Jasmin; Morgan, Bruno; Graham, Eleanor A M

    2013-12-01

    Post-mortem computed tomography angiography (PMCTA) involves the injection of contrast agents. This could have both a dilution effect on biological fluid samples and could affect subsequent post-contrast analytical laboratory processes. We undertook a small sample study of 10 targeted and 10 whole body PMCTA cases to consider whether or not these two methods of PMCTA could affect post-PMCTA cadaver blood based DNA identification. We used standard methodology to examine DNA from blood samples obtained before and after the PMCTA procedure. We illustrate that neither of these PMCTA methods had an effect on the alleles called following short tandem repeat based DNA profiling, and therefore the ability to undertake post-PMCTA blood based DNA identification.

  20. Solving multiconstraint assignment problems using learning automata.

    PubMed

    Horn, Geir; Oommen, B John

    2010-02-01

    This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
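
    As a minimal illustration of the building block used in this work, the sketch below implements a single variable-structure learning automaton of the linear reward-inaction type that learns which of three hosting nodes yields the best reward. The reward probabilities and learning rate are assumptions, and the paper's multiconstraint, multi-automata algorithms are considerably richer than this single-automaton example.

```python
# A single variable-structure learning automaton of the linear
# reward-inaction (L_RI) type choosing among three "hosting nodes".
import random

rng = random.Random(0)
reward_prob = [0.2, 0.8, 0.5]          # unknown to the automaton: node 1 is best
n_actions = len(reward_prob)
p = [1.0 / n_actions] * n_actions      # action probability vector
LAMBDA = 0.05                          # reward learning rate

for step in range(5000):
    action = rng.choices(range(n_actions), weights=p)[0]
    rewarded = rng.random() < reward_prob[action]
    if rewarded:                        # L_RI: update probabilities only on reward
        for a in range(n_actions):
            if a == action:
                p[a] += LAMBDA * (1.0 - p[a])
            else:
                p[a] *= (1.0 - LAMBDA)

print("final action probabilities:", [round(x, 3) for x in p])
```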

  1. Creep Measurement Video Extensometer

    NASA Technical Reports Server (NTRS)

    Jaster, Mark; Vickerman, Mary; Padula, Santo, II; Juhas, John

    2011-01-01

    Understanding material behavior under load is critical to the efficient and accurate design of advanced aircraft and spacecraft. Technologies such as the one disclosed here allow accurate creep measurements to be taken automatically, reducing error. The goal was to develop a non-contact, automated system capable of capturing images that could subsequently be processed to obtain the strain characteristics of these materials during deformation, while maintaining adequate resolution to capture the true deformation response of the material. The measurement system comprises a high-resolution digital camera, computer, and software that work collectively to interpret the image.

  2. Dynamic, diagnostic, and pharmacological radionuclide studies of the esophagus in achalasia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozen, P.; Gelfond, M.; Zaltzman, S.

    1982-08-01

    The esophagus was evaluated in 15 patients with achalasia by continuous gamma camera imaging following ingestion of a semi-solid meal labeled with 99mTc. The images were displayed and recorded on a simple computerized data processing/display system. Subsequent cine mode images of esophageal emptying demonstrated abnormalities of the body of the esophagus not reflected by the manometric examination. Computer-generated time-activity curves representing specific regions of interest were better than manometry in evaluating the results of myotomy, dilatation, and drug therapy. Isosorbide dinitrate significantly improved esophageal emptying.

  3. The incept of ejection from a fresh Taylor cone and subsequent evolution

    NASA Astrophysics Data System (ADS)

    Lopez-Herrera, Jose M.; Ganan-Calvo, Alfonso

    2017-11-01

    Within a certain range of applied voltages, a pendant drop suddenly subjected to an intense electric field develops a cusp from which a fast liquid ligament issues. The inception of this process has common roots with other related phenomena such as Worthington jets, the jet issued after surface bubble bursting, or the impact of a drop on a liquid pool. This is demonstrated experimentally and numerically. However, given the electrohydrodynamic nature of the driver in the formation of a Taylor cone, a number of electrokinetic processes take place in the rapid tapering flow, whose characteristic times should be carefully compared to those of the flow. As a result, universal scaling laws for the size and charge of the top drop have been obtained. Subsequently, sustaining the applied electric field, the ejection continues and the issuing liquid ligament releases a train of droplets of varying size and charge. Under appropriate conditions and if the liquid suctioned by the electric field is replenished, the system asymptotically reaches a (quasi)steady state. The degree of compliance of the size and charge of those subsequent droplets with previously proposed scaling laws for steady Taylor cone-jets has been studied. The computational code Gerris, with an extended electrokinetic module, is used. This work was supported by the Ministerio de Economia y Competitividad, Plan Estatal 2013-2016 Retos, project DPI2016-78887-C3-1-R.

  4. Brain-computer interfaces in neurological rehabilitation.

    PubMed

    Daly, Janis J; Wolpaw, Jonathan R

    2008-11-01

    Recent advances in analysis of brain signals, training patients to control these signals, and improved computing capabilities have enabled people with severe motor disabilities to use their brain signals for communication and control of objects in their environment, thereby bypassing their impaired neuromuscular system. Non-invasive, electroencephalogram (EEG)-based brain-computer interface (BCI) technologies can be used to control a computer cursor or a limb orthosis, for word processing and accessing the internet, and for other functions such as environmental control or entertainment. By re-establishing some independence, BCI technologies can substantially improve the lives of people with devastating neurological disorders such as advanced amyotrophic lateral sclerosis. BCI technology might also restore more effective motor control to people after stroke or other traumatic brain disorders by helping to guide activity-dependent brain plasticity by use of EEG brain signals to indicate to the patient the current state of brain activity and to enable the user to subsequently lower abnormal activity. Alternatively, by use of brain signals to supplement impaired muscle control, BCIs might increase the efficacy of a rehabilitation protocol and thus improve muscle control for the patient.

  5. Alpha absolute power measurement in panic disorder with agoraphobia patients.

    PubMed

    de Carvalho, Marcele Regine; Velasques, Bruna Brandão; Freire, Rafael C; Cagy, Maurício; Marques, Juliana Bittencourt; Teixeira, Silmar; Rangé, Bernard P; Piedade, Roberto; Ribeiro, Pedro; Nardi, Antonio Egidio; Akiskal, Hagop Souren

    2013-10-01

    Panic attacks are thought to result from a dysfunctional coordination of cortical and brainstem sensory information leading to heightened amygdala activity with subsequent neuroendocrine, autonomic and behavioral activation. Prefrontal areas may be responsible for inhibitory top-down control processes, and alpha synchronization seems to reflect this modulation. The objective of this study was to measure frontal absolute alpha-power with qEEG in 24 subjects with panic disorder and agoraphobia (PDA) compared to 21 healthy controls. qEEG data were acquired while participants watched a computer simulation, consisting of moments classified as "high anxiety" (HAM) and "low anxiety" (LAM). qEEG data were also acquired during two rest conditions, before and after the computer simulation display. We observed a higher absolute alpha-power in controls when compared to the PDA patients while watching the computer simulation. The main finding was an interaction between the moment and group factors on the frontal cortex. Our findings suggest that the decreased alpha-power in the frontal cortex for the PDA group may reflect a state of high excitability. Our results suggest a possible deficiency in top-down control processes of anxiety reflected by a low absolute alpha-power in the PDA group while watching the computer simulation, and they highlight that prefrontal regions and the frontal region near the temporal area are recruited during exposure to anxiogenic stimuli. © 2013 Elsevier B.V. All rights reserved.

  6. De novo self-assembling collagen heterotrimers using explicit positive and negative design.

    PubMed

    Xu, Fei; Zhang, Lei; Koder, Ronald L; Nanda, Vikas

    2010-03-23

    We sought to computationally design model collagen peptides that specifically associate as heterotrimers. Computational design has been successfully applied to the creation of new protein folds and functions. Despite the high abundance of collagen and its key role in numerous biological processes, fibrous proteins have received little attention as computational design targets. Collagens are composed of three polypeptide chains that wind into triple helices. We developed a discrete computational model to design heterotrimer-forming collagen-like peptides. Stability and specificity of oligomerization were concurrently targeted using a combined positive and negative design approach. The sequences of three 30-residue peptides, A, B, and C, were optimized to favor charge-pair interactions in an ABC heterotrimer, while disfavoring the 26 competing oligomers (e.g., AAA, ABB, BCA). Peptides were synthesized and characterized for thermal stability and triple-helical structure by circular dichroism and NMR. A unique A:B:C-type species was not achieved. Negative design was partially successful, with only A + B and B + C competing mixtures formed. Analysis of computed versus experimental stabilities helps to clarify the role of electrostatics and secondary-structure propensities in determining collagen stability and to provide important insight into how subsequent designs can be improved.

  7. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. By and large, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D, as well as in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  8. Speech perception at the interface of neurobiology and linguistics.

    PubMed

    Poeppel, David; Idsardi, William J; van Wassenhove, Virginie

    2008-03-12

    Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by speech perception enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

  9. Selection of Levels of Dressing Process Parameters by Using TOPSIS Technique for Surface Roughness of En-31 Work piece in CNC Cylindrical Grinding Machine

    NASA Astrophysics Data System (ADS)

    Patil, Sanjay S.; Bhalerao, Yogesh J.

    2017-02-01

    Grinding is a metal cutting process used mainly for finishing automobile components. The grinding wheel becomes dull with repeated use, so it must be reshaped for consistent performance. Removing the dull grains of the grinding wheel is known as the dressing process. The surface finish produced on the work piece in the subsequent grinding operation depends on the dressing parameters. A multi-point diamond dresser has four important parameters: the dressing cross feed rate, dressing depth of cut, width of the diamond dresser, and drag angle of the dresser. The cross feed rate levels range from 80-100 mm/min, the depth of cut varies from 10-30 micron, the width of the diamond dresser from 0.8-1.10 mm, and the drag angle from 40°-50°. The relative closeness to ideal levels of the dressing parameters is found for the surface finish produced on the En-31 work piece during the subsequent grinding operation by using the Technique of Order Preference by Similarity to Ideal Solution (TOPSIS). In the present work, the closeness to the ideal solution, i.e. the levels of the dressing parameters, is found for a Computer Numerical Control (CNC) cylindrical angular grinding machine. Using the TOPSIS technique, it is found that Level I has a value of 0.9738, which gives a better surface finish on the En-31 work piece in the subsequent grinding operation and helps the user to select the correct levels (combinations) of dressing parameters.
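
    The TOPSIS ranking itself is straightforward to sketch: the candidate levels form a decision matrix, which is vector-normalised, weighted, and scored by relative closeness to the ideal and anti-ideal solutions. The decision matrix, weights, and benefit/cost labels below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

# rows: candidate levels; columns: cross feed rate [mm/min], depth of cut [um],
# dresser width [mm], drag angle [deg] -- purely illustrative values
X = np.array([
    [80.0, 10.0, 0.80, 40.0],
    [90.0, 20.0, 0.95, 45.0],
    [100.0, 30.0, 1.10, 50.0],
])
weights = np.array([0.25, 0.25, 0.25, 0.25])
benefit = np.array([False, False, True, True])   # which criteria are "larger is better"

norm = X / np.linalg.norm(X, axis=0)             # vector normalisation
V = norm * weights                               # weighted normalised matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)       # distance to ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1) # distance to anti-ideal solution
closeness = d_minus / (d_plus + d_minus)         # relative closeness in [0, 1]

for level, c in zip(["Level I", "Level II", "Level III"], closeness):
    print(level, round(c, 4))
```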

  10. Applying a new computer-aided detection scheme generated imaging marker to predict short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin

    2018-05-01

    This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four-view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the next subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied ‘as is’ to process each image. From CAD-generated results, four detection features including the total number of (1) initial detection seeds and (2) the final detected false-positive regions, (3) average and (4) sum of detection scores, were computed from each image. Then, by combining the features computed from two bilateral images of left and right breasts from either craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the next subsequent screening. The new prediction model yielded the maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, this study demonstrated that CAD-generated false-positives might include valuable information, which needs to be further explored for identifying and/or developing more effective imaging markers for predicting short-term breast cancer risk.
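
    The prediction step described above, combining bilateral CAD-derived features and evaluating a logistic regression model with leave-one-case-out cross-validation, can be sketched as follows on synthetic stand-in data. The feature construction and label model are assumptions for illustration only, not the study's mammography dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases = 200

# per-breast features: #seeds, #false-positive regions, mean score, sum of scores
left = rng.normal(size=(n_cases, 4))
right = rng.normal(size=(n_cases, 4))
X = np.hstack([left, right])                    # bilateral feature vector per case
# synthetic labels weakly correlated with the false-positive burden
risk = 0.6 * (left[:, 1] + right[:, 1]) + rng.normal(scale=2.0, size=n_cases)
y = (risk > np.median(risk)).astype(int)        # 1 = positive at next screening

model = LogisticRegression(max_iter=1000)
scores = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("leave-one-case-out AUC:", round(roc_auc_score(y, scores), 3))
```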

  11. PROcess Based Diagnostics PROBE

    NASA Technical Reports Server (NTRS)

    Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.

    2013-01-01

    Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes -- implying that if a mismatch is found, it should be much easier to identify and address specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the more traditional production of monthly or annual mean quantities. The data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses) thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we will discuss the design and current status of PROBE as well as share results from some preliminary use cases.

  12. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
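
    A much-simplified sketch of word-aligned compression in the spirit of WAH is shown below: the bitmap is cut into 31-bit groups (assuming 32-bit words), runs of identical all-zero or all-one groups collapse into single fill words, and mixed groups are stored as literal words. This illustrates the idea only; it is not the patented method or a usable bitmap index.

```python
GROUP = 31  # payload bits per 32-bit word

def compress(bits):
    """bits: string of '0'/'1'. Returns a list of ('fill', bit, count) or ('literal', group) words."""
    words = []
    for i in range(0, len(bits), GROUP):
        group = bits[i:i + GROUP].ljust(GROUP, "0")
        if group == "0" * GROUP or group == "1" * GROUP:
            fill_bit = group[0]
            if words and words[-1][0] == "fill" and words[-1][1] == fill_bit:
                words[-1] = ("fill", fill_bit, words[-1][2] + 1)   # extend the run
            else:
                words.append(("fill", fill_bit, 1))
            continue
        words.append(("literal", group))
    return words

def decompress(words, length):
    out = []
    for w in words:
        if w[0] == "fill":
            out.append(w[1] * GROUP * w[2])
        else:
            out.append(w[1])
    return "".join(out)[:length]

bitmap = "0" * 1000 + "1011001" + "1" * 500
compressed = compress(bitmap)
assert decompress(compressed, len(bitmap)) == bitmap
print(len(bitmap), "bits ->", len(compressed), "words")
```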

  13. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering

    PubMed Central

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate computing analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads. PMID:27482905

  14. Neighbour lists for smoothed particle hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang

    2018-04-01

    The efficient iteration of neighbouring particles is a performance critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, denoted as Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
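
    The two neighbour-search strategies can be contrasted with a small CPU sketch: particles are binned into a uniform grid with cell edge equal to the smoothing length h, and neighbours are then either gathered on the fly from the surrounding 3x3 block of cells (cell linked list) or computed once and stored per particle (Verlet list) for reuse in later interaction loops. Particle counts, h, and the NumPy/CPU setting are illustrative; the paper's implementation targets GPUs.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
h = 0.1                                   # constant smoothing length
pos = rng.uniform(0, 1, size=(2000, 2))   # 2-D particle positions

# cell linked list: bin particle indices by integer cell coordinates
cells = defaultdict(list)
for i, (x, y) in enumerate(pos):
    cells[(int(x / h), int(y / h))].append(i)

def neighbours_from_cells(i):
    """Gather neighbours of particle i from the 3x3 block of surrounding cells."""
    cx, cy = int(pos[i, 0] / h), int(pos[i, 1] / h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), ()):
                if j != i and np.sum((pos[i] - pos[j]) ** 2) < h * h:
                    result.append(j)
    return result

# Verlet list: do the same search once and store the result per particle,
# trading memory for the ability to skip the cell walk in later loops
verlet = [neighbours_from_cells(i) for i in range(len(pos))]

i = 42
assert sorted(verlet[i]) == sorted(neighbours_from_cells(i))
print("particle", i, "has", len(verlet[i]), "neighbours within h")
```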

  15. A linear decomposition method for large optimization problems. Blueprint for development

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1982-01-01

    A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.

  16. Computer Networking with the Victorian Correspondence School.

    ERIC Educational Resources Information Center

    Conboy, Ian

    During 1985 the Education Department installed two-way radios in 44 remote secondary schools in Victoria, Australia, to improve turn-around time for correspondence assignments. Subsequently, teacher supervisors at Melbourne's Correspondence School sought ways to further augment audio interactivity with computer networking. Computer equipment was…

  17. In situ treatment of VOCs by recirculation technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siegrist, R.L.; Webb, O.F.; Ally, M.R.

    1993-06-01

    The project described herein was conducted by Oak Ridge National Laboratory (ORNL) to identify processes and technologies developed in Germany that appeared to have near-term potential for enhancing the cleanup of volatile organic compound (VOC) contaminated soil and groundwater at DOE sites. Members of the ORNL research team identified and evaluated selected German technologies developed at or in association with the University of Karlsruhe (UoK) for in situ treatment of VOC contaminated soils and groundwater. Project activities included contacts with researchers within three departments of the UoK (i.e., Applied Geology, Hydromechanics, and Soil and Foundation Engineering) during fall 1991 and subsequent visits to UoK and private industry collaborators during February 1992. Subsequent analyses consisted of engineering computations, groundwater flow modeling, and treatment process modeling. As a result of these project efforts, two processes were identified as having near-term potential for DOE: (1) the vacuum vaporizer well/groundwater recirculation well and (2) the porous pipe/horizontal well. This document was prepared to summarize the methods and results of the assessment activities completed during the initial year of the project. The project is still ongoing, so not all facets of the effort are completely described in this document. Recommendations for laboratory and field experiments are provided.

  18. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    PubMed Central

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. Conclusions The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems. PMID:20092652

  19. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    PubMed

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems.

  20. A Computer-Based System Integrating Instruction and Information Retrieval: A Description of Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Selig, Judith A.; And Others

    This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programed text on basic opthalmology--onto a computer for subsequent information retrieval and computer-assisted…

  1. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  2. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  3. Neural activation during imitation with or without performance feedback: An fMRI study.

    PubMed

    Zhang, Kaihua; Wang, Hui; Dong, Guangheng; Wang, Mengxing; Zhang, Jilei; Zhang, Hui; Meng, Weixia; Du, Xiaoxia

    2016-08-26

    In our daily lives, we often receive performance feedback (PF) during imitative learning, and we adjust our behaviors accordingly to improve performance. However, little is known regarding the neural mechanisms underlying this learning process. We hypothesized that appropriate PF would enhance neural activation or recruit additional brain areas during subsequent action imitation. Pictures of 20 different finger gestures without any social meaning were shown to participants from the first-person perspective. Imitation with or without PF was investigated by functional magnetic resonance imaging in 30 healthy subjects. The PF was given by a real person or by a computer. PF from a real person induced hyperactivation of the parietal lobe (precuneus and cuneus), cingulate cortex (posterior and anterior), temporal lobe (superior and transverse temporal gyri), and cerebellum (posterior and anterior lobes) during subsequent imitation. The positive PF and negative PF from a real person, induced the activation of more brain areas during the following imitation. The hyperactivation of the cerebellum, posterior cingulate cortex, precuneus, and cuneus suggests that the subjects exhibited enhanced motor control and visual attention during imitation after PF. Additionally, random PF from a computer had a small effect on the next imitation. We suggest that positive and accurate PF may be helpful for imitation learning. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Population-based imaging biobanks as source of big data.

    PubMed

    Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian

    2017-06-01

    Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.

  5. The evolution of structural and chemical heterogeneity during rapid solidification at gas atomization

    NASA Astrophysics Data System (ADS)

    Golod, V. M.; Sufiiarov, V. Sh

    2017-04-01

    Gas atomization is a high-performance process for manufacturing superfine metal powders. Formation of the powder particles takes place primarily through the fragmentation of alloy melt flow with high-pressure inert gas, which leads to the formation of non-uniform sized micron-scale particles and subsequent their rapid solidification due to heat exchange with gas environment. The article presents results of computer modeling of crystallization process, simulation and experimental studies of the cellular-dendrite structure formation and microsegregation in different size particles. It presents results of adaptation of the approach for local nonequilibrium solidification to conditions of crystallization at gas atomization, detected border values of the particle size at which it is possible a manifestation of diffusionless crystallization.

  6. Integral blow moulding for cycle time reduction of CFR-TP aluminium contour joint processing

    NASA Astrophysics Data System (ADS)

    Barfuss, Daniel; Würfel, Veit; Grützner, Raik; Gude, Maik; Müller, Roland

    2018-05-01

    Integral blow moulding (IBM) as a joining technology of carbon fibre reinforced thermoplastic (CFR-TP) hollow profiles with metallic load introduction elements enables significant cycle time reduction by shortening of the process chain. As the composite part is joined to the metallic part during its consolidation process subsequent joining steps are omitted. In combination with a multi-scale structured load introduction element its form closure function enables to pass very high loads and is capable to achieve high degrees of material utilization. This paper first shows the process set-up utilizing thermoplastic tape braided preforms and two-staged press and internal hydro formed load introduction elements. Second focuses on heating technologies and process optimization. Aiming at cycle time reduction convection and induction heating in regard to the resulting product quality is inspected by photo micrographs and computer tomographic scans. Concluding remarks give final recommendations for the process design in regard to the structural design.

  7. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906

  8. MOSAIC - A space-multiplexing technique for optical processing of large images

    NASA Technical Reports Server (NTRS)

    Athale, Ravindra A.; Astor, Michael E.; Yu, Jeffrey

    1993-01-01

    A technique for Fourier processing of images larger than the space-bandwidth products of conventional or smart spatial light modulators and two-dimensional detector arrays is described. The technique involves a spatial combination of subimages displayed on individual spatial light modulators to form a phase-coherent image, which is subsequently processed with Fourier optical techniques. Because of the technique's similarity with the mosaic technique used in art, the processor used is termed an optical MOSAIC processor. The phase accuracy requirements of this system were studied by computer simulation. It was found that phase errors of less than lambda/8 did not degrade the performance of the system and that the system was relatively insensitive to amplitude nonuniformities. Several schemes for implementing the subimage combination are described. Initial experimental results demonstrating the validity of the mosaic concept are also presented.

  9. Methodologies, Models and Algorithms for Patients Rehabilitation.

    PubMed

    Fardoun, H M; Mashat, A S

    2016-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The objective of this focus theme is to present current solutions by means of technologies and human factors related to the use of Information and Communication Technologies (ICT) for improving patient rehabilitation. The focus theme examines distinctive measurements of strengthening methodologies, models and algorithms for disabled people in terms of rehabilitation and health care, and to explore the extent to which ICT is a useful tool in this process. The focus theme records a set of solutions for ICT systems developed to improve the rehabilitation process of disabled people and to help them in carrying out their daily life. The development and subsequent setting up of computers for the patients' rehabilitation process is of continuous interest and growth.

  10. Adaptive memory: enhanced location memory after survival processing.

    PubMed

    Nairne, James S; Vanarsdall, Joshua E; Pandeirada, Josefa N S; Blunt, Janell R

    2012-03-01

    Two experiments investigated whether survival processing enhances memory for location. From an adaptive perspective, remembering that food has been located in a particular area, or that potential predators are likely to be found in a given territory, should increase the chances of subsequent survival. Participants were shown pictures of food or animals located at various positions on a computer screen. The task was to rate the ease of collecting the food or capturing the animals relative to a central fixation point. Surprise retention tests revealed that people remembered the locations of the items better when the collection or capturing task was described as relevant to survival. These data extend the generality of survival processing advantages to a new domain (location memory) by means of a task that does not involve rating the relevance of words to a scenario. 2012 APA, all rights reserved

  11. Active-learning strategies in computer-assisted drug discovery.

    PubMed

    Reker, Daniel; Schneider, Gisbert

    2015-04-01

    High-throughput compound screening is time and resource consuming, and considerable effort is invested into screening compound libraries, profiling, and selecting the most promising candidates for further testing. Active-learning methods assist the selection process by focusing on areas of chemical space that have the greatest chance of success while considering structural novelty. The core feature of these algorithms is their ability to adapt the structure-activity landscapes through feedback. Instead of full-deck screening, only focused subsets of compounds are tested, and the experimental readout is used to refine molecule selection for subsequent screening cycles. Once implemented, these techniques have the potential to reduce costs and save precious materials. Here, we provide a comprehensive overview of the various computational active-learning approaches and outline their potential for drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Studies on the Himbert Intramolecular Arene/ Allene Diels – Alder Cycloaddition. Mechanistic Studies and Expansion of Scope to All-Carbon Tethers

    PubMed Central

    Schmidt, Yvonne; Lam, Jonathan K.; Pham, Hung V.; Houk, K. N.; Vanderwal, Christopher D.

    2013-01-01

    The unusual intramolecular arene/allene cycloaddition described thirty years ago by Himbert permits rapid access to strained polycyclic compounds that offer great potential for the synthesis of complex scaffolds. To more fully understand the mechanism of this cycloaddition reaction, and to guide efforts to extend its scope to new substrates, quantum mechanical computational methods were employed in concert with laboratory experiments. These studies indicated that the cycloadditions likely proceed via concerted processes; a stepwise biradical mechanism was shown to be higher in energy in the cases studied. The original Himbert cycloaddition chemistry is also extended from heterocyclic to carbocyclic systems, with computational guidance used to predict thermodynamically favorable cases. Complex polycyclic scaffolds result from the combination of the cycloaddition and subsequent ring-rearrangement metathesis reactions. PMID:23634642

  13. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    PubMed

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the observable, by users, proportion of the voxels in a volume during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering processes; for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of all the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VH to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of the modern graphical processing units (GPUs) and this enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency for the VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major degradation in the VH visually and numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying Ks (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also had an improved performance when compared to the conventional method of down-sampling of the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data.

    PubMed

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-08-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.

  15. DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data

    PubMed Central

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-01-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089

  16. Code Modernization of VPIC

    NASA Astrophysics Data System (ADS)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive, and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable-performance, and cache oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform agnostic modern code-development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsic based vectorisation with compile generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU capable OpenMP variant of VPIC. Finally we include a lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  17. Towards high fidelity numerical wave tanks for modelling coastal and ocean engineering processes

    NASA Astrophysics Data System (ADS)

    Cozzuto, G.; Dimakopoulos, A.; de Lataillade, T.; Kees, C. E.

    2017-12-01

    With the increasing availability of computational resources, the engineering and research community is gradually moving towards using high fidelity Comutational Fluid Mechanics (CFD) models to perform numerical tests for improving the understanding of physical processes pertaining to wave propapagation and interaction with the coastal environment and morphology, either physical or man-made. It is therefore important to be able to reproduce in these models the conditions that drive these processes. So far, in CFD models the norm is to use regular (linear or nonlinear) waves for performing numerical tests, however, only random waves exist in nature. In this work, we will initially present the verification and validation of numerical wave tanks based on Proteus, an open-soruce computational toolkit based on finite element analysis, with respect to the generation, propagation and absorption of random sea states comprising of long non-repeating wave sequences. Statistical and spectral processing of results demonstrate that the methodologies employed (including relaxation zone methods and moving wave paddles) are capable of producing results of similar quality to the wave tanks used in laboratories (Figure 1). Subsequently cases studies of modelling complex process relevant to coastal defences and floating structures such as sliding and overturning of composite breakwaters, heave and roll response of floating caissons are presented. Figure 1: Wave spectra in the numerical wave tank (coloured symbols), compared against the JONSWAP distribution

  18. Boolean Logic Tree of Label-Free Dual-Signal Electrochemical Aptasensor System for Biosensing, Three-State Logic Computation, and Keypad Lock Security Operation.

    PubMed

    Lu, Jiao Yang; Zhang, Xin Xing; Huang, Wei Tao; Zhu, Qiu Yan; Ding, Xue Zhi; Xia, Li Qiu; Luo, Hong Qun; Li, Nian Bing

    2017-09-19

    The most serious and yet unsolved problems of molecular logic computing consist in how to connect molecular events in complex systems into a usable device with specific functions and how to selectively control branchy logic processes from the cascading logic systems. This report demonstrates that a Boolean logic tree is utilized to organize and connect "plug and play" chemical events DNA, nanomaterials, organic dye, biomolecule, and denaturant for developing the dual-signal electrochemical evolution aptasensor system with good resettability for amplification detection of thrombin, controllable and selectable three-state logic computation, and keypad lock security operation. The aptasensor system combines the merits of DNA-functionalized nanoamplification architecture and simple dual-signal electroactive dye brilliant cresyl blue for sensitive and selective detection of thrombin with a wide linear response range of 0.02-100 nM and a detection limit of 1.92 pM. By using these aforementioned chemical events as inputs and the differential pulse voltammetry current changes at different voltages as dual outputs, a resettable three-input biomolecular keypad lock based on sequential logic is established. Moreover, the first example of controllable and selectable three-state molecular logic computation with active-high and active-low logic functions can be implemented and allows the output ports to assume a high impediment or nothing (Z) state in addition to the 0 and 1 logic levels, effectively controlling subsequent branchy logic computation processes. Our approach is helpful in developing the advanced controllable and selectable logic computing and sensing system in large-scale integration circuits for application in biomedical engineering, intelligent sensing, and control.

  19. Fast generation of Fresnel holograms based on multirate filtering.

    PubMed

    Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung

    2009-12-01

    One of the major problems in computer-generated holography is the high computation cost involved for the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal parallax only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole and the reconstructed image can be observed easily.

  20. Using the Computer in Evolution Studies

    ERIC Educational Resources Information Center

    Mariner, James L.

    1973-01-01

    Describes a high school biology exercise in which a computer greatly reduces time spent on calculations. Genetic equilibrium demonstrated by the Hardy-Weinberg principle and the subsequent effects of violating any of its premises are more readily understood when frequencies of alleles through many generations are calculated by the computer. (JR)

  1. Differential interference effects of negative emotional states on subsequent semantic and perceptual processing

    PubMed Central

    Gorlick, Marissa A.; Mather, Mara

    2012-01-01

    Past studies have revealed that encountering negative events interferes with cognitive processing of subsequent stimuli. The present study investigated whether negative events affect semantic and perceptual processing differently. Presentation of negative pictures produced slower reaction times than neutral or positive pictures in tasks that require semantic processing, such as natural/man-made judgments about drawings of objects, commonness judgments about objects, and categorical judgments about pairs of words. In contrast, negative picture presentation did not slow down judgments in subsequent perceptual processing (e.g., color judgments about words, and size judgments about objects). The subjective arousal level of negative pictures did not modulate the interference effects on semantic/perceptual processing. These findings indicate that encountering negative emotional events interferes with semantic processing of subsequent stimuli more strongly than perceptual processing, and that not all types of subsequent cognitive processing are impaired by negative events. PMID:22142207

  2. Differential interference effects of negative emotional states on subsequent semantic and perceptual processing.

    PubMed

    Sakaki, Michiko; Gorlick, Marissa A; Mather, Mara

    2011-12-01

    Past studies have revealed that encountering negative events interferes with cognitive processing of subsequent stimuli. The present study investigates whether negative events affect semantic and perceptual processing differently. Presentation of negative pictures produced slower reaction times than neutral or positive pictures in tasks that require semantic processing, such as natural or man-made judgments about drawings of objects, commonness judgments about objects, and categorical judgments about pairs of words. In contrast, negative picture presentation did not slow down judgments in subsequent perceptual processing (e.g., color judgments about words, size judgments about objects). The subjective arousal level of negative pictures did not modulate the interference effects on semantic or perceptual processing. These findings indicate that encountering negative emotional events interferes with semantic processing of subsequent stimuli more strongly than perceptual processing, and that not all types of subsequent cognitive processing are impaired by negative events. (c) 2011 APA, all rights reserved.

  3. IEEE International Symposium on Biomedical Imaging.

    PubMed

    2017-01-01

    The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative from the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials, and a scientific program composed of plenary talks, invited special sessions, challenges, as well as oral and poster presentations of peer-reviewed papers. High-quality papers are requested containing original contributions to the topics of interest including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published in the symposium proceedings published by IEEE and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.

  4. Principles for the wise use of computers by children.

    PubMed

    Straker, L; Pollock, C; Maslen, B

    2009-11-01

    Computer use by children at home and school is now common in many countries. Child computer exposure varies with the type of computer technology available and the child's age, gender and social group. This paper reviews the current exposure data and the evidence for positive and negative effects of computer use by children. Potential positive effects of computer use by children include enhanced cognitive development and school achievement, reduced barriers to social interaction, enhanced fine motor skills and visual processing and effective rehabilitation. Potential negative effects include threats to child safety, inappropriate content, exposure to violence, bullying, Internet 'addiction', displacement of moderate/vigorous physical activity, exposure to junk food advertising, sleep displacement, vision problems and musculoskeletal problems. The case for child specific evidence-based guidelines for wise use of computers is presented based on children using computers differently to adults, being physically, cognitively and socially different to adults, being in a state of change and development and the potential to impact on later adult risk. Progress towards child-specific guidelines is reported. Finally, a set of guideline principles is presented as the basis for more detailed guidelines on the physical, cognitive and social impact of computer use by children. The principles cover computer literacy, technology safety, child safety and privacy and appropriate social, cognitive and physical development. The majority of children in affluent communities now have substantial exposure to computers. This is likely to have significant effects on child physical, cognitive and social development. Ergonomics can provide and promote guidelines for wise use of computers by children and by doing so promote the positive effects and reduce the negative effects of computer-child, and subsequent computer-adult, interaction.

  5. Biomarkers in Computational Toxicology

    EPA Science Inventory

    Biomarkers are a means to evaluate chemical exposure and/or the subsequent impacts on toxicity pathways that lead to adverse health outcomes. Computational toxicology can integrate biomarker data with knowledge of exposure, chemistry, biology, pharmacokinetics, toxicology, and e...

  6. ATLAS, an integrated structural analysis and design system. Volume 1: ATLAS user's guide

    NASA Technical Reports Server (NTRS)

    Dreisbach, R. L. (Editor)

    1979-01-01

    Some of the many analytical capabilities provided by the ATLAS Version 4.0 System in the logical sequence are described in which model-definition data are prepared and the subsequent computer job is executed. The example data presented and the fundamental technical considerations that are highlighted can be used as guides during the problem solving process. This guide does not describe the details of the ATLAS capabilities, but provides an introduction to the new user of ATLAS to the level at which the complete array of capabilities described in the ATLAS User's Manual can be exploited fully.

  7. On the Violence of High Explosive Reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarver, C M; Chidester, S K

    High explosive reactions can be caused by three general energy deposition processes: impact ignition by frictional and/or shear heating; bulk thermal heating; and shock compression. The violence of the subsequent reaction varies from benign slow combustion to catastrophic detonation of the entire charge. The degree of violence depends on many variables, including the rate of energy delivery, the physical and chemical properties of the explosive, and the strength of the confinement surrounding the explosive charge. The current state of experimental and computer modeling research on the violence of impact, thermal, and shock-induced reactions is reviewed.

  8. Dynamic, diagnostic, and pharmacological radionuclide studies of the esophagus in achalasia: correlation with manometric measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozen, P.; Gelfond, M.; Zaltzman, S.

    1982-08-01

    The esophagus was evaluated in 15 patients with achalasia by continuous gamma camera imaging following ingestion of a semi-solid meal labeled with /sup 99//sup m/Tc. The images were displayed and recorded on a simple computerized data processing/display system. Subsequent cine' mode images of esophagela emptying demonstrated abnormalities of the body of the esophagus not reflected by the manometric examination. Computer-generated time-activity curves representing specific regions of interest were better than manometry in evaluating the results of myotomy, dilatation, and drug therapy. Isosorbide dinitrate significantly improved esophageal emptying.

  9. Emerging role of multi-detector computed tomography in the diagnosis of hematuria following percutaneous nephrolithotomy: A case scenario.

    PubMed

    Sivanandam, S E; Mathew, Georgie; Bhat, Sanjay H

    2009-07-01

    Persistent hematuria is one of the most dreaded complications following percutanous nephrolithotomy (PCNL). Although invasive, a catheter-based angiogram is usually used to localize the bleeding vessel and subsequently embolize it. Advances in imaging technology have now made it possible to use a non invasive multi-detector computed tomography (MDCT) angiogram with 3-D reconstruction to establish the diagnosis. We report a case of post-PCNL hemorrhage due to a pseudo aneurysm that was missed by a conventional angiogram and subsequently detected on MDCT angiogram.

  10. Effects of heat treatments on microstructure and properties of Ti-6Al-4V ELI alloy fabricated by electron beam melting (EBM)

    DOE PAGES

    Galarraga, Haize; Warren, Robert J.; Lados, Diana A.; ...

    2017-01-06

    Electron beam melting (EBM) is a metal powder bed fusion additive manufacturing (AM) technology that is used to fabricate three-dimensional near-net-shaped parts directly from computer models. Ti-6Al-4V is the most widely used and studied alloy for this technology and is the focus of this work in its ELI (Extra Low Interstitial) variation. The mechanisms of microstructure formation, evolution, and its subsequent influence on mechanical properties of the alloy in as-fabricated condition have been documented by various researchers. In the present work, the thermal history resulting in the formation of the as-fabricated microstructure was analyzed and studied by a thermal simulation.more » Subsequently different heat treatments were performed based on three approaches in order to study the effects of heat treatments on the singular and exclusive microstructure formed during the EBM fabrication process. In the first approach, the effect of cooling rate after the solutionizing process was studied. In the second approach, the variation of α lath thickness during annealing treatment and correlation with mechanical properties was established. In the last approach, several solutionizing and aging experiments were conducted.« less

  11. A subsequent closed-form description of propagated signaling phenomena in the membrane of an axon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melendy, Robert F., E-mail: rfmelendy@liberty.edu

    2016-05-15

    I recently introduced a closed-form description of propagated signaling phenomena in the membrane of an axon [R.F. Melendy, Journal of Applied Physics 118, 244701 (2015)]. Those results demonstrate how intracellular conductance, the thermodynamics of magnetization, and current modulation, function together in generating an action potential in a unified, closed-form description. At present, I report on a subsequent closed-form model that unifies intracellular conductance and the thermodynamics of magnetization, with the membrane electric field, E{sub m}. It’s anticipated this work will compel researchers in biophysics, physical biology, and the computational neurosciences, to probe deeper into the classical and quantum features ofmore » membrane magnetization and signaling, informed by the computational features of this subsequent model.« less

  12. Dyadic Instruction for Middle School Students: Liking Promotes Learning

    PubMed Central

    Hartl, Amy C.; DeLay, Dawn; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy

    2015-01-01

    This study examines whether friendship facilitates or hinders learning in a dyadic instructional setting. Working in 80 same-sex pairs, 160 (60 girls, 100 boys) middle school students (M = 12.13 years old) were taught a new computer programming language and programmed a game. Students spent 14 to 30 (M = 22.7) hours in a programming class. At the beginning and the end of the project, each participant separately completed (a) computer programming knowledge assessments and (b) questionnaires rating their affinity for their partner. Results support the proposition that liking promotes learning: Greater partner affinity predicted greater subsequent increases in computer programming knowledge for both partners. One partner’s initial programming knowledge also positively predicted the other partner’s subsequent partner affinity. PMID:26688658

  13. Computational challenges in modeling gene regulatory events.

    PubMed

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  14. 20 CFR 404.252 - Subsequent entitlement to benefits 12 months or more after entitlement to disability benefits ended.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing... situation, we compute your second-entitlement primary insurance amount by selecting the higher of the following: (a) New primary insurance amount. The primary insurance amount computed as of the time of your...

  15. 20 CFR 404.251 - Subsequent entitlement to benefits less than 12 months after entitlement to disability benefits...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing...) Disability before 1979; second entitlement after 1978. In this situation, we compute your second-entitlement... primary insurance amount computed for you as of the time of your second entitlement under any method for...

  16. A new computer-based counselling system for the promotion of physical activity in patients with chronic diseases--results from a pilot study.

    PubMed

    Becker, Annette; Herzberg, Dominikus; Marsden, Nicola; Thomanek, Sabine; Jung, Hartmut; Leonhardt, Corinna

    2011-05-01

    To develop a computer-based counselling system (CBCS) for the improvement of attitudes towards physical activity in chronically ill patients and to pilot its efficacy and acceptance in primary care. The system is tailored to patients' disease and motivational stage. During a pilot study in five German general practices, patients answered questions before, directly and 6 weeks after using the CBCS. Outcome criteria were attitudes and self-efficacy. Qualitative interviews were performed to identify acceptance indicators. Seventy-nine patients participated (mean age: 64.5 years, 53% males; 38% without previous computer experience). Patients' affective and cognitive attitudes changed significantly, self-efficacy showed only minor changes. Patients mentioned no difficulties in interacting with the CBCS. However, perception of the system's usefulness was inconsistent. Computer-based counselling for physical activity related attitudes in patients with chronic diseases is feasible, but the circumstances of use with respect to the target group and its integration into the management process have to be clarified in future studies. This study adds to the understanding of computer-based counselling in primary health care. Acceptance indicators identified in this study will be validated as part of a questionnaire on technology acceptability in a subsequent study. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Science Support for Space-Based Droplet Combustion: Drop Tower Experiments and Detailed Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Marchese, Anthony J.; Dryer, Frederick L.

    1997-01-01

    This program supports the engineering design, data analysis, and data interpretation requirements for the study of initially single component, spherically symmetric, isolated droplet combustion studies. Experimental emphasis is on the study of simple alcohols (methanol, ethanol) and alkanes (n-heptane, n-decane) as fuels with time dependent measurements of drop size, flame-stand-off, liquid-phase composition, and finally, extinction. Experiments have included bench-scale studies at Princeton, studies in the 2.2 and 5.18 drop towers at NASA-LeRC, and both the Fiber Supported Droplet Combustion (FSDC-1, FSDC-2) and the free Droplet Combustion Experiment (DCE) studies aboard the shuttle. Test matrix and data interpretation are performed through spherically-symmetric, time-dependent numerical computations which embody detailed sub-models for physical and chemical processes. The computed burning rate, flame stand-off, and extinction diameter are compared with the respective measurements for each individual experiment. In particular, the data from FSDC-1 and subsequent space-based experiments provide the opportunity to compare all three types of data simultaneously with the computed parameters. Recent numerical efforts are extending the computational tools to consider time dependent, axisymmetric 2-dimensional reactive flow situations.

  18. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    PubMed

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.

  19. Static Memory Deduplication for Performance Optimization in Cloud Computing

    PubMed Central

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-01-01

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible. PMID:28448434

  20. Evaluation of a deformable registration algorithm for subsequent lung computed tomography imaging during radiochemotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stützer, Kristin; Haase, Robert; Exner, Florian

    2016-09-15

    Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques. Furthermore, investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution, with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. As shown by all three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.

  1. System and method for measuring ocean surface currents at locations remote from land masses using synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Young, Lawrence E. (Inventor)

    1991-01-01

    A system for measuring ocean surface currents from an airborne platform is disclosed. A radar system having two spaced antennas, wherein one antenna is driven and return signals from the ocean surface are detected by both antennas, is employed to obtain raw ocean current data, which are saved for later processing. A pair of Global Positioning System (GPS) receivers, with a first antenna carried by the platform at a first location and a second antenna carried at a second location displaced from the first, determines the positions of the antennas from signals from orbiting GPS navigation satellites. These data are also saved for later processing. The saved data are subsequently processed by a ground-based computer system to determine the position, orientation, and velocity of the platform, as well as to derive measurements of currents on the ocean surface.

  2. Computer simulations of sympatric speciation in a simple food web

    NASA Astrophysics Data System (ADS)

    Luz-Burgoa, K.; Dell, Tony; de Oliveira, S. Moss

    2005-07-01

    Galapagos finches have motivated much theoretical research aimed at understanding the processes associated with the formation of new species. Inspired by them, in this paper we investigate the process of sympatric speciation in a simple food web model. For that we modify the individual-based Penna model that has been widely used to study aging as well as other evolutionary processes. Initially, our web consists of a primary food source and a single herbivore species that feeds on this resource. Subsequently we introduce a predator that feeds on the herbivore. In both instances we directly manipulate the basal resource distribution and monitor the changes in the populations. Sympatric speciation is obtained for the top species in both cases, and our results suggest that the speciation velocity depends on how far up the food chain the focal population feeds. Simulations are done with three different sexual imprinting-like mechanisms, in order to discuss adaptation by natural selection.
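
    As a rough illustration of the individual-based machinery referred to above, one time step of a minimal asexual Penna bit-string model might look like the sketch below; the food-web coupling, sexual reproduction and imprinting-like mechanisms of the study are omitted, and all parameter values are illustrative assumptions rather than those used in the paper.

      import random

      GENOME_BITS = 32   # ages covered by the bit-string genome
      T = 3              # death threshold on active deleterious mutations
      R = 8              # minimum reproduction age
      M = 1              # new deleterious mutations per offspring
      BIRTHS = 2         # offspring per reproduction event

      def step(population, capacity=10_000):
          """One time step of a minimal asexual Penna model; individuals are
          (genome, age) pairs, with set bits marking deleterious mutations."""
          survivors = []
          for genome, age in population:
              age += 1
              if age > GENOME_BITS:
                  continue
              active = bin(genome & ((1 << age) - 1)).count("1")  # mutations expressed so far
              # threshold death plus Verhulst (logistic) culling
              if active >= T or random.random() < len(population) / capacity:
                  continue
              survivors.append((genome, age))
              if age >= R:
                  for _ in range(BIRTHS):
                      child = genome
                      for _ in range(M):
                          child |= 1 << random.randrange(GENOME_BITS)
                      survivors.append((child, 0))
          return survivors

      population = [(0, 0)] * 1000
      for _ in range(200):
          population = step(population)
      print("population size after 200 steps:", len(population))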

  3. Using Student Writing and Lexical Analysis to Reveal Student Thinking about the Role of Stop Codons in the Central Dogma

    PubMed Central

    Prevost, Luanna B.; Smith, Michelle K.; Knight, Jennifer K.

    2016-01-01

    Previous work has shown that students have persistent difficulties in understanding how central dogma processes can be affected by a stop codon mutation. To explore these difficulties, we modified two multiple-choice questions from the Genetics Concept Assessment into three open-ended questions that asked students to write about how a stop codon mutation potentially impacts replication, transcription, and translation. We then used computer-assisted lexical analysis combined with human scoring to categorize student responses. The lexical analysis models showed high agreement with human scoring, demonstrating that this approach can be successfully used to analyze large numbers of student written responses. The results of this analysis show that students’ ideas about one process in the central dogma can affect their thinking about subsequent and previous processes, leading to mixed models of conceptual understanding. PMID:27909016

  4. Quantum memories: emerging applications and recent advances

    NASA Astrophysics Data System (ADS)

    Heshami, Khabat; England, Duncan G.; Humphreys, Peter C.; Bustard, Philip J.; Acosta, Victor M.; Nunn, Joshua; Sussman, Benjamin J.

    2016-11-01

    Quantum light-matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories.

  5. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.

  6. An efficient method for facial component detection in thermal images

    NASA Astrophysics Data System (ADS)

    Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen

    2015-04-01

    A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
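
    A hedged sketch of this pipeline (background thresholding, morphological clean-up, a restricted search window around the center of mass, and integral projections) is given below; the temperature thresholds, window proportions and helper name are illustrative assumptions, not the values used in the paper.

      import numpy as np
      from scipy import ndimage

      def locate_periorbital(thermal, face_thresh=305.0, warm_offset=1.5):
          """Very rough localization of the warm periorbital band in a thermal
          image (temperatures in kelvin). All thresholds are illustrative."""
          # 1) segment the face from the cooler background and clean it up
          face = thermal > face_thresh
          face = ndimage.binary_opening(face, iterations=2)

          # 2) restrict the search to a window around the face's center of mass
          cy, cx = ndimage.center_of_mass(face.astype(float))
          h, w = thermal.shape
          r0, r1 = max(int(cy - 0.25 * h), 0), int(cy)        # upper part of the face
          c0, c1 = max(int(cx - 0.25 * w), 0), int(cx + 0.25 * w)
          window = thermal[r0:r1, c0:c1]

          # 3) binarize the warmest pixels and take integral projections
          warm = window > (window.mean() + warm_offset)
          row_profile = warm.sum(axis=1)    # horizontal integral projection
          col_profile = warm.sum(axis=0)    # vertical integral projection
          return r0 + int(np.argmax(row_profile)), c0 + int(np.argmax(col_profile))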

  7. The role of mobile computed tomography in mass fatality incidents.

    PubMed

    Rutty, Guy N; Robinson, Claire E; BouHaidar, Ralph; Jeffery, Amanda J; Morgan, Bruno

    2007-11-01

    Mobile multi-detector computed tomography (MDCT) scanners are potentially available to temporary mortuaries and can be operational within 20 min of arrival. We describe, to our knowledge, the first use of mobile MDCT for a mass fatality incident. A mobile MDCT scanner attended the disaster mortuary after a five vehicle road traffic incident. Five out of six bodies were successfully imaged by MDCT in c. 15 min per body. Subsequent full radiological analysis took c. 1 h per case. The results were compared to the autopsy examinations. We discuss the advantages and disadvantages of imaging with mobile MDCT in relation to mass fatality work, illustrating the body pathway process, and its role in the identification of the pathology, personal effects, and health and safety hazards. We propose that the adoption of a single modality of mobile MDCT could replace the current use of multiple radiological sources within a mass fatality mortuary.

  8. The benefits of the Atlas of Human Cardiac Anatomy website for the design of cardiac devices.

    PubMed

    Spencer, Julianne H; Quill, Jason L; Bateman, Michael G; Eggen, Michael D; Howard, Stephen A; Goff, Ryan P; Howard, Brian T; Quallich, Stephen G; Iaizzo, Paul A

    2013-11-01

    This paper describes how the Atlas of Human Cardiac Anatomy website can be used to improve cardiac device design throughout the process of development. The Atlas is a free-access website featuring novel images of both functional and fixed human cardiac anatomy from over 250 human heart specimens. This website provides numerous educational tutorials on anatomy, physiology and various imaging modalities. For instance, the 'device tutorial' provides examples of devices that were either present at the time of in vitro reanimation or were subsequently delivered, including leads, catheters, valves, annuloplasty rings and stents. Another section of the website displays 3D models of the vasculature, blood volumes and/or tissue volumes reconstructed from computed tomography and magnetic resonance images of various heart specimens. The website shares library images, video clips and computed tomography and MRI DICOM files in honor of the generous gifts received from donors and their families.

  9. Computational Aerothermodynamics in Aeroassist Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.

  10. Quantum memories: emerging applications and recent advances.

    PubMed

    Heshami, Khabat; England, Duncan G; Humphreys, Peter C; Bustard, Philip J; Acosta, Victor M; Nunn, Joshua; Sussman, Benjamin J

    2016-11-12

    Quantum light-matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories.

  11. Quantum memories: emerging applications and recent advances

    PubMed Central

    Heshami, Khabat; England, Duncan G.; Humphreys, Peter C.; Bustard, Philip J.; Acosta, Victor M.; Nunn, Joshua; Sussman, Benjamin J.

    2016-01-01

    Quantum light–matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories. PMID:27695198

  12. Data-driven coarse graining in action: Modeling and prediction of complex systems

    NASA Astrophysics Data System (ADS)

    Krumscheid, S.; Pradas, M.; Pavliotis, G. A.; Kalliadasis, S.

    2015-10-01

    In many physical, technological, social, and economic applications, one is commonly faced with the task of estimating statistical properties, such as mean first passage times of a temporal continuous process, from empirical data (experimental observations). Typically, however, an accurate and reliable estimation of such properties directly from the data alone is not possible as the time series is often too short, or the particular phenomenon of interest is only rarely observed. We propose here a theoretical-computational framework which provides us with a systematic and rational estimation of statistical quantities of a given temporal process, such as waiting times between subsequent bursts of activity in intermittent signals. Our framework is illustrated with applications from real-world data sets, ranging from marine biology to paleoclimatic data.

  13. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue condition monitoring of springs, and a new telescopic-characteristics test method is proposed for the circuit breaker operating mechanism spring based on image processing technology. A high-speed camera is used to capture spring movement image sequences when the high-voltage circuit breaker operates. The image-matching method is then used to obtain the deformation-time curve and speed-time curve, and the spring expansion and deformation parameters are extracted from them, laying a foundation for subsequent spring force analysis and matching-state evaluation. Simulation tests at the experimental site show that this image-analysis method avoids the complex installation problems of traditional mechanical sensors and supports online monitoring and status assessment of the circuit breaker spring.
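
    One plausible form of the image-matching step is sketched below under the assumption of normalized cross-correlation template matching (the paper's exact matching method and camera calibration are not specified here): a reference patch on the spring is tracked through the frame sequence to build the deformation-time and speed-time curves.

      import cv2
      import numpy as np

      def track_displacement(frames, template, fps, mm_per_pixel):
          """Track a reference patch across grayscale frames; return time (s),
          vertical displacement (mm) and velocity (mm/s) arrays."""
          rows = []
          for frame in frames:
              scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
              _, _, _, max_loc = cv2.minMaxLoc(scores)   # (x, y) of the best match
              rows.append(max_loc[1])                    # vertical pixel position
          y = np.asarray(rows, dtype=float)
          t = np.arange(len(frames)) / fps
          displacement = (y - y[0]) * mm_per_pixel       # deformation-time curve
          velocity = np.gradient(displacement, t)        # speed-time curve
          return t, displacement, velocity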

  14. COMPUTER DATA PROCESSING SYSTEM. PROJECT ROVER, 1962

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narin, F.

    A system was created for processing large volumes of data from Project ROVER tests at the Nevada Test Site. The data are compiled as analog, frequency modulated tape, which is translated in a Packard-Bell Tape-to-Tape converter into a binary coded decimal (BCD) IBM 7090 computer input tape. This input tape, tape A5, is processed on the 7090 by the RDH-D FORTRAN-II code and its 20 FAP and FORTRAN subroutines. Outputs from the 7090 run are tapes A3, which is a BCD tape used for listing on the IBM 1401 input-output computer, tape B5 which is a binary tape used as input to a Stromberg-Carlson 40/20 cathode ray tube (CRT) plotter, and tape B6 which is a binary tape used for permanent data storage and input to specialized subcodes. The information on tape B5 commands the 40/20 to write grids, data points, and other information on the face of a CRT; the information on the CRT is photographed on 35 mm film which is subsequently developed; full-size (10" x 10") plots are made from the 35 mm film on a Xerox 1824 printer. The 7090 processes a data channel in approximately 4 seconds plus 4 seconds per plot to be made on the 40/20 for that channel. Up to 4500 data and calibration points on any one channel may be processed in one pass of the RDH-D code. This system has been used to produce more than 100,000 prints on the 1824 printer from more than 10,000 different 40/20 plots. At 00 per minute of 7090 time, it costs 60 to process a typical, 3-plot data channel on the 7090; each print on the 1824 costs between 5 and 10 cents including rental, supplies, and operator time. All automatic computer stops in the codes and subroutines are accompanied by on-line instructions to the operator. Extensive redundancy checking is incorporated in the FAP tape handling subroutines. (auth)

  15. Computational challenges in modeling gene regulatory events

    PubMed Central

    Pataskar, Abhijeet; Tiwari, Vijay K.

    2016-01-01

    ABSTRACT Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating “omics” data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology. PMID:27390891

  16. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563

  17. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    PubMed

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
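
    For reference, the trilinear scheme recommended in the two records above evaluates a weighted average of the eight voxels surrounding a continuous sample point; a minimal sketch of the standard formula (without boundary handling, and not the authors' optimized implementation) follows.

      import numpy as np

      def trilinear(volume, x, y, z):
          """Trilinearly interpolate `volume` (a 3D array indexed volume[ix, iy, iz])
          at the continuous coordinate (x, y, z). No boundary handling."""
          ix, iy, iz = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
          fx, fy, fz = x - ix, y - iy, z - iz
          c = volume[ix:ix + 2, iy:iy + 2, iz:iz + 2].astype(float)   # 2x2x2 corner values
          c = c[0] * (1 - fx) + c[1] * fx      # collapse the x axis
          c = c[0] * (1 - fy) + c[1] * fy      # collapse the y axis
          return c[0] * (1 - fz) + c[1] * fz   # collapse the z axis

      vol = np.arange(27, dtype=float).reshape(3, 3, 3)
      print(trilinear(vol, 0.5, 0.5, 0.5))     # 6.5, the mean of the 8 surrounding voxels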

  18. Computational analysis of stochastic heterogeneity in PCR amplification efficiency revealed by single molecule barcoding

    PubMed Central

    Best, Katharine; Oakes, Theres; Heather, James M.; Shawe-Taylor, John; Chain, Benny

    2015-01-01

    The polymerase chain reaction (PCR) is one of the most widely used techniques in molecular biology. In combination with High Throughput Sequencing (HTS), PCR is widely used to quantify transcript abundance for RNA-seq, and in the context of analysis of T and B cell receptor repertoires. In this study, we combine DNA barcoding with HTS to quantify PCR output from individual target molecules. We develop computational tools that simulate both the PCR branching process itself, and the subsequent subsampling which typically occurs during HTS sequencing. We explore the influence of different types of heterogeneity on sequencing output, and compare them to experimental results where the efficiency of amplification is measured by barcodes uniquely identifying each molecule of starting template. Our results demonstrate that the PCR process introduces substantial amplification heterogeneity, independent of primer sequence and bulk experimental conditions. This heterogeneity can be attributed both to inherited differences between different template DNA molecules, and the inherent stochasticity of the PCR process. The results demonstrate that PCR heterogeneity arises even when reaction and substrate conditions are kept as constant as possible, and therefore single molecule barcoding is essential in order to derive reproducible quantitative results from any protocol combining PCR with HTS. PMID:26459131
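
    The two simulated stages described above (a per-molecule PCR branching process with inherited efficiencies, followed by sequencing subsampling) can be sketched as below; efficiencies, cycle counts and read depth are illustrative assumptions, not the study's fitted values.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_pcr(n_molecules=100, cycles=20, mean_eff=0.8, eff_sd=0.1):
          """Each starting molecule inherits its own amplification efficiency;
          in every cycle each existing copy duplicates with that probability."""
          eff = np.clip(rng.normal(mean_eff, eff_sd, n_molecules), 0.0, 1.0)
          copies = np.ones(n_molecules, dtype=np.int64)
          for _ in range(cycles):
              copies += rng.binomial(copies, eff)     # stochastic duplication
          return copies

      def subsample_reads(copies, depth=50_000):
          """HTS subsampling: reads are drawn in proportion to each barcode's copies."""
          return rng.multinomial(depth, copies / copies.sum())

      copies = simulate_pcr()
      reads = subsample_reads(copies)
      print("CV of final copy numbers:", copies.std() / copies.mean())
      print("CV of observed read counts:", reads.std() / reads.mean())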

  19. Rapid prototyping of update algorithm of discrete Fourier transform for real-time signal processing

    NASA Astrophysics Data System (ADS)

    Kakad, Yogendra P.; Sherlock, Barry G.; Chatapuram, Krishnan V.; Bishop, Stephen

    2001-10-01

    An algorithm is developed in the companion paper to update the existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way uses less computation than directly evaluating the DFT with the FFT algorithm; this reduces the computational order by a factor of log2 N. The algorithm works in the presence of a data window function and supports the rectangular, split triangular, Hanning, Hamming, and Blackman windows. In this paper, a hardware implementation of this algorithm using FPGA technology is outlined. Unlike traditional fully customized VLSI circuits, FPGAs represent a technical breakthrough in the corresponding industry. An FPGA implements thousands of gates of logic in a single IC chip and can be programmed by users at their site in a few seconds or less, depending on the type of device used. The risk is low and the development time is short. These advantages have made FPGAs very popular for rapid prototyping of algorithms in the areas of digital communication, digital signal processing, and image processing. Our paper addresses the related issues of implementing the design in a hardware description language and subsequently downloading it onto the programmable hardware chip.
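
    For the rectangular-window case, the update described above reduces to the well-known sliding-DFT recurrence X_new[k] = (X_old[k] - x_oldest + x_newest) e^{j2\pi k/N}; the windowed variants covered by the paper require additional correction terms not shown here. A minimal software sketch, useful for checking a hardware implementation against NumPy's FFT:

      import numpy as np

      def sliding_dft_update(X, x_oldest, x_newest):
          """Update an N-point DFT when the oldest sample leaves the window and a
          new sample enters (rectangular window only): O(N) work instead of O(N log N)."""
          N = len(X)
          twiddle = np.exp(2j * np.pi * np.arange(N) / N)
          return (X - x_oldest + x_newest) * twiddle

      # check the update against a direct FFT of the shifted window
      rng = np.random.default_rng(1)
      x = rng.standard_normal(64)
      new_sample = rng.standard_normal()
      X_updated = sliding_dft_update(np.fft.fft(x), x[0], new_sample)
      X_direct = np.fft.fft(np.append(x[1:], new_sample))
      print(np.allclose(X_updated, X_direct))             # True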

  20. Encoding-related brain activity dissociates between the recollective processes underlying successful recall and recognition: a subsequent-memory study.

    PubMed

    Sadeh, Talya; Maril, Anat; Goshen-Gottstein, Yonatan

    2012-07-01

    The subsequent-memory (SM) paradigm uncovers brain mechanisms that are associated with mnemonic activity during encoding by measuring participants' neural activity during encoding and classifying the encoding trials according to performance in the subsequent retrieval phase. The majority of these studies have converged on the notion that the mechanism supporting recognition is mediated by familiarity and recollection. The process of recollection is often assumed to be a recall-like process, implying that the active search for the memory trace is similar, if not identical, for recall and recognition. Here we challenge this assumption and hypothesize - based on previous findings obtained in our lab - that the recollective processes underlying recall and recognition might show dissociative patterns of encoding-related brain activity. To this end, our design controlled for familiarity, thereby focusing on contextual, recollective processes. We found evidence for dissociative neurocognitive encoding mechanisms supporting subsequent-recall and subsequent-recognition. Specifically, the contrast of subsequent-recognition versus subsequent-recall revealed activation in the Parahippocampal cortex (PHc) and the posterior hippocampus--regions associated with contextual processing. Implications of our findings and their relation to current cognitive models of recollection are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Influence of delivery strategy on message-processing mechanisms and future adherence to a Dutch computer-tailored smoking cessation intervention.

    PubMed

    Stanczyk, Nicola Esther; Crutzen, Rik; Bolman, Catherine; Muris, Jean; de Vries, Hein

    2013-02-06

    Smoking tobacco is one of the most preventable causes of illness and death. Web-based tailored smoking cessation interventions have been shown to be effective. Although these interventions have the potential to reach a large number of smokers, they often face high attrition rates, especially among lower educated smokers. A possible reason for the high attrition rates in the latter group is that computer-tailored smoking cessation interventions may not be attractive enough as they are mainly text-based. Video-based messages might be more effective in attracting attention and stimulating comprehension in people with a lower educational level and could therefore reduce attrition rates. The objective of the present study was to investigate whether differences exist in message-processing mechanisms (attention, comprehension, self-reference, appreciation, processing) and future adherence (intention to visit/use the website again, recommend the website to others), according to delivery strategy (video or text-based messages) and educational level, to a Dutch computer-tailored smoking cessation program. Smokers who were motivated to quit within the following 6 months and who were aged over 16 were included in the program. Participants were randomly assigned to one of two conditions (video/text CT). The sample was stratified into 2 categories: lower and higher educated participants. In total, 139 participants completed the first session of the web-based tailored intervention and were subsequently asked to fill out a questionnaire assessing message-processing mechanisms and future adherence. ANOVAs and regression analyses were conducted to investigate the differences in message-processing mechanisms and future adherence with regard to delivery strategy and education. No interaction effects were found between delivery strategy (video vs text) and educational level on message-processing mechanisms and future adherence. Delivery strategy had no effect on future adherence and processing mechanisms. However, in both groups results indicated that lower educated participants showed higher attention (F(1,138)=3.97; P=.05) and processing levels (F(1,138)=4.58; P=.04). Results also revealed that lower educated participants were more inclined to visit the computer-tailored intervention website again (F(1,138)=4.43; P=.04). Computer-tailored programs have the potential to positively influence lower educated groups as they might be more involved in the computer-tailored intervention than higher educated smokers. Longitudinal studies with a larger sample are needed to gain more insight into the role of delivery strategy in tailored information and to investigate whether the intention to visit the intervention website again results in the ultimate goal of behavior change. Netherlands Trial Register (NTR3102).

  2. Influence of Delivery Strategy on Message-Processing Mechanisms and Future Adherence to a Dutch Computer-Tailored Smoking Cessation Intervention

    PubMed Central

    Crutzen, Rik; Bolman, Catherine; Muris, Jean; de Vries, Hein

    2013-01-01

    Background Smoking tobacco is one of the most preventable causes of illness and death. Web-based tailored smoking cessation interventions have been shown to be effective. Although these interventions have the potential to reach a large number of smokers, they often face high attrition rates, especially among lower educated smokers. A possible reason for the high attrition rates in the latter group is that computer-tailored smoking cessation interventions may not be attractive enough as they are mainly text-based. Video-based messages might be more effective in attracting attention and stimulating comprehension in people with a lower educational level and could therefore reduce attrition rates. Objective The objective of the present study was to investigate whether differences exist in message-processing mechanisms (attention, comprehension, self-reference, appreciation, processing) and future adherence (intention to visit/use the website again, recommend the website to others), according to delivery strategy (video or text-based messages) and educational level, to a Dutch computer-tailored smoking cessation program. Methods Smokers who were motivated to quit within the following 6 months and who were aged over 16 were included in the program. Participants were randomly assigned to one of two conditions (video/text CT). The sample was stratified into 2 categories: lower and higher educated participants. In total, 139 participants completed the first session of the web-based tailored intervention and were subsequently asked to fill out a questionnaire assessing message-processing mechanisms and future adherence. ANOVAs and regression analyses were conducted to investigate the differences in message-processing mechanisms and future adherence with regard to delivery strategy and education. Results No interaction effects were found between delivery strategy (video vs text) and educational level on message-processing mechanisms and future adherence. Delivery strategy had no effect on future adherence and processing mechanisms. However, in both groups results indicated that lower educated participants showed higher attention (F(1,138)=3.97; P=.05) and processing levels (F(1,138)=4.58; P=.04). Results also revealed that lower educated participants were more inclined to visit the computer-tailored intervention website again (F(1,138)=4.43; P=.04). Conclusions Computer-tailored programs have the potential to positively influence lower educated groups as they might be more involved in the computer-tailored intervention than higher educated smokers. Longitudinal studies with a larger sample are needed to gain more insight into the role of delivery strategy in tailored information and to investigate whether the intention to visit the intervention website again results in the ultimate goal of behavior change. Trial Registration Netherlands Trial Register (NTR3102). PMID:23388554

  3. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images, and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processor Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The time required to complete the full algorithm on the CPU and GPU was benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU is performing the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
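
    The Fourier-space translation estimate that is being benchmarked can be sketched with the phase-correlation theorem; the CPU-only NumPy version below is an illustrative stand-in (the study's IDL/GPU implementation is not reproduced), with the sub-pixel enlargement step omitted for clarity.

      import numpy as np

      def phase_correlation_shift(ref, moving):
          """Estimate the integer (row, col) translation that maps `moving` back
          onto `ref` via phase correlation. Enlarging the images (zero padding)
          before this step yields sub-pixel estimates, as the abstract notes."""
          F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
          cross_power = F1 * np.conj(F2)
          cross_power /= np.abs(cross_power) + 1e-12      # keep phase information only
          corr = np.fft.ifft2(cross_power).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # wrap shifts larger than half the image size to negative values
          return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

      img = np.random.default_rng(2).standard_normal((256, 256))
      shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
      print(phase_correlation_shift(img, shifted))        # (-5, 3)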

  4. Computer Center CDC Libraries/NSRD (Subprograms).

    DTIC Science & Technology

    1984-06-01

    VALUES: Y - ARRAY OF CORRESPONDING Y-VALUES; N - NUMBER OF VALUES; CM REQUIRED: IOOB; ERROR MESSAGE 'L=XXXXX, X=X.XXXXXXX E+YY, X NOT MONOTONE' STOP; SELF ... PARAMETERS (SUBSEQUENT REPORTS MAY BE UNSOLICITED); PCRTP1 - REQUEST TERMINAL PARAMETERS (SUBSEQUENT REPORTS ONLY IN RESPONSE TO HOST REQUEST); DA - REQUEST

  5. Interpolation Environment of Tensor Mathematics at the Corpuscular Stage of Computational Experiments in Hydromechanics

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Yulia

    2018-02-01

    Stages of direct computational experiments in hydromechanics based on tensor mathematics tools are represented by conditionally independent mathematical models that separate the calculations in accordance with the physical processes. The continual stage of numerical modeling is constructed on a small time interval in a stationary grid space, where the continuity conditions and energy conservation are coordinated. Then, at the subsequent corpuscular stage of the computational experiment, the kinematic parameters of mass centers and the surface stresses at the boundaries of the grid cells are used to model free unsteady motions of volume cells that are considered as independent particles. These particles can be subject to vortex and discontinuous interactions when restructuring of free boundaries and internal rheological states takes place. The transition from one stage to another is provided by the interpolation operations of tensor mathematics. Such an interpolation environment formalizes the use of physical laws in modeling the mechanics of continuous media and provides control of the rheological state and the conditions for the existence of discontinuous solutions: rigid and free boundaries, vortex layers, and their turbulent or empirical generalizations.

  6. Trainable multiscript orientation detection

    NASA Astrophysics Data System (ADS)

    Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.

    2010-01-01

    Detecting the correct orientation of document images is an important step in large-scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright position of the document page. Many methods have been proposed to solve the problem, most of which are based on ascender-to-descender ratio computation. Unfortunately, this cannot be used for scripts having neither ascenders nor descenders. Therefore, we present a trainable method using character similarity to compute the correct orientation. A connected-component-based distance measure is computed to compare the characters of the document image to characters whose orientation is known, and the orientation for which the distance is lowest is selected as the correct orientation. Training is easily achieved by exchanging the reference characters for characters of the script to be analyzed. Evaluation of the proposed approach showed an accuracy above 99% for Latin and Japanese script from the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method to two methods using ascender/descender ratio based orientation detection shows a significant improvement.

  7. Forward calculation of gravity and its gradient using polyhedral representation of density interfaces: an application of spherical or ellipsoidal topographic gravity effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chen, Chao

    2018-02-01

    A density interface modeling method using polyhedral representation is proposed to construct 3-D models of spherical or ellipsoidal interfaces, such as the terrain surface of the Earth, and applied to the forward calculation of the gravity effect of topography and bathymetry for regional or global applications. The method utilizes triangular facets to fit the undulation of the target interface. The model maintains almost equal accuracy and resolution at different locations of the globe. Meanwhile, the exterior gravitational field of the model, including its gravity and gravity gradients, is obtained simultaneously using analytic solutions. Additionally, considering the effect of distant relief, an adaptive computation process is introduced to reduce the computational burden. The features and errors of the method are then analyzed. Subsequently, the method is applied to an area for the ellipsoidal Bouguer shell correction as an example and the result is compared to existing methods, which shows that our method provides high accuracy and great computational efficiency. Finally, suggestions for further development are made and conclusions are drawn.

  8. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
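
    The emulation-plus-sensitivity step described above can be sketched with scikit-learn's Gaussian process regressor; the toy simulator, parameter names and design below are placeholders for the agent-based Leishmania model, not the calibrated code.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(3)

      def toy_simulator(params):
          """Stand-in for one run of the agent-based infection model:
          params = (growth_rate, detection_avoidance); returns a pathogen load."""
          growth, avoidance = params
          return growth * np.exp(2.0 * avoidance) + rng.normal(scale=0.05)

      # random space-filling design over the two (normalized) inputs
      X = rng.uniform(0.0, 1.0, size=(60, 2))
      y = np.array([toy_simulator(x) for x in X])

      # fit the Gaussian-process emulator to the simulator output
      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                    normalize_y=True).fit(X, y)

      # crude sensitivity screen: sweep one input, hold the other at its midpoint
      grid = np.linspace(0.0, 1.0, 50)
      effect_growth = gp.predict(np.column_stack([grid, np.full(50, 0.5)]))
      effect_avoid = gp.predict(np.column_stack([np.full(50, 0.5), grid]))
      print("output range due to growth rate:", effect_growth.max() - effect_growth.min())
      print("output range due to detection avoidance:", effect_avoid.max() - effect_avoid.min())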

  9. ChemScreener: A Distributed Computing Tool for Scaffold based Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Vyas, Renu

    2015-01-01

    In this work we present ChemScreener, a Java-based application to perform virtual library generation combined with virtual screening in a platform-independent distributed computing environment. ChemScreener comprises a scaffold identifier, a distinct scaffold extractor, an interactive virtual library generator as well as a virtual screening module for subsequently selecting putative bioactive molecules. The virtual libraries are annotated with chemophore-, pharmacophore- and toxicophore-based information for compound prioritization. The hits selected can then be further processed using QSAR, docking and other in silico approaches which can all be interfaced within the ChemScreener framework. As a sample application, in this work scaffold selectivity, diversity, connectivity and promiscuity towards six important therapeutic classes have been studied. In order to illustrate the computational power of the application, 55 scaffolds extracted from 161 anti-psychotic compounds were enumerated to produce a virtual library comprising 118 million compounds (17 GB) and annotated with chemophore, pharmacophore and toxicophore based features in a single step which would be non-trivial to perform with many standard software tools today on libraries of this size.

  10. Warpage Measurement of Thin Wafers by Reflectometry

    NASA Astrophysics Data System (ADS)

    Ng, Chi Seng; Asundi, Anand Krishna

    To cope with advances in electronic and portable devices, electronic packaging industries have employed thinner and larger wafers to produce thinner packages and electronic devices. As the thickness of the wafer decreases (below 250 µm), there is an increased tendency for it to warp. Large stresses are induced during manufacturing processes, particularly during backside metal deposition, and the wafers bend due to these stresses. Warpage resulting from the residual stress affects subsequent manufacturing processes. For example, warpage due to these residual stresses leads to cracked dies during the singulation process, which severely reorients the residual stress distributions and thus weakens the mechanical and electrical properties of the singulated die. It is impossible to completely prevent the residual stress induced in thin wafers during the manufacturing processes, so monitoring of curvature/flatness is necessary to ensure the reliability of the device and its use. A simple whole-field curvature measurement system using a novel computer-aided phase-shift reflection grating method has been developed, and this project aims to take it to the next step for residual stress and full-field surface shape measurement. The system was developed from our earlier works on Computer Aided Moiré Methods and Novel Techniques in Reflection Moiré, Experimental Mechanics (1994), in which a novel structured light approach was shown for surface slope and curvature measurement. This method uses similar technology but is coupled with a novel phase-shift system to accurately measure slope and curvature. In this study, slopes of the surface were obtained using the versatility of the computer-aided reflection grating method to manipulate and generate gratings in two orthogonal directions. The curvature and stress can then be evaluated by performing a single-order differentiation on the slope data.

  11. Accelerating activity coefficient calculations using multicore platforms, and profiling the energy use resulting from such calculations.

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Bane, Michael

    2017-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom-up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and their subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients; and reaction rates. Current gas-phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, even at the single aerosol particle level it has been impossible to embed fully coupled representations of process-level knowledge for all possible compounds, so models typically rely on heavily parameterised descriptions, and the true sensitivity to uncertainties in molecular properties cannot yet be quantified. Relying on emerging numerical frameworks, and designed for the changing landscape of high-performance computing (HPC), in this study we focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method. Activity coefficients are often neglected, with the largely untested hypothesis that they are simply too computationally expensive to include in dynamic frameworks. We present results demonstrating increased computational efficiency for a range of typical scenarios, including a profiling of the energy use resulting from such computations. As the landscape of HPC changes, the latter aspect is important to consider in future applications.

  12. Revisiting Frazier's subdeltas: enhancing datasets with dimensionality, better to understand geologic systems

    USGS Publications Warehouse

    Flocks, James

    2006-01-01

    Scientific knowledge from the past century is commonly represented by two-dimensional figures and graphs, as presented in manuscripts and maps. Using today's computer technology, this information can be extracted and projected into three- and four-dimensional perspectives. Computer models can be applied to datasets to provide additional insight into complex spatial and temporal systems. This process can be demonstrated by applying digitizing and modeling techniques to valuable information within widely used publications. The seminal paper by D. Frazier, published in 1967, identified 16 separate delta lobes formed by the Mississippi River during the past 6,000 yrs. The paper includes stratigraphic descriptions through geologic cross-sections, and provides distribution and chronologies of the delta lobes. The data from Frazier's publication are extensively referenced in the literature. Additional information can be extracted from the data through computer modeling. Digitizing and geo-rectifying Frazier's geologic cross-sections produce a three-dimensional perspective of the delta lobes. Adding the chronological data included in the report provides the fourth-dimension of the delta cycles, which can be visualized through computer-generated animation. Supplemental information can be added to the model, such as post-abandonment subsidence of the delta-lobe surface. Analyzing the regional, net surface-elevation balance between delta progradations and land subsidence is computationally intensive. By visualizing this process during the past 4,500 yrs through multi-dimensional animation, the importance of sediment compaction in influencing both the shape and direction of subsequent delta progradations becomes apparent. Visualization enhances a classic dataset, and can be further refined using additional data, as well as provide a guide for identifying future areas of study.

  13. Epileptic Seizure Forewarning by Nonlinear Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, L.M.

    2002-04-19

    This report describes work that was performed under a Cooperative Research and Development Agreement (CRADA) between UT-Battelle, LLC (Contractor) and a commercial participant, VIASYS Healthcare Inc. (formerly Nicolet Biomedical, Inc.). The Contractor has patented technology that forewarns of impending epileptic events via scalp electroencephalograph (EEG) data and successfully demonstrated this technology on 20 datasets from the Participant under pre-CRADA effort. This CRADA sought to bridge the gap between the Contractor's existing research-class software and a prototype medical device for subsequent commercialization by the Participant. The objectives of this CRADA were (1) development of a combination of existing computer hardware and Contractor-patented software into a clinical process for warning of impending epileptic events in human patients, and (2) validation of the epilepsy warning methodology. This work modified the ORNL research-class FORTRAN for forewarning to run under a graphical user interface (GUI). The GUI-FORTRAN software subsequently was installed on desktop computers at five epilepsy monitoring units. The forewarning prototypes have run for more than one year without any hardware or software failures. This work also reported extensive analysis of model and EEG datasets to demonstrate the usefulness of the methodology. However, the Participant recently chose to stop work on the CRADA, due to a change in business priorities. Much work remains to convert the technology into a commercial clinical or ambulatory device for patient use, as discussed in App. H.

  14. Using Bayesian Nonparametric Hidden Semi-Markov Models to Disentangle Affect Processes during Marital Interaction

    PubMed Central

    Griffin, William A.; Li, Xun

    2016-01-01

    Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitate that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319

  15. Artificial Neural Networks for Processing Graphs with Application to Image Understanding: A Survey

    NASA Astrophysics Data System (ADS)

    Bianchini, Monica; Scarselli, Franco

    In graphical pattern recognition, each data item is represented as an arrangement of elements that encodes both the properties of each element and the relations among them. Hence, patterns are modelled as labelled graphs where, in general, labels can be attached to both nodes and edges. Artificial neural networks able to process graphs are a powerful tool for addressing a great variety of real-world problems, where the information is naturally organized in entities and relationships among entities; in fact, they have been widely used in computer vision, for instance in logo recognition, in similarity retrieval, and for object detection. In this chapter, we propose a survey of neural network models able to process structured information, with a particular focus on those architectures tailored to address image understanding applications. Starting from the original recursive model (RNNs), we subsequently present different ways to represent images - by trees, forests of trees, multiresolution trees, directed acyclic graphs with labelled edges, general graphs - and, correspondingly, neural network architectures appropriate to process such structures.

  16. Modeling and Validation of a Three-Stage Solidification Model for Sprays

    NASA Astrophysics Data System (ADS)

    Tanner, Franz X.; Feigl, Kathleen; Windhab, Erich J.

    2010-09-01

    A three-stage freezing model and its validation are presented. In the first stage, the cooling of the droplet down to the freezing temperature is described as a convective heat transfer process in turbulent flow. In the second stage, when the droplet has reached the freezing temperature, the solidification process is initiated via nucleation and crystal growth. The latent heat release is related to the amount of heat convected away from the droplet and the rate of solidification is expressed with a freezing progress variable. After completion of the solidification process, in stage three, the cooling of the solidified droplet (particle) is described again by a convective heat transfer process until the particle approaches the temperature of the gaseous environment. The model has been validated by experimental data of a single cocoa butter droplet suspended in air. The subsequent spray validations have been performed with data obtained from a cocoa butter melt in an experimental spray tower using the open-source computational fluid dynamics code KIVA-3.
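
    A lumped-parameter caricature of the three stages (convective cooling of the liquid drop, solidification tracked by a freezing progress variable with latent-heat release balanced against convective loss, then cooling of the solid particle) is sketched below; the property values are illustrative assumptions, not cocoa butter data, and the full model's nucleation and crystal-growth sub-models are not represented.

      import numpy as np

      # illustrative droplet and gas properties (SI units; not cocoa butter data)
      d = 100e-6                           # droplet diameter, m
      rho, cp, L = 900.0, 2000.0, 1.5e5    # density, specific heat, latent heat
      h = 200.0                            # convective heat transfer coefficient, W/(m^2 K)
      T_gas, T_freeze = 280.0, 300.0       # ambient and freezing temperatures, K

      area = np.pi * d**2                  # droplet surface area
      mass = rho * np.pi * d**3 / 6.0      # droplet mass

      def step(T, f, dt=1e-4):
          """Advance temperature T and freezing progress f (0 = liquid, 1 = solid)."""
          q = h * area * (T - T_gas)                 # convective heat loss, W
          if T > T_freeze or f >= 1.0:               # stages 1 and 3: sensible cooling
              T -= q * dt / (mass * cp)
          else:                                      # stage 2: latent heat release
              f = min(1.0, f + q * dt / (mass * L))
          return T, f

      T, f, t = 320.0, 0.0, 0.0
      while T > T_gas + 0.5:
          T, f = step(T, f)
          t += 1e-4
      print(f"solidified and cooled after {t:.2f} s (freezing progress {f:.2f})")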

  17. Self-Associations Influence Task-Performance through Bayesian Inference

    PubMed Central

    Bengtsson, Sara L.; Penny, Will D.

    2013-01-01

    The way we think about ourselves impacts greatly on our behavior. This paper describes a behavioral study and a computational model that shed new light on this important area. Participants were primed with “clever” and “stupid” using a scrambled sentence task, and we measured the effect on response time and error-rate on a rule-association task. First, we observed a confirmation bias effect in that associations to being “stupid” led to a gradual decrease in performance, whereas associations to being “clever” did not. Second, we observed that the activated self-concepts selectively modified attention toward one’s performance. There was an early to late double dissociation in RTs in that priming “clever” resulted in an RT increase following error responses, whereas priming “stupid” resulted in an RT increase following correct responses. We propose a computational model of subjects’ behavior based on the logic of the experimental task that involves two processes: memory for rules and the integration of rules with subsequent visual cues. The model incorporates an adaptive decision threshold based on Bayes' rule, whereby the decision threshold is increased if integration is inferred to be faulty. Fitting the computational model to experimental data confirmed our hypothesis that priming affects the memory process. This model explains both the confirmation bias and double dissociation effects and demonstrates that Bayesian inferential principles can be used to study the effect of self-concepts on behavior. PMID:23966937
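
    A hedged sketch of the adaptive-threshold idea only (the likelihoods, gain, and trial sequence are hypothetical values, not the fitted model): Bayes' rule updates the belief that the rule-cue integration process is faulty after each trial outcome, and the decision threshold grows with that belief, slowing responses when errors are attributed to faulty integration.

        # Sketch: Bayesian update of "integration is faulty" and a threshold tied to that belief.
        def update_faulty_belief(prior_faulty, observed_error,
                                 p_error_if_faulty=0.6, p_error_if_intact=0.1):
            """Posterior P(integration faulty | trial outcome) via Bayes' rule."""
            if observed_error:
                like_faulty, like_intact = p_error_if_faulty, p_error_if_intact
            else:
                like_faulty, like_intact = 1 - p_error_if_faulty, 1 - p_error_if_intact
            num = like_faulty * prior_faulty
            return num / (num + like_intact * (1 - prior_faulty))

        def threshold(belief_faulty, base=1.0, gain=0.8):
            """Decision threshold grows with the inferred probability of faulty integration."""
            return base + gain * belief_faulty

        belief = 0.2                                   # prior belief that integration is faulty
        for outcome in [False, True, True, False]:     # a short hypothetical trial sequence
            belief = update_faulty_belief(belief, outcome)
            print(f"error={outcome!s:5}  P(faulty)={belief:.2f}  threshold={threshold(belief):.2f}")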

  18. Selective thermal transformation of old computer printed circuit boards to Cu-Sn based alloy.

    PubMed

    Shokri, Ali; Pahlevani, Farshid; Cole, Ivan; Sahajwalla, Veena

    2017-09-01

    This study investigates, verifies and determines the optimal parameters for the selective thermal transformation of problematic electronic waste (e-waste) to produce value-added copper-tin (Cu-Sn) based alloys; thereby demonstrating a novel pathway for the cost-effective recovery of resources from one of the world's fastest growing and most challenging waste streams. Using outdated computer printed circuit boards (PCBs), a ubiquitous component of e-waste, we investigated transformations across a range of temperatures and time frames. Results indicate a two-step heat treatment process, using a low temperature step followed by a high temperature step, can be used to produce and separate off, first, a lead (Pb) based alloy and, subsequently, a Cu-Sn based alloy. We also found a single-step heat treatment process at a moderate temperature of 900 °C can be used to directly transform old PCBs to produce a Cu-Sn based alloy, while capturing the Pb and antimony (Sb) as alloying elements to prevent the emission of these low melting point elements. These results demonstrate old computer PCBs, large volumes of which are already within global waste stockpiles, can be considered a potential source of value-added metal alloys, opening up a new opportunity for utilizing e-waste to produce metal alloys in local micro-factories. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Automated extraction of natural drainage density patterns for the conterminous United States through high performance computing

    USGS Publications Warehouse

    Stanislawski, Larry V.; Falgout, Jeff T.; Buttenfield, Barbara P.

    2015-01-01

    Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and further inform hydrologic and geomorphological analysis by indicating areas where too few headwater channels have been extracted. But natural drainage density patterns are not consistently available in existing hydrographic data for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow including data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is concurrently implemented by executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters that are needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, developing flow direction and weighted flow accumulation rasters. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated with 100-meter resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher slope terrains. Concurrent processing through the high performance computing environment is shown to facilitate and refine the choice of drainage density extraction parameters and more readily improve extraction procedures than conventional processing.
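
    A minimal sketch of the final workflow stage only, assuming a binary raster of extracted channel cells is already available (the sink-filling, flow-direction, and flow-accumulation steps would normally come from a terrain-analysis package). The window size and the synthetic raster below are hypothetical stand-ins.

        # Sketch: moving-window drainage density from a channel raster, then low-pass smoothing.
        import numpy as np
        from scipy.ndimage import uniform_filter

        rng = np.random.default_rng(0)
        channels = (rng.random((500, 500)) > 0.97)   # stand-in for an extracted channel raster
        cell_size = 100.0                            # m per cell (per the 100-m resolution above)

        # Approximate drainage density (channel length per unit area, km/km^2) in a moving window,
        # counting one cell size of channel length per channel cell.
        window = 21                                  # cells per side of the estimation window
        channel_len_km = uniform_filter(channels.astype(float), size=window) * window**2 * cell_size / 1e3
        window_area_km2 = (window * cell_size / 1e3) ** 2
        density = channel_len_km / window_area_km2

        # Low-pass filter the density surface to suppress blocky window artifacts.
        density_smooth = uniform_filter(density, size=window // 2)
        print(density_smooth.min(), density_smooth.max())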

  20. Why Verbalization of Non-Verbal Memory Reduces Recognition Accuracy: A Computational Approach to Verbal Overshadowing.

    PubMed

    Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun

    2015-01-01

    Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally grounded explanation. Finally, the model also provided an explanation as to why some studies have failed to report verbal overshadowing. Thus, the present study suggests it is not constructive to discuss whether verbal overshadowing exists or not in an all-or-none manner, and instead suggests a better experimental paradigm to further explore this phenomenon.

  1. Why Verbalization of Non-Verbal Memory Reduces Recognition Accuracy: A Computational Approach to Verbal Overshadowing

    PubMed Central

    Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun

    2015-01-01

    Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally grounded explanation. Finally, the model also provided an explanation as to why some studies have failed to report verbal overshadowing. Thus, the present study suggests it is not constructive to discuss whether verbal overshadowing exists or not in an all-or-none manner, and instead suggests a better experimental paradigm to further explore this phenomenon. PMID:26061046

  2. ERTS operations and data processing

    NASA Technical Reports Server (NTRS)

    Gonzales, L.; Sos, J. Y.

    1974-01-01

    The overall communications and data flow between the ERTS spacecraft and the ground stations and processing centers are generally described. Data from the multispectral scanner and the return beam vidicon are telemetered to a primary ground station where they are demodulated, processed, and recorded. The tapes are then transferred to the NASA Data Processing Facility (NDPF) at Goddard. Housekeeping data are relayed from the prime ground stations to the Operations Control Center at Goddard. Tracking data are processed at the ground stations, and the calculated parameters are transmitted by teletype to the orbit determination group at Goddard. The ERTS orbit has been designed so that the same swaths of the ground coverage pattern viewed during one 18-day coverage cycle are repeated by the swaths viewed on all subsequent cycles. The Operations Control Center is the focal point for all communications with the spacecraft. NDPF is a job-oriented facility which processes and stores all sensor data, and which disseminates large quantities of these data to users in the form of films, computer-compatible tapes, and data collection system data.

  3. SU-E-I-63: Quantitative Evaluation of the Effects of Orthopedic Metal Artifact Reduction (OMAR) Software On CT Images for Radiotherapy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jani, S

    Purpose: CT simulation for patients with metal implants can often be challenging due to artifacts that obscure tumor/target delineation and normal organ definition. Our objective was to evaluate the effectiveness of Orthopedic Metal Artifact Reduction (OMAR), a commercially available software package, in reducing metal-induced artifacts and its effect on computed dose during treatment planning. Methods: CT images of water surrounding metallic cylindrical rods made of aluminum, copper and iron were studied in terms of Hounsfield Units (HU) spread. Metal-induced artifacts were characterized in terms of HU/Volume Histogram (HVH) using the Pinnacle treatment planning system. Effects of OMAR on enhancing our ability to delineate organs on CT and subsequent dose computation were examined in nine (9) patients with hip implants and two (2) patients with breast tissue expanders. Results: Our study characterized water at 1000 HU with a standard deviation (SD) of about 20 HU. The HVHs allowed us to evaluate how the presence of metal changed the HU spread. For example, introducing a 2.54 cm diameter copper rod in water increased the SD in HU of the surrounding water from 20 to 209, representing an increase in artifacts. Subsequent use of OMAR brought the SD down to 78. Aluminum produced the least artifacts, whereas iron showed the largest amount of artifacts. In general, an increase in kVp and mA during CT scanning showed better effectiveness of OMAR in reducing artifacts. Our dose analysis showed that some isodose contours shifted by several mm with OMAR, but such shifts were infrequent and nonsignificant in the planning process. Computed volumes of various dose levels showed <2% change. Conclusions: In our experience, OMAR software greatly reduced the metal-induced CT artifacts for the majority of patients with implants, thereby improving our ability to delineate tumor and surrounding organs. OMAR had a clinically negligible effect on computed dose within tissues. Partially funded by an unrestricted educational grant from Philips.
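
    A small sketch of the HU-spread metric behind the results above: the standard deviation of Hounsfield Units within a water region of interest, summarized alongside an HU/volume histogram. The arrays are synthetic stand-ins generated with the spreads reported in the abstract, not CT data.

        # Sketch: HU standard deviation and HU/volume histogram for a water region of interest.
        import numpy as np

        rng = np.random.default_rng(0)

        water_hu = 1000.0                                            # nominal HU of water in this system
        baseline = rng.normal(water_hu, 20.0, size=50_000)           # clean water ROI (~20 HU spread)
        with_metal = rng.normal(water_hu, 209.0, size=50_000)        # streak artifacts widen the spread
        after_omar = rng.normal(water_hu, 78.0, size=50_000)         # artifact-reduced reconstruction

        def hu_spread(roi_values, bins=np.arange(0, 2001, 10)):
            """Return the HU standard deviation and an HU/volume histogram for an ROI."""
            hist, edges = np.histogram(roi_values, bins=bins)
            return float(np.std(roi_values)), hist, edges

        for name, roi in [("water only", baseline), ("with metal", with_metal), ("after OMAR", after_omar)]:
            sd, hist, _ = hu_spread(roi)
            print(f"{name:11s}  SD = {sd:6.1f} HU")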

  4. CMSC-130 Introductory Computer Science, Lecture Notes

    DTIC Science & Technology

    1993-07-01

    Introductory Computer Science lecture notes used in the classroom for teaching CMSC 130, an introductory computer science course. The extracted record is fragmentary; the recoverable fragments cover unit testing, the syntax of subunits (to be studied in the subsequent course), and top-down testing, and cite the Reference Manual for the Ada Programming Language (ANSI/MIL-STD) among the references used in preparing the notes.

  5. Resetting Educational Technology Coursework for Pre-Service Teachers: A Computational Thinking Approach to the Development of Technological Pedagogical Content Knowledge (TPACK)

    ERIC Educational Resources Information Center

    Mouza, Chrystalla; Yang, Hui; Pan, Yi-Cheng; Ozden, Sule Yilmaz; Pollock, Lori

    2017-01-01

    This study presents the design of an educational technology course for pre-service teachers specific to incorporating computational thinking in K-8 classroom settings. Subsequently, it examines how participation in the course influences pre-service teachers' dispositions and knowledge of computational thinking concepts and the ways in which such…

  6. Optical air data systems and methods

    NASA Technical Reports Server (NTRS)

    Caldwell, Loren M. (Inventor); Tang, Shoou-yu (Inventor); O'Brien, Martin (Inventor)

    2010-01-01

    Systems and methods for sensing air outside a moving aircraft are presented. In one embodiment, a system includes a laser for generating laser energy. The system also includes one or more transceivers for projecting the laser energy as laser radiation to the air. Subsequently, each transceiver receives laser energy as it is backscattered from the air. A computer processes signals from the transceivers to distinguish molecular scattered laser radiation from aerosol scattered laser radiation and determines one or more air parameters based on the scattered laser radiation. Such air parameters may include air speed, air pressure, air temperature and aircraft orientation angle, such as yaw, angle of attack and sideslip.

  7. 2001: Things to come.

    PubMed

    Apuzzo, M L; Liu, C Y

    2001-10-01

    THIS ARTICLE DISCUSSES elements in the definition of modernity and emerging futurism in neurological surgery. In particular, it describes evolution, discovery, and paradigm shifts in the field and forces responsible for their realization. It analyzes the cyclical reinvention of the discipline experienced during the past generation and attempts to identify apertures to the near and more remote future. Subsequently, it focuses on forces and discovery in computational science, imaging, molecular science, biomedical engineering, and information processing as they relate to the theme of minimalism that is evident in the field. These areas are explained in the light of future possibilities offered by the emerging field of nanotechnology with molecular engineering.

  8. Optical air data systems and methods

    NASA Technical Reports Server (NTRS)

    Caldwell, Loren M. (Inventor); O'Brien, Martin J. (Inventor); Weimer, Carl S. (Inventor); Nelson, Loren D. (Inventor)

    2008-01-01

    Systems and methods for sensing air outside a moving aircraft are presented. In one embodiment, a system includes a laser for generating laser energy. The system also includes one or more transceivers for projecting the laser energy as laser radiation to the air. Subsequently, each transceiver receives laser energy as it is backscattered from the air. A computer processes signals from the transceivers to distinguish molecular scattered laser radiation from aerosol scattered laser radiation and determines one or more air parameters based on the scattered laser radiation. Such air parameters may include air speed, air pressure, air temperature and aircraft orientation angle, such as yaw, angle of attack and sideslip.

  9. Optical air data systems and methods

    NASA Technical Reports Server (NTRS)

    Caldwell, Loren M. (Inventor); O'Brien, Martin J. (Inventor); Weimer, Carl S. (Inventor); Nelson, Loren D. (Inventor)

    2005-01-01

    Systems and methods for sensing air outside a moving aircraft are presented. In one embodiment, a system includes a laser for generating laser energy. The system also includes one or more transceivers for projecting the laser energy as laser radiation to the air. Subsequently, each transceiver receives laser energy as it is backscattered from the air. A computer processes signals from the transceivers to distinguish molecular scattered laser radiation from aerosol scattered laser radiation and determines one or more air parameters based on the scattered laser radiation. Such air parameters may include air speed, air pressure, air temperature and aircraft orientation angle, such as yaw, angle of attack and sideslip.

  10. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
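
    A minimal CPU-side sketch of the kind of per-step update a spiking-network simulator parallelizes: a leaky integrate-and-fire population with sparse random weights advanced on a fixed time step. HRLSim itself targets GPGPU clusters and far larger networks; all parameters here are hypothetical.

        # Sketch: leaky integrate-and-fire population advanced with a fixed time step.
        import numpy as np

        rng = np.random.default_rng(0)

        n, dt, t_end = 500, 0.1e-3, 0.2           # neurons, time step (s), simulated time (s)
        tau_m, v_rest, v_thresh, v_reset = 20e-3, -65.0, -50.0, -65.0
        w = rng.normal(0.0, 0.3, size=(n, n)) * (rng.random((n, n)) < 0.02)  # sparse random weights

        v = np.full(n, v_rest)
        spike_counts = np.zeros(n, dtype=int)
        for _ in range(int(t_end / dt)):
            i_ext = rng.normal(0.08, 0.05, size=n)            # noisy external drive per step (mV)
            spiked = v >= v_thresh
            spike_counts += spiked
            v[spiked] = v_reset
            i_syn = w @ spiked.astype(float)                  # synaptic input from last step's spikes
            v += dt / tau_m * (v_rest - v) + i_ext + i_syn    # leaky integration

        print("mean firing rate:", spike_counts.mean() / t_end, "Hz")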

  11. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
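
    A compact sketch of the model-selection step described above: fit a simple linear regression of suspended-sediment concentration (SSC) on turbidity, then a multiple regression adding streamflow, and keep the latter only if it clearly reduces uncertainty. The calibration data are synthetic, and the error criterion is a rough stand-in for the report's MSPE and significance diagnostics (the report also works with transformed variables).

        # Sketch: simple vs. turbidity-streamflow multiple regression for SSC computation.
        import numpy as np

        rng = np.random.default_rng(0)
        turbidity = rng.uniform(5, 500, size=60)                               # FNU
        streamflow = rng.uniform(1, 100, size=60)                              # m^3/s
        ssc = 1.8 * turbidity + 0.5 * streamflow + rng.normal(0, 15, size=60)  # mg/L (synthetic)

        def fit_ols(X, y):
            """Ordinary least squares with an intercept column; returns coefficients and residual SD."""
            A = np.column_stack([np.ones_like(y), X])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            return coef, resid.std(ddof=A.shape[1])

        # Step 1: simple linear regression on turbidity alone.
        coef_s, s_simple = fit_ols(turbidity[:, None], ssc)
        # Step 2: multiple regression adding streamflow, kept only if it clearly reduces uncertainty.
        coef_m, s_multi = fit_ols(np.column_stack([turbidity, streamflow]), ssc)

        model_error_pct = 100 * s_simple / ssc.mean()   # rough stand-in for the MSPE criterion
        use_multi = s_multi < 0.9 * s_simple            # hypothetical improvement threshold
        print(f"simple-model error ~{model_error_pct:.1f}%  ->  use multiple regression: {use_multi}")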

  12. Numerical simulation and validation of SI-CAI hybrid combustion in a CAI/HCCI gasoline engine

    NASA Astrophysics Data System (ADS)

    Wang, Xinyan; Xie, Hui; Xie, Liyan; Zhang, Lianfang; Li, Le; Chen, Tao; Zhao, Hua

    2013-02-01

    SI-CAI hybrid combustion, also known as spark-assisted compression ignition (SACI), is a promising concept to extend the operating range of CAI (Controlled Auto-Ignition) and achieve the smooth transition between spark ignition (SI) and CAI in the gasoline engine. In this study, a SI-CAI hybrid combustion model (HCM) has been constructed on the basis of the 3-Zones Extended Coherent Flame Model (ECFM3Z). An ignition model is included to initiate the ECFM3Z calculation and induce the flame propagation. In order to precisely depict the subsequent auto-ignition process of the unburned fuel and air mixture independently after the initiation of flame propagation, the tabulated chemistry concept is adopted to describe the auto-ignition chemistry. The methodology for extracting tabulated parameters from the chemical kinetics calculations is developed so that both cool flame reactions and main auto-ignition combustion can be well captured under a wider range of thermodynamic conditions. The SI-CAI hybrid combustion model (HCM) is then applied in the three-dimensional computational fluid dynamics (3-D CFD) engine simulation. The simulation results are compared with the experimental data obtained from a single cylinder VVA engine. The detailed analysis of the simulations demonstrates that the SI-CAI hybrid combustion process is characterised by early flame propagation and subsequent multi-site auto-ignition around the main flame front, which is consistent with the optical results reported by other researchers. In addition, the systematic study of the in-cylinder conditions reveals the influence mechanism of the early flame propagation on the subsequent auto-ignition.

  13. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 1: theoretical development

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, its computational expense and the large number of simulations it requires are disadvantages. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process, with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.

  14. Characteristics of the Time Variable Component of the Coronal Heating Process

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia R.; Poland, Art (Technical Monitor)

    2001-01-01

    The goal of the proposed study was to explore the non-steady nature of the coronal heating processes and their manifestations in the inner corona and interplanetary space by coordinating coronal SOHO observations in white light, ultraviolet, and extreme ultraviolet, with complementary radio occultation measurements during an unprecedented and rare coincidence of a total solar eclipse with the superior conjunction of a planetary spacecraft, Galileo, in February 1998. In addition, radio occultation measurements by the Mars Global Surveyor spacecraft in May 1998 spanned the inner heliosphere observed by coronal SOHO instruments, probing it to within 0.5 R_S above the solar surface. Inferences of physical properties derived from these simultaneous observations were subsequently used in solar wind model computations to yield the range of plasma parameters characteristic of the fast and slow solar wind.

  15. [Lessons learned from a distribution incident at the Alps-Mediterranean Division of the French Blood Establishment].

    PubMed

    Legrand, D

    2008-11-01

    The Alps-Mediterranean division of the French blood establishment (EFS Alpes-Mediterranée) has implemented a risk management program. Within this framework, the labile blood product distribution process was assessed to identify critical steps. Subsequently, safety measures were instituted including computer-assisted decision support, detailed written instructions and control checks at each step. Failure of these measures to prevent an incident underlines the vulnerability of the process to the human factor. Indeed root cause analysis showed that the incident was due to underestimation of the danger by one individual. Elimination of this type of risk will require continuous training, testing and updating of personnel. Identification and reporting of nonconformities will allow personnel at all levels (local, regional, and national) to share lessons and implement appropriate risk mitigation strategies.

  16. An evaluation of software tools for the design and development of cockpit displays

    NASA Technical Reports Server (NTRS)

    Ellis, Thomas D., Jr.

    1993-01-01

    The use of all-glass cockpits at the NASA Langley Research Center (LaRC) simulation facility has changed the means of design, development, and maintenance of instrument displays. The human-machine interface has evolved from a physical hardware device to a software-generated electronic display system. This has subsequently caused an increased workload at the facility. As computer processing power increases and the glass cockpit becomes predominant in facilities, software tools used in the design and development of cockpit displays are becoming both feasible and necessary for a more productive simulation environment. This paper defines LaRC requirements of a display software development tool and compares two available applications against these requirements. As a part of the software engineering process, these tools reduce development time, provide a common platform for display development, and produce exceptional real-time results.

  17. Off-shell production of top-antitop pairs in the lepton+jets channel at NLO QCD

    NASA Astrophysics Data System (ADS)

    Denner, Ansgar; Pellen, Mathieu

    2018-02-01

    The production of top-quark pairs that subsequently decay hadronically and leptonically (lepton+jets channel) is one of the key processes for the study of top-quark properties at the LHC. In this article, NLO QCD corrections of order $\mathcal{O}(\alpha_s^3 \alpha^4)$ to the hadronic process $pp \to \mu^- \bar{\nu}_\mu b \bar{b} j j$ are presented. The computation includes off-shell as well as non-resonant contributions, and experimental event selections are used in order to provide realistic predictions. The results are provided in the form of cross sections and differential distributions. The QCD corrections are sizeable and different from the ones of the fully leptonic channel. This is due to the different final state where here four jets are present at leading order.

  18. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
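
    A minimal sketch of the generic extrema-feature pipeline described above: smooth the series with a filter, then keep only extrema that clear a robustness threshold. The paper's contribution is to learn the filter from training series via an eigenvalue problem; a fixed moving average and a fixed prominence threshold stand in for that optimized choice here.

        # Sketch: filter a noisy series, then extract robust (high-prominence) extrema.
        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 1000)
        series = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.normal(size=t.size)   # noisy test signal

        kernel = np.ones(25) / 25                                   # stand-in smoothing filter
        smoothed = np.convolve(series, kernel, mode="same")

        # Robust extrema: maxima (and minima, via the negated signal) with sufficient prominence.
        maxima, _ = find_peaks(smoothed, prominence=0.5)
        minima, _ = find_peaks(-smoothed, prominence=0.5)

        features = sorted([(i, +1) for i in maxima] + [(i, -1) for i in minima])
        print("extrema (index, type):", features)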

  19. Applicability of initial optimal maternal and fetal electrocardiogram combination vectors to subsequent recordings

    NASA Astrophysics Data System (ADS)

    Yan, Hua-Wen; Huang, Xiao-Lin; Zhao, Ying; Si, Jun-Feng; Liu, Tie-Bing; Liu, Hong-Xing

    2014-11-01

    A series of experiments are conducted to confirm whether the vectors calculated for an early section of a continuous non-invasive fetal electrocardiogram (fECG) recording can be directly applied to subsequent sections in order to reduce the computation required for real-time monitoring. Our results suggest that it is generally feasible to apply the initial optimal maternal and fetal ECG combination vectors to extract the fECG and maternal ECG in subsequent recorded sections.
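
    A toy illustration of the idea being tested: derive a fixed linear combination vector from an initial multichannel segment and reuse it on later segments instead of recomputing it per window. The signals and the reference used for fitting are synthetic stand-ins; real fECG pipelines derive the combination vector blindly (for example via source-separation methods) rather than against a known fetal source.

        # Sketch: fit a channel-combination vector on an initial segment, reuse it later.
        import numpy as np

        rng = np.random.default_rng(0)
        fs, n_ch = 500, 8
        t = np.arange(0, 60, 1 / fs)                                 # 60 s, 8 abdominal channels
        fetal = np.sin(2 * np.pi * 2.3 * t) ** 15                    # sharp, fast "fetal" spikes
        maternal = np.sin(2 * np.pi * 1.2 * t) ** 9                  # slower, larger "maternal" beats
        mix = rng.normal(size=(n_ch, 2))
        X = mix @ np.vstack([fetal, 3 * maternal]) + 0.05 * rng.normal(size=(n_ch, t.size))

        init = slice(0, 10 * fs)                                     # first 10 s: fit the vector
        w, *_ = np.linalg.lstsq(X[:, init].T, fetal[init], rcond=None)

        # Apply the same combination vector to a later section of the recording.
        later = slice(40 * fs, 50 * fs)
        extracted = w @ X[:, later]
        corr = np.corrcoef(extracted, fetal[later])[0, 1]
        print(f"correlation with the fetal source on a later segment: {corr:.3f}")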

  20. [Cooling of boar spermatozoa prior to freezing and post thaw quality and evaluation of membrane state using chlortetracycline (CTC) staining].

    PubMed

    Kotzias-Bandeira, E; Waberski, D; Weitze, K F

    1997-08-01

    The influence of an extended holding time at room temperature (+18 degrees C) before freezing on boar sperm quality was investigated. 17 ejaculates were collected from 5 different boars by separation into sperm-rich and sperm-poor fractions. The ejaculates were split, diluted 1+1 with Merck I-Medium, and submitted to three different treatments before freezing: 1. sperm-rich fraction, cooling to +20 degrees C for 1.5 h and subsequent cooling to +15 degrees C for 2.5 h; 2. sperm-rich fraction, cooling to +18 degrees C for 4 h and subsequent holding time at +18 degrees C for 16 h; 3. whole ejaculate (sperm-rich fraction plus seminal plasma), cooling to +18 degrees C for 4 h and subsequent holding time at +18 degrees C for 16 h. Subjectively assessed post thaw motility (SMOT), computer-measured motility (CMOT), and acrosome integrity (NAR), assessed by phase contrast microscopy, were significantly (p < 0.05) higher after the extended holding time (procedures 2 and 3) compared to the short holding time (procedure 1). The exposure to seminal plasma during holding had no significant effect. Chlortetracycline (CTC) staining of sperm membranes gave no reliable information in the presence of an EDTA-containing preservation medium, used routinely in the preservation process.

  1. Concept mapping as an approach for expert-guided model building: The example of health literacy.

    PubMed

    Soellner, Renate; Lenartz, Norbert; Rudinger, Georg

    2017-02-01

    Concept mapping served as the starting point for the aim of capturing the comprehensive structure of the construct of 'health literacy.' Ideas about health literacy were generated by 99 experts and resulted in 105 statements that were subsequently organized by 27 experts in an unstructured card sorting. Multidimensional scaling was applied to the sorting data, and two- and three-dimensional solutions were computed. The three-dimensional solution was used in a subsequent cluster analysis and resulted in a concept map of nine "clusters": (1) self-regulation, (2) self-perception, (3) proactive approach to health, (4) basic literacy and numeracy skills, (5) information appraisal, (6) information search, (7) health care system knowledge and acting, (8) communication and cooperation, and (9) beneficial personality traits. Subsequently, this concept map served as a starting point for developing a "qualitative" structural model of health literacy and a questionnaire for the measurement of health literacy. On the basis of questionnaire data, a "quantitative" structural model was created by first applying exploratory factor analyses (EFA) and then cross-validating the model with confirmatory factor analyses (CFA). Concept mapping proved to be a highly valuable tool for the process of model building up to translational research in the "real world". Copyright © 2016 Elsevier Ltd. All rights reserved.
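
    A minimal sketch of the analysis pipeline described above: card-sort co-occurrence, distance matrix, multidimensional scaling, and cluster analysis on the MDS coordinates. The sorting data are randomly generated stand-ins for the 27 experts and 105 statements, so the resulting clusters only demonstrate the computation, not the published concept map.

        # Sketch: card-sort co-occurrence -> MDS (3-D) -> nine-cluster solution.
        import numpy as np
        from sklearn.manifold import MDS
        from sklearn.cluster import AgglomerativeClustering

        rng = np.random.default_rng(0)
        n_statements, n_experts = 105, 27
        # Each expert assigns every statement to one of roughly 8-12 piles; synthetic here.
        sortings = [rng.integers(0, rng.integers(8, 13), size=n_statements) for _ in range(n_experts)]

        # Co-occurrence: fraction of experts who placed two statements in the same pile.
        co = np.zeros((n_statements, n_statements))
        for piles in sortings:
            co += (piles[:, None] == piles[None, :]).astype(float)
        co /= n_experts
        distance = 1.0 - co

        # Three-dimensional MDS solution, then a nine-cluster solution on the coordinates.
        coords = MDS(n_components=3, dissimilarity="precomputed", random_state=0).fit_transform(distance)
        clusters = AgglomerativeClustering(n_clusters=9).fit_predict(coords)
        print(np.bincount(clusters))   # statements per cluster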

  2. The use of computer graphics in the visual analysis of the proposed Sunshine Ski Area expansion

    Treesearch

    Mark Angelo

    1979-01-01

    This paper describes the use of computer graphics in designing part of the Sunshine Ski Area in Banff National Park. The program used was capable of generating perspective landscape drawings from a number of different viewpoints. This allowed managers to predict, and subsequently reduce, the adverse visual impacts of ski-run development. Computer graphics have proven,...

  3. 32 CFR Appendix A to Part 292 - Uniform Agency Fees for Search and Duplication Under the Freedom of Information Act (as Amended)

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... a. above) for the computer/operator/programmer determining how to conduct and subsequently executing the search will be recorded as part of the computer search. c. Actual time spent travelling to a...

  4. From chalkboard, slides, and paper to e-learning: How computing technologies have transformed anatomical sciences education.

    PubMed

    Trelease, Robert B

    2016-11-01

    Until the late-twentieth century, primary anatomical sciences education was relatively unenhanced by advanced technology and dependent on the mainstays of printed textbooks, chalkboard- and photographic projection-based classroom lectures, and cadaver dissection laboratories. But over the past three decades, diffusion of innovations in computer technology transformed the practices of anatomical education and research, along with other aspects of work and daily life. Increasing adoption of first-generation personal computers (PCs) in the 1980s paved the way for the first practical educational applications, and visionary anatomists foresaw the usefulness of computers for teaching. While early computers lacked high-resolution graphics capabilities and interactive user interfaces, applications with video discs demonstrated the practicality of programming digital multimedia linking descriptive text with anatomical imaging. Desktop publishing established that computers could be used for producing enhanced lecture notes, and commercial presentation software made it possible to give lectures using anatomical and medical imaging, as well as animations. Concurrently, computer processing supported the deployment of medical imaging modalities, including computed tomography, magnetic resonance imaging, and ultrasound, that were subsequently integrated into anatomy instruction. Following its public birth in the mid-1990s, the World Wide Web became the ubiquitous multimedia networking technology underlying the conduct of contemporary education and research. Digital video, structural simulations, and mobile devices have been more recently applied to education. Progressive implementation of computer-based learning methods interacted with waves of ongoing curricular change, and such technologies have been deemed crucial for continuing medical education reforms, providing new challenges and opportunities for anatomical sciences educators. Anat Sci Educ 9: 583-602. © 2016 American Association of Anatomists.

  5. Implementation and evaluation of various demons deformable image registration algorithms on a GPU.

    PubMed

    Gu, Xuejun; Pan, Hubert; Liang, Yun; Castillo, Richard; Yang, Deshan; Choi, Dongju; Castillo, Edward; Majumdar, Amitava; Guerrero, Thomas; Jiang, Steve B

    2010-01-07

    Online adaptive radiation therapy (ART) promises the ability to deliver an optimal treatment in response to daily patient anatomic variation. A major technical barrier for the clinical implementation of online ART is the requirement of rapid image segmentation. Deformable image registration (DIR) has been used as an automated segmentation method to transfer tumor/organ contours from the planning image to daily images. However, the current computational time of DIR is insufficient for online ART. In this work, this issue is addressed by using computer graphics processing units (GPUs). A gray-scale-based DIR algorithm called demons and five of its variants were implemented on GPUs using the compute unified device architecture (CUDA) programming environment. The spatial accuracy of these algorithms was evaluated over five sets of pulmonary 4D CT images with an average size of 256 x 256 x 100 and more than 1100 expert-determined landmark point pairs each. For all the testing scenarios presented in this paper, the GPU-based DIR computation required around 7 to 11 s to yield an average 3D error ranging from 1.5 to 1.8 mm. Interestingly, the original passive force demons algorithms outperform the subsequently proposed variants when accuracy, efficiency, and ease of implementation are considered in combination.
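
    A two-dimensional sketch of the classic passive-force demons update evaluated in the paper: a displacement increment driven by the intensity difference and the fixed-image gradient, followed by Gaussian smoothing of the displacement field. The GPU/CUDA implementation, 3-D clinical images, and the algorithm variants are not reproduced here; the images are synthetic discs and the parameters are hypothetical.

        # Sketch: 2-D passive-force demons registration of two synthetic images.
        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def make_disc(shape, center, radius):
            yy, xx = np.mgrid[:shape[0], :shape[1]]
            return ((yy - center[0]) ** 2 + (xx - center[1]) ** 2 < radius ** 2).astype(float)

        fixed = gaussian_filter(make_disc((128, 128), (64, 64), 30), 2.0)
        moving = gaussian_filter(make_disc((128, 128), (70, 58), 30), 2.0)   # same object, shifted

        gy, gx = np.gradient(fixed)                        # "passive" force uses the fixed-image gradient
        uy = np.zeros_like(fixed)
        ux = np.zeros_like(fixed)
        yy, xx = np.mgrid[:128, :128].astype(float)

        for _ in range(200):
            warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
            diff = warped - fixed
            denom = gx ** 2 + gy ** 2 + diff ** 2
            denom[denom == 0] = 1.0
            uy -= diff * gy / denom                        # demons force along the fixed-image gradient
            ux -= diff * gx / denom
            uy = gaussian_filter(uy, sigma=2.0)            # regularize the displacement field
            ux = gaussian_filter(ux, sigma=2.0)

        final = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
        print("mean abs difference before:", np.abs(moving - fixed).mean(),
              "after:", np.abs(final - fixed).mean())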

  6. Evaluation of three electronic report processing systems for preparing hydrologic reports of the U.S Geological Survey, Water Resources Division

    USGS Publications Warehouse

    Stiltner, G.J.

    1990-01-01

    In 1987, the Water Resources Division of the U.S. Geological Survey undertook three pilot projects to evaluate electronic report processing systems as a means to improve the quality and timeliness of reports pertaining to water resources investigations. The three projects selected for study included the use of the following configuration of software and hardware: Ventura Publisher software on an IBM model AT personal computer, PageMaker software on a Macintosh computer, and FrameMaker software on a Sun Microsystems workstation. The following assessment criteria were to be addressed in the pilot studies: The combined use of text, tables, and graphics; analysis of time; ease of learning; compatibility with the existing minicomputer system; and technical limitations. It was considered essential that the camera-ready copy produced be in a format suitable for publication. Visual improvement alone was not a consideration. This report consolidates and summarizes the findings of the electronic report processing pilot projects. Text and table files originating on the existing minicomputer system were successfully transformed to the electronic report processing systems in American Standard Code for Information Interchange (ASCII) format. Graphics prepared using a proprietary graphics software package were transferred to all the electronic report processing software through the use of Computer Graphic Metafiles. Graphics from other sources were entered into the systems by scanning paper images. Comparative analysis of time needed to process text and tables by the electronic report processing systems and by conventional methods indicated that, although more time is invested in creating the original page composition for an electronically processed report , substantial time is saved in producing subsequent reports because the format can be stored and re-used by electronic means as a template. Because of the more compact page layouts, costs of printing the reports were 15% to 25% less than costs of printing the reports prepared by conventional methods. Because the largest report workload in the offices conducting water resources investigations is preparation of Water-Resources Investigations Reports, Open-File Reports, and annual State Data Reports, the pilot studies only involved these projects. (USGS)

  7. The multimedia computer for low-literacy patient education: a pilot project of cancer risk perceptions.

    PubMed

    Wofford, J L; Currin, D; Michielutte, R; Wofford, M M

    2001-04-20

    Inadequate reading literacy is a major barrier to better educating patients. Despite its high prevalence, practical solutions for detecting and overcoming low literacy in a busy clinical setting remain elusive. In exploring the potential role for the multimedia computer in improving office-based patient education, we compared the accuracy of information captured from audio-computer interviewing of patients with that obtained from subsequent verbal questioning. The setting was an adult medicine clinic in an urban community health center; participants were a convenience sample of patients awaiting clinic appointments (n = 59). Exclusion criteria included obvious psychoneurologic impairment or a primary language other than English. The intervention was a multimedia computer presentation that used audio-computer interviewing with localized imagery and voices to elicit responses to 4 questions on prior computer use and cancer risk perceptions. Three patients refused or were unable to interact with the computer at all, and 3 patients required restarting the presentation from the beginning but ultimately completed the computerized survey. Of the 51 evaluable patients (72.5% African-American, 66.7% female, mean age 47.5 [+/- 18.1]), the mean time in the computer presentation was significantly longer with older age and with no prior computer use but did not differ by gender or race. Despite a high proportion of no prior computer use (60.8%), there was a high rate of agreement (88.7% overall) between audio-computer interviewing and subsequent verbal questioning. Audio-computer interviewing is feasible in this urban community health center. The computer offers a partial solution for overcoming literacy barriers inherent in written patient education materials and provides an efficient means of data collection that can be used to better target patients' educational needs.

  8. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  9. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  10. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  11. The Relationship between Emotional Intelligence and Attitudes toward Computer-Based Instruction of Postsecondary Hospitality Students

    ERIC Educational Resources Information Center

    Behnke, Carl; Greenan, James P.

    2011-01-01

    This study examined the relationship between postsecondary students' emotional-social intelligence and attitudes toward computer-based instructional materials. Research indicated that emotions and emotional intelligence directly impact motivation, while instructional design has been shown to impact student attitudes and subsequent engagement with…

  12. Counterfactual Thinking and Anticipated Emotions Enhance Performance in Computer Skills Training

    ERIC Educational Resources Information Center

    Chan, Amy Y. C.; Caputi, Peter; Jayasuriya, Rohan; Browne, Jessica L.

    2013-01-01

    The present study examined the relationship between novice learners' counterfactual thinking (i.e. generating "what if" and "if only" thoughts) about their initial training experience with a computer application and subsequent improvement in task performance. The role of anticipated emotions towards goal attainment in task…

  13. Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors

    DTIC Science & Technology

    2015-03-26

    methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... [The remaining extracted fragments are table-of-contents entries: Past Methods of Experimental Evaluation; Modeling Efforts; Other Considerations; Monte Carlo Methods.]

  14. Designing a Network and Systems Computing Curriculum: The Stakeholders and the Issues

    ERIC Educational Resources Information Center

    Tan, Grace; Venables, Anne

    2010-01-01

    Since 2001, there has been a dramatic decline in Information Technology and Computer Science student enrolments worldwide. As a consequence, many institutions have evaluated their offerings and revamped their programs to include units designed to capture students' interests and increase subsequent enrolment. Likewise, at Victoria University the…

  15. E-Assessment Adaptation at a Military Vocational College: Student Perceptions

    ERIC Educational Resources Information Center

    Cigdem, Harun; Oncu, Semiral

    2015-01-01

    This survey study examines an assessment methodology through e-quizzes administered at a military vocational college and subsequent student perceptions in spring 2013 at the "Computer Networks" course. A total of 30 Computer Technologies and 261 Electronic and Communication Technologies students took three e-quizzes. Data were gathered…

  16. Development of Hybrid Computer Programs for AAFSS/COBRA/COIN Weapons Effectiveness Studies. Volume I. Simulating Aircraft Maneuvers and Weapon Firing Runs.

    DTIC Science & Technology

    for the game. Subsequent duels, flown with single armed escorts, calculated reduction in losses and damage states. For the study, hybrid computer... (6) a duel between a ground weapon, armed escort, and formation of lift aircraft. (Author)

  17. New horizons in forensic radiology: the 60-second digital autopsy-full-body examination of a gunshot victim by multislice computed tomography.

    PubMed

    Thali, Michael J; Schweitzer, Wolf; Yen, Kathrin; Vock, Peter; Ozdoba, Christoph; Spielvogel, Elke; Dirnhofer, Richard

    2003-03-01

    The goal of this study was the full-body documentation of a gunshot wound victim with multislice helical computed tomography for subsequent comparison with the findings of the standard forensic autopsy. Complete volume data of the head, neck, and trunk were acquired by use of two acquisitions of less than 1 minute of total scanning time. Subsequent two-dimensional multiplanar reformations and three-dimensional shaded surface display reconstructions helped document the gunshot-created skull fractures and brain injuries, including the wound track, and the intracerebral bone fragments. Computed tomography also demonstrated intracardiac air embolism and pulmonary aspiration of blood resulting from bullet wound-related trauma. The "digital autopsy," even when postprocessing time was added, was more rapid than the classic forensic autopsy and, based on the nondestructive approach, offered certain advantages in comparison with the forensic autopsy.

  18. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes

    PubMed Central

    2017-01-01

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926–1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745–2750; Thiessen & Yee 2010 Child Development 81, 1287–1303; Saffran 2002 Journal of Memory and Language 47, 172–196; Misyak & Christiansen 2012 Language Learning 62, 302–331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246–263; Thiessen et al. 2013 Psychological Bulletin 139, 792–814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik 2013 Cognitive Science 37, 310–343). This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences'. PMID:27872374

  19. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    PubMed

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik 2013 Cognitive Science 37, 310-343). This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
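
    A toy sketch of the sequential statistic most often cited in this literature, the transitional probability between syllables, computed on a Saffran-style familiarization stream; dips in the statistic mark candidate word boundaries. The memory-based account summarized above argues that learners need not compute this statistic explicitly; the sketch only shows what the statistic itself looks like on synthetic input.

        # Sketch: transitional probabilities over a toy syllable stream.
        import random
        from collections import Counter

        random.seed(0)
        words = ["golabu", "tupiro", "bidaku"]

        # Build a toy continuous familiarization stream of syllables in random word order.
        stream = []
        for _ in range(300):
            w = random.choice(words)
            stream.extend(w[i:i + 2] for i in range(0, 6, 2))

        pair_counts = Counter(zip(stream, stream[1:]))
        first_counts = Counter(stream[:-1])

        def transitional_probability(a, b):
            """P(b | a) estimated from the syllable stream."""
            return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

        print("within-word :", round(transitional_probability("go", "la"), 2))   # ~1.0
        print("across words:", round(transitional_probability("bu", "tu"), 2))   # ~0.33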

  20. Unicameral bone cyst in the spinous process of a thoracic vertebra.

    PubMed

    Tsirikos, Athanasios I; Bowen, J Richard

    2002-10-01

    Unicameral bone cysts affecting the spine are extremely rare and tend to be misdiagnosed. We report on a 17-year-old female patient who presented with a 2-year history of persistent low back pain. The radiographic evaluation and bone scan failed to reveal a pathologic process. Magnetic resonance of the painful area and subsequent computed tomography scan showed a well-circumscribed osteolytic lesion originating from the spinous process and extending into both laminae of T9 vertebra. Aneurysmal bone cyst or osteoblastoma was considered to be the most probable diagnosis. The patient underwent excisional biopsy of the tumor. The intraoperative findings were suggestive of solitary bone cyst, a diagnosis that was confirmed histologically. Because the tumor had not invaded the articular facets, no posterolateral spine fusion was required. The patient had an unremarkable postoperative clinical course. Her symptoms resolved and she returned to her previous level of physical activities. Unicameral bone cysts, although uncommon, should be included in the differential diagnosis of an osteolytic lesion involving the spine.

  1. Neuroimaging with functional near infrared spectroscopy: From formation to interpretation

    NASA Astrophysics Data System (ADS)

    Herrera-Vega, Javier; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe

    2017-09-01

    Functional Near Infrared Spectroscopy (fNIRS) is gaining momentum as a functional neuroimaging modality to investigate the cerebral hemodynamics subsequent to neural metabolism. Like other neuroimaging modalities, it is a tool for neuroscience to understand brain system function at the behavioural and cognitive levels. To extract useful knowledge from functional neuroimages it is critical to understand the series of transformations applied during information retrieval and how they bound the interpretation. This process starts with the irradiation of the head tissues with infrared light to obtain the raw neuroimage, proceeds with computational and statistical analysis that reveals hidden associations between pixel intensities and the neural activity they encode, and ends with the explanation of some particular aspect of brain function. To comprehend the overall process involved in fNIRS there is extensive literature addressing each individual step separately. This paper overviews the complete transformation sequence through image formation, reconstruction and analysis to provide an insight into the final functional interpretation.

  2. Graphics processing unit accelerated phase field dislocation dynamics: Application to bi-metallic interfaces

    DOE PAGES

    Eghtesad, Adnan; Germaschewski, Kai; Beyerlein, Irene J.; ...

    2017-10-14

    We present the first high-performance computing implementation of the meso-scale phase field dislocation dynamics (PFDD) model on a graphics processing unit (GPU)-based platform. The implementation takes advantage of the portable OpenACC standard directive pragmas along with Nvidia's compute unified device architecture (CUDA) fast Fourier transform (FFT) library called CUFFT to execute the FFT computations within the PFDD formulation on the same GPU platform. The overall implementation is termed ACCPFDD-CUFFT. The package is entirely performance portable due to the use of OpenACC-CUDA inter-operability, in which calls to CUDA functions are replaced with the OpenACC data regions for a host central processing unit (CPU) and device (GPU). A comprehensive benchmark study has been conducted, which compares a number of FFT routines, the Numerical Recipes FFT (FOURN), Fastest Fourier Transform in the West (FFTW), and the CUFFT. The last one exploits the advantages of the GPU hardware for FFT calculations. The novel ACCPFDD-CUFFT implementation is verified using the analytical solutions for the stress field around an infinite edge dislocation and subsequently applied to simulate the interaction and motion of dislocations through a bi-phase copper-nickel (Cu–Ni) interface. It is demonstrated that the ACCPFDD-CUFFT implementation on a single TESLA K80 GPU offers a 27.6X speedup relative to the serial version and a 5X speedup relative to the 22-multicore Intel Xeon CPU E5-2699 v4 @ 2.20 GHz version of the code.
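    The reported speedups come from offloading the model's FFTs to the GPU. As a rough, hedged illustration of that idea only (not the ACCPFDD-CUFFT code, which uses OpenACC and CUFFT directly), the Python sketch below times a 3D FFT round trip on the CPU with NumPy and, if the optional CuPy package is installed, on the GPU; CuPy's FFT routines dispatch to cuFFT. The array size and timing setup are assumptions for the example.

```python
import time
import numpy as np

try:
    import cupy as cp          # CuPy's FFT module dispatches to cuFFT on the GPU
    HAVE_GPU = True
except ImportError:
    HAVE_GPU = False

def bench_fft(xp, shape=(256, 256, 256)):
    """Time a 3D forward/inverse FFT pair with the given array module."""
    data = xp.random.random(shape).astype(xp.complex64)
    t0 = time.perf_counter()
    spectrum = xp.fft.fftn(data)
    _ = xp.fft.ifftn(spectrum)
    if xp is not np:
        cp.cuda.Stream.null.synchronize()   # wait for the GPU work to finish
    return time.perf_counter() - t0

print("CPU (NumPy):", bench_fft(np))
if HAVE_GPU:
    # Note: the first GPU call includes FFT plan creation overhead.
    print("GPU (CuPy/cuFFT):", bench_fft(cp))
```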

  3. Graphics processing unit accelerated phase field dislocation dynamics: Application to bi-metallic interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eghtesad, Adnan; Germaschewski, Kai; Beyerlein, Irene J.

    We present the first high-performance computing implementation of the meso-scale phase field dislocation dynamics (PFDD) model on a graphics processing unit (GPU)-based platform. The implementation takes advantage of the portable OpenACC standard directive pragmas along with Nvidia's compute unified device architecture (CUDA) fast Fourier transform (FFT) library called CUFFT to execute the FFT computations within the PFDD formulation on the same GPU platform. The overall implementation is termed ACCPFDD-CUFFT. The package is entirely performance portable due to the use of OpenACC-CUDA inter-operability, in which calls to CUDA functions are replaced with the OpenACC data regions for a host central processing unit (CPU) and device (GPU). A comprehensive benchmark study has been conducted, which compares a number of FFT routines, the Numerical Recipes FFT (FOURN), Fastest Fourier Transform in the West (FFTW), and the CUFFT. The last one exploits the advantages of the GPU hardware for FFT calculations. The novel ACCPFDD-CUFFT implementation is verified using the analytical solutions for the stress field around an infinite edge dislocation and subsequently applied to simulate the interaction and motion of dislocations through a bi-phase copper-nickel (Cu–Ni) interface. It is demonstrated that the ACCPFDD-CUFFT implementation on a single TESLA K80 GPU offers a 27.6X speedup relative to the serial version and a 5X speedup relative to the 22-multicore Intel Xeon CPU E5-2699 v4 @ 2.20 GHz version of the code.

  4. Computer simulations of disordering kinetics in irradiated intermetallic compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spaczer, M.; Caro, A.; Victoria, M.

    1994-11-01

    Molecular-dynamics computer simulations of collision cascades in intermetallic Cu3Au, Ni3Al, and NiAl have been performed to study the nature of the disordering processes in the collision cascade. The choice of these systems was suggested by the quite accurate description of the thermodynamic properties obtained using embedded-atom-type potentials. Since melting occurs in the core of the cascades, interesting effects appear as a result of the superposition of the loss (and subsequent recovery) of the crystalline order and the evolution of the chemical order, both processes being developed on different time scales. In our previous simulations on Ni3Al and Cu3Au [T. Diaz de la Rubia, A. Caro, and M. Spaczer, Phys. Rev. B 47, 11483 (1993)] we found a significant difference between the time evolution of the chemical short-range order (SRO) and the crystalline order in the cascade core for both alloys, namely the complete loss of the crystalline structure but only partial chemical disordering. Recent computer simulations in NiAl show the same phenomena. To understand these features we study the liquid phase of these three alloys and present simulation results concerning the dynamical melting of small samples, examining the atomic mobility, the relaxation time, and the saturation value of the chemical short-range order. An analytic model for the time evolution of the SRO is given.

  5. Force and Stress along Simulated Dissociation Pathways of Cucurbituril-Guest Systems.

    PubMed

    Velez-Vega, Camilo; Gilson, Michael K

    2012-03-13

    The field of host-guest chemistry provides computationally tractable yet informative model systems for biomolecular recognition. We applied molecular dynamics simulations to study the forces and mechanical stresses associated with forced dissociation of aqueous cucurbituril-guest complexes with high binding affinities. First, the unbinding transitions were modeled with constant velocity pulling (steered dynamics) and a soft spring constant, to model atomic force microscopy (AFM) experiments. The computed length-force profiles yield rupture forces in good agreement with available measurements. We also used steered dynamics with high spring constants to generate paths characterized by a tight control over the specified pulling distance; these paths were then equilibrated via umbrella sampling simulations and used to compute time-averaged mechanical stresses along the dissociation pathways. The stress calculations proved to be informative regarding the key interactions determining the length-force profiles and rupture forces. In particular, the unbinding transition of one complex is found to be a stepwise process, which is initially dominated by electrostatic interactions between the guest's ammoniums and the host's carbonyl groups, and subsequently limited by the extraction of the guest's bulky bicyclooctane moiety; the latter step requires some bond stretching at the cucurbituril's extraction portal. Conversely, the dissociation of a second complex with a more slender guest is mainly driven by successive electrostatic interactions between the different guest's ammoniums and the host's carbonyl groups. The calculations also provide information on the origins of thermodynamic irreversibilities in these forced dissociation processes.

  6. Kinematic Measurement of Knee Prosthesis from Single-Plane Projection Images

    NASA Astrophysics Data System (ADS)

    Hirokawa, Shunji; Ariyoshi, Shogo; Takahashi, Kenji; Maruyama, Koichi

    In this paper, the measurement of 3D motion from 2D perspective projections of a knee prosthesis is described. The technique reported by Banks and Hodge was further developed in this study. The estimation was performed in two steps. The first-step estimation was performed on the assumption of orthogonal projection. The second-step estimation was then carried out based upon the perspective projection to accomplish a more accurate estimation. The simulation results demonstrated that the technique achieved sufficient accuracy of position/orientation estimation for prosthetic kinematics. We then applied our algorithm to the CCD images, thereby examining the influences of various artifacts, possibly incorporated through the imaging process, on the estimation accuracy. We found that accuracy in the experiment was influenced mainly by the geometric discrepancies between the prosthesis component and the computer-generated model and by the spatial inconsistencies between the coordinate axes of the positioner and those of the computer model. However, we verified that our algorithm could achieve proper and consistent estimation even for the CCD images.

  7. Design optimization of hydraulic turbine draft tube based on CFD and DOE method

    NASA Astrophysics Data System (ADS)

    Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin

    2018-03-01

    In order to improve the performance of a hydraulic turbine draft tube during its design process, the draft tube is optimized in this paper on a multi-disciplinary collaborative design optimization platform that combines computational fluid dynamics (CFD) and design of experiments (DOE). The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective function is to maximize the pressure recovery factor (Cp). Sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performances are evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are carried out. The optimum values of the geometrical design variables are then determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.
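    As a hedged sketch of the DOE portion of this workflow only, the Python code below draws a Latin hypercube sample over two hypothetical geometric variables, evaluates a placeholder objective standing in for the CFD-computed pressure recovery factor, and fits a simple quadratic response surface; the variable names, bounds and objective are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import qmc

# Two hypothetical geometric design variables (e.g., a mid-section radius and
# an exit-diffuser radius, in metres); bounds are illustrative only.
lower, upper = [0.8, 1.0], [1.2, 1.6]

sampler = qmc.LatinHypercube(d=2, seed=1)
samples = qmc.scale(sampler.random(n=20), lower, upper)   # DOE sample matrix

def evaluate_cp(x):
    """Placeholder for the CFD run that would return the pressure recovery factor."""
    r_mid, r_exit = x
    return 0.7 - (r_mid - 1.0) ** 2 - 0.5 * (r_exit - 1.3) ** 2

cp_values = np.array([evaluate_cp(x) for x in samples])

# Quadratic response surface fitted to the DOE results, then maximised on a
# dense Latin hypercube grid.
A = np.column_stack([np.ones(len(samples)), samples, samples ** 2,
                     samples[:, 0] * samples[:, 1]])
coeffs, *_ = np.linalg.lstsq(A, cp_values, rcond=None)

grid = qmc.scale(sampler.random(n=2000), lower, upper)
G = np.column_stack([np.ones(len(grid)), grid, grid ** 2, grid[:, 0] * grid[:, 1]])
best = grid[np.argmax(G @ coeffs)]
print("Predicted optimum (r_mid, r_exit):", best)
```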

  8. Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms.

    PubMed

    James, Ella L; Bonsall, Michael B; Hoppitt, Laura; Tunbridge, Elizabeth M; Geddes, John R; Milton, Amy L; Holmes, Emily A

    2015-08-01

    Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind's eye and cause distress. We investigated whether reconsolidation-the process during which memories become malleable when recalled-can be blocked using a cognitive task and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. © The Author(s) 2015.

  9. Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms

    PubMed Central

    James, Ella L.; Bonsall, Michael B.; Hoppitt, Laura; Tunbridge, Elizabeth M.; Geddes, John R.; Milton, Amy L.

    2015-01-01

    Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind’s eye and cause distress. We investigated whether reconsolidation—the process during which memories become malleable when recalled—can be blocked using a cognitive task and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. PMID:26133572

  10. Real-time detection and data acquisition system for the left ventricular outline. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Reiber, J. H. C.

    1976-01-01

    To automate the data acquisition procedure, a real-time contour detection and data acquisition system for the left ventricular outline was developed using video techniques. The X-ray image of the contrast-filled left ventricle is stored for subsequent processing on film (cineangiogram), video tape or disc. The cineangiogram is converted into video format using a television camera. The video signal from either the TV camera, video tape or disc is the input signal to the system. The contour detection is based on a dynamic thresholding technique. Since the left ventricular outline is a smooth continuous function, for each contour side a narrow expectation window is defined in which the next border point will be detected. A computer interface was designed and built for the online acquisition of the coordinates using a PDP-12 computer. The advantage of this system over other available systems is its potential for online, real-time acquisition of the left ventricular size and shape during angiocardiography.

  11. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  12. Incremental Lexical Learning in Speech Production: A Computational Model and Empirical Evaluation

    ERIC Educational Resources Information Center

    Oppenheim, Gary Michael

    2011-01-01

    Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have…

  13. ANTLR Tree Grammar Generator and Extensions

    NASA Technical Reports Server (NTRS)

    Craymer, Loring

    2005-01-01

    A computer program implements two extensions of ANTLR (Another Tool for Language Recognition), which is a set of software tools for translating source codes between different computing languages. ANTLR supports predicated-LL(k) lexer and parser grammars, a notation for annotating parser grammars to direct tree construction, and predicated tree grammars. [LL(k) signifies left-right, leftmost derivation with k tokens of look-ahead, referring to certain characteristics of a grammar.] One of the extensions is a syntax for tree transformations. The other extension is the generation of tree grammars from annotated parser or input tree grammars. These extensions can simplify the process of generating source-to-source language translators and they make possible an approach, called "polyphase parsing," to translation between computing languages. The typical approach to translator development is to identify high-level semantic constructs such as "expressions," "declarations," and "definitions" as fundamental building blocks in the grammar specification used for language recognition. The polyphase approach is to lump ambiguous syntactic constructs during parsing and then disambiguate the alternatives in subsequent tree transformation passes. Polyphase parsing is believed to be useful for generating efficient recognizers for C++ and other languages that, like C++, have significant ambiguities.

  14. Liquid Microjunction Surface Sampling Probe Fluid Dynamics: Computational and Experimental Analysis of Coaxial Intercapillary Positioning Effects on Sample Manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ElNaggar, Mariam S; Barbier, Charlotte N; Van Berkel, Gary J

    A coaxial geometry liquid microjunction surface sampling probe (LMJ-SSP) enables direct extraction of analytes from surfaces for subsequent analysis by techniques like mass spectrometry. Solution dynamics at the probe-to-sample surface interface in the LMJ-SSP has been suspected to influence sampling efficiency and dispersion but has not been rigorously investigated. The effect on flow dynamics and analyte transport to the mass spectrometer caused by coaxial retraction of the inner and outer capillaries from each other and the surface during sampling with a LMJ-SSP was investigated using computational fluid dynamics and experimentation. A transparent LMJ-SSP was constructed to provide the means for visual observation of the dynamics of the surface sampling process. Visual observation, computational fluid dynamics (CFD) analysis, and experimental results revealed that inner capillary axial retraction from the flush position relative to the outer capillary transitioned the probe from a continuous sampling and injection mode through an intermediate regime to a sample plug formation mode caused by eddy currents at the sampling end of the probe. The potential for analytical implementation of these newly discovered probe operational modes is discussed.

  15. Efficient architecture for spike sorting in reconfigurable hardware.

    PubMed

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-11-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation.
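    To illustrate the GHA feature-extraction step in software terms (a hedged NumPy sketch, not the paper's FPGA circuit), the code below applies Sanger's rule to synthetic spike-like waveforms and produces the low-dimensional features that a clustering stage such as FCM would consume; the waveform model, learning rate and component count are assumptions for the example.

```python
import numpy as np

def gha_train(X, n_components=3, lr=1e-3, epochs=20, seed=0):
    """Generalized Hebbian Algorithm (Sanger's rule): iteratively learns the
    leading principal components of zero-mean data X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x                                            # component outputs
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Synthetic "spike waveforms": 64-sample snippets mixing two dominant shapes.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64)
shapes = np.vstack([np.exp(-((t - 0.3) / 0.05) ** 2),
                    -np.exp(-((t - 0.5) / 0.08) ** 2)])
X = rng.normal(size=(500, 2)) @ shapes + 0.05 * rng.normal(size=(500, 64))
X -= X.mean(axis=0)

W = gha_train(X)
features = X @ W.T      # low-dimensional features for subsequent clustering (e.g. FCM)
print(features.shape)   # (500, 3)
```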

  16. Making Advanced Scientific Algorithms and Big Scientific Data Management More Accessible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatakrishnan, S. V.; Mohan, K. Aditya; Beattie, Keith

    2016-02-14

    Synchrotrons such as the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory are known as user facilities. They are sources of extremely bright X-ray beams, and scientists come from all over the world to perform experiments that require these beams. As the complexity of experiments has increased, and the size and rates of data sets have exploded, managing, analyzing and presenting the data collected at synchrotrons has been an increasing challenge. The ALS has partnered with high performance computing, fast networking, and applied mathematics groups to create a "super-facility", giving users simultaneous access to the experimental, computational, and algorithmic resources to overcome this challenge. This combination forms an efficient closed loop, where data, despite its high rate and volume, is transferred and processed, in many cases immediately and automatically, on appropriate compute resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beam-time. In this paper, we present work done on advanced tomographic reconstruction algorithms to support users of the 3D micron-scale imaging instrument (Beamline 8.3.2, hard X-ray micro-tomography).

  17. Knowledge-acquisition tools for medical knowledge-based systems.

    PubMed

    Lanzola, G; Quaglini, S; Stefanelli, M

    1995-03-01

    Knowledge-based systems (KBS) have been proposed to solve a large variety of medical problems. A strategic issue for KBS development and maintenance is the effort required from both knowledge engineers and domain experts. The proposed solution is building efficient knowledge acquisition (KA) tools. This paper presents a set of KA tools we are developing within a European Project called GAMES II. They have been designed after the formulation of an epistemological model of medical reasoning. The main goal is that of developing a computational framework which allows knowledge engineers and domain experts to interact cooperatively in developing a medical KBS. To this aim, a set of reusable software components is highly recommended. Their design was facilitated by the development of a methodology for KBS construction. It views this process as comprising two activities: the tailoring of the epistemological model to the specific medical task to be executed and the subsequent translation of this model into a computational architecture so that the connections between computational structures and their knowledge level counterparts are maintained. The KA tools we developed are illustrated taking examples from the behavior of a KBS we are building for the management of children with acute myeloid leukemia.

  18. A machine-learned analysis of human gene polymorphisms modulating persisting pain points at major roles of neuroimmune processes.

    PubMed

    Kringel, Dario; Lippmann, Catharina; Parnham, Michael J; Kalso, Eija; Ultsch, Alfred; Lötsch, Jörn

    2018-06-19

    Human genetic research has implicated functional variants of more than one hundred genes in the modulation of persisting pain. Artificial intelligence and machine learning techniques may combine this knowledge with results of genetic research gathered in any context, which permits the identification of the key biological processes involved in chronic sensitization to pain. Based on published evidence, a set of 110 genes carrying variants reported to be associated with modulation of the clinical phenotype of persisting pain in eight different clinical settings was submitted to unsupervised machine-learning aimed at functional clustering. Subsequently, a mathematically supported subset of genes, comprising those most consistently involved in persisting pain, was analyzed by means of computational functional genomics in the Gene Ontology knowledgebase. Clustering of genes with evidence for a modulation of persisting pain elucidated a functionally heterogeneous set. The situation cleared when the focus was narrowed to a genetic modulation consistently observed throughout several clinical settings. On this basis, two groups of biological processes, the immune system and nitric oxide signaling, emerged as major players in sensitization to persisting pain, which is biologically highly plausible and in agreement with other lines of pain research. The present computational functional genomics-based approach provided a computational systems-biology perspective on chronic sensitization to pain. Human genetic control of persisting pain points to the immune system as a source of potential future targets for drugs directed against persisting pain. Contemporary machine-learned methods provide innovative approaches to knowledge discovery from previous evidence. This article is protected by copyright. All rights reserved.

  19. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid laced with a fluorescent dye or microspheres flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination purposes, a planar laser sheet passes through the column as a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For measuring velocities, while the aqueous fluid, laced with fluorescent microspheres, flows through the transparent medium, a CCD camera records the motions of the fluorescing particles on a video cassette recorder. The recorded images are acquired automatically frame by frame and transferred to the computer for processing, using a frame grabber and purpose-written algorithms, through an RS-232 interface. Since the grabbed image is of poor quality at this stage, preprocessing is applied to enhance the particles within the images. Finally, these enhanced particles are monitored to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid, laced with a fluorescent organic dye, flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. Subsequently, these recorded images are transferred to the computer for processing in a similar fashion to the velocity measurement. In order to have a fully automatic vision system, several detailed image processing techniques were developed to match images that have different intensity values but the same topological characteristics. This results in normalized interstitial chemical concentrations as a function of time within the porous column.

  20. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography.

    PubMed

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A

    2009-11-07

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs)).
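    As a hedged sketch of just the segmentation step described above (not the full GPU-based correction chain), the Python code below labels a toy CT volume as air, soft tissue or bone using illustrative Hounsfield-unit thresholds and builds the piecewise-constant tissue-class model whose forward projection would replace the metal-corrupted detector values; the thresholds and class values are assumptions, not the paper's.

```python
import numpy as np

# Illustrative Hounsfield-unit thresholds (assumed, not the paper's values):
# below -300 HU -> air, -300..300 HU -> soft tissue, above 300 HU -> bone.
THRESHOLDS = [-300.0, 300.0]
CLASS_VALUES = {0: -1000.0, 1: 40.0, 2: 1000.0}   # HU assigned to air / soft tissue / bone

def tissue_class_model(volume_hu):
    """Segment a CT volume into air/soft-tissue/bone and return a piecewise-
    constant model volume for forward projection of metal-corrupted rays."""
    labels = np.digitize(volume_hu, THRESHOLDS)    # 0, 1 or 2 per voxel
    model = np.empty_like(volume_hu, dtype=np.float32)
    for label, hu in CLASS_VALUES.items():
        model[labels == label] = hu
    return labels, model

# Toy 3D volume: soft-tissue background with a bone cube and an air pocket.
vol = np.full((64, 64, 64), 30.0, dtype=np.float32)
vol[20:30, 20:30, 20:30] = 1200.0
vol[40:45, 40:45, 40:45] = -900.0
labels, model = tissue_class_model(vol)
print(np.bincount(labels.ravel()))   # voxel counts per tissue class
```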

  1. Iterons, fractals and computations of automata

    NASA Astrophysics Data System (ADS)

    Siwak, Paweł

    1999-03-01

    Processing of strings by some automata, when viewed on space-time (ST) diagrams, reveals characteristic soliton-like coherent periodic objects. They are inherently associated with iterations of automata mappings, thus we call them the iterons. In the paper we present two classes of one-dimensional iterons: particles and filtrons. The particles are typical for parallel (cellular) processing, while filtrons, introduced in (32), are specific for serial processing of strings. In general, the images of iterated automata mappings exhibit not only coherent entities but also fractals, and quasi-periodic and chaotic dynamics. We show typical images of such computations: fractals, multiplication by a number, and addition of binary numbers defined by a Turing machine. Then, the particles are presented as iterons generated by cellular automata in three computations: B/U code conversion (13, 29), majority classification (9), and a discrete version of the FPU (Fermi-Pasta-Ulam) dynamics (7, 23). We disclose particles by a technique of combinational recoding of ST diagrams (as opposed to sequential recoding). Subsequently, we recall the recursive filters based on FCA (filter cellular automata) window operators, considered by Park (26), Ablowitz (1), Fokas (11), Fuchssteiner (12), Bruschi (5) and Jiang (20). We present the automata equivalents to these filters (33). Some of them belong to the class of filter automata introduced in (30). We also define and illustrate some properties of filtrons. Contrary to particles, the filtrons interact nonlocally in the sense that distant symbols may influence one another. Thus their interactions are very unusual. Some examples have been given in (32). Here we show new examples of filtron phenomena: multifiltron solitonic collisions, attracting and repelling filtrons, trapped bouncing filtrons (which behave like a resonance cavity) and quasi filtrons.
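    For readers unfamiliar with space-time diagrams of automata, the hedged Python sketch below iterates a generic elementary cellular automaton (rule 110, chosen only as an illustration; it is not one of the filter automata studied in the paper) and prints part of its ST diagram, in which localized propagating structures of the kind the paper calls particles can be seen against a periodic background.

```python
import numpy as np

def elementary_ca(rule, initial, steps):
    """Iterate a one-dimensional elementary cellular automaton (periodic
    boundaries) and return its space-time diagram: rows = time, columns = cells."""
    table = np.array([(rule >> k) & 1 for k in range(8)], dtype=np.uint8)
    rows = [np.asarray(initial, dtype=np.uint8)]
    for _ in range(steps):
        s = rows[-1]
        code = 4 * np.roll(s, 1) + 2 * s + np.roll(s, -1)   # neighbourhood code 0..7
        rows.append(table[code])
    return np.vstack(rows)

# Rule 110 from a random initial row: the ST diagram shows localized propagating
# structures ("particles"/gliders) moving over a periodic background.
rng = np.random.default_rng(3)
diagram = elementary_ca(110, rng.integers(0, 2, size=120), steps=60)

for row in diagram[:30]:                  # crude text rendering of the ST diagram
    print("".join("#" if c else "." for c in row))
```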

  2. Microfocal X-ray computed tomography post-processing operations for optimizing reconstruction volumes of stented arteries during 3D computational fluid dynamics modeling.

    PubMed

    Ladisa, John F; Olson, Lars E; Ropella, Kristina M; Molthen, Robert C; Haworth, Steven T; Kersten, Judy R; Warltier, David C; Pagel, Paul S

    2005-08-01

    Restenosis caused by neointimal hyperplasia (NH) remains an important clinical problem after stent implantation. Restenosis varies with stent geometry, and idealized computational fluid dynamics (CFD) models have indicated that geometric properties of the implanted stent may differentially influence NH. However, 3D studies capturing the in vivo flow domain within stented vessels have not been conducted at a resolution sufficient to detect subtle alterations in vascular geometry caused by the stent and the subsequent temporal development of NH. We present the details and limitations of a series of post-processing operations used in conjunction with microfocal X-ray CT imaging and reconstruction to generate geometrically accurate flow domains within the localized region of a stent several weeks after implantation. Microfocal X-ray CT reconstruction volumes were subjected to an automated program to perform arterial thresholding, spatial orientation, and surface smoothing of stented and unstented rabbit iliac arteries several weeks after antegrade implantation. A transfer function was obtained for the current post-processing methodology containing reconstructed 16 mm stents implanted into rabbit iliac arteries for up to 21 days after implantation and resolved at circumferential and axial resolutions of 32 and 50 microm, respectively. The results indicate that the techniques presented are sufficient to resolve distributions of wall shear stress (WSS) with 80% accuracy in segments containing 16 surface perturbations over a 16 mm stented region. These methods will be used to test the hypothesis that reductions in normalized WSS and increases in the spatial disparity of WSS immediately after stent implantation may spatially correlate with the temporal development of NH within the stented region.

  3. Guidelines for Preparation of a Scientific Paper

    PubMed Central

    Kosiba, Margaret M.

    1988-01-01

    Even the experienced scientific writer may have difficulty transferring research results to clear, concise, publishable words. To assist the beginning scientific writer, guidelines are proposed that will provide direction for determining a topic, developing protocols, collecting data, using computers for analysis and word processing, incorporating copyediting notations, consulting scientific writing manuals, and developing sound writing habits. Guidelines for writing each section of a research paper are described to help the writer prepare the title page, introduction, materials and methods, results, and discussion sections of the paper, as well as the acknowledgments and references. Procedures for writing the first draft and subsequent revisions include a checklist of structural and stylistic problems and common errors in English usage. PMID:3339646

  4. Protostar formation in the early universe.

    PubMed

    Yoshida, Naoki; Omukai, Kazuyuki; Hernquist, Lars

    2008-08-01

    The nature of the first generation of stars in the universe remains largely unknown. Observations imply the existence of massive primordial stars early in the history of the universe, and the standard theory for the growth of cosmic structure predicts that structures grow hierarchically through gravitational instability. We have developed an ab initio computer simulation of the formation of primordial stars that follows the relevant atomic and molecular processes in a primordial gas in an expanding universe. The results show that primeval density fluctuations left over from the Big Bang can drive the formation of a tiny protostar with a mass 1% that of the Sun. The protostar is a seed for the subsequent formation of a massive primordial star.

  5. Application of Semantic Tagging to Generate Superimposed Information on a Digital Encyclopedia

    NASA Astrophysics Data System (ADS)

    Garrido, Piedad; Tramullas, Jesus; Martinez, Francisco J.

    Several works can be found in the literature regarding the automatic or semi-automatic processing of textual documents with historic information using free software technologies. However, more research is needed to integrate analysis of the context and to cover the peculiarities of the Spanish language from a semantic point of view. This research work proposes a novel knowledge-based strategy that combines subject-centric computing, a topic-oriented approach, and superimposed information. Its subsequent combination with artificial intelligence techniques led to an automatic analysis, after implementing a made-to-measure interpreted algorithm, which in turn produced a good number of associations and events with 90% reliability.

  6. Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bhanu, Bir

    1992-01-01

    Recent work on INS integrated motion analysis is described. Results were obtained with a maximally passive system of obstacle detection (OD) for ground-based vehicles and rotorcraft. The OD approach involves motion analysis of imagery acquired by a passive sensor in the course of vehicle travel to generate range measurements to world points within the sensor FOV. INS data and scene analysis results are used to enhance interest point selection, the matching of the interest points, and the subsequent motion-based computations, tracking, and OD. The most important lesson learned from the research described here is that the incorporation of inertial data into the motion analysis program greatly improves the analysis and makes the process more robust.

  7. Starting apparatus for internal combustion engines

    DOEpatents

    Dyches, Gregory M.; Dudar, Aed M.

    1997-01-01

    An internal combustion engine starting apparatus uses a signal from a current sensor to determine when the engine is energized and the starter motor should be de-energized. One embodiment comprises a transmitter, receiver, computer processing unit, current sensor and relays to energize a starter motor and subsequently de-energize the same when the engine is running. Another embodiment comprises a switch, current transducer, low-pass filter, gain/comparator, relay and a plurality of switches to energize and de-energize a starter motor. Both embodiments contain an indicator lamp or speaker which alerts an operator as to whether a successful engine start has been achieved. Both embodiments also contain circuitry to protect the starter and to de-energize the engine.

  8. Melodic Priming of Motor Sequence Performance: The Role of the Dorsal Premotor Cortex.

    PubMed

    Stephan, Marianne A; Brown, Rachel; Lega, Carlotta; Penhune, Virginia

    2016-01-01

    The purpose of this study was to determine whether exposure to specific auditory sequences leads to the induction of new motor memories and to investigate the role of the dorsal premotor cortex (dPMC) in this crossmodal learning process. Fifty-two young healthy non-musicians were familiarized with the sound to key-press mapping on a computer keyboard and tested on their baseline motor performance. Each participant received subsequently either continuous theta burst stimulation (cTBS) or sham stimulation over the dPMC and was then asked to remember a 12-note melody without moving. For half of the participants, the contour of the melody memorized was congruent to a subsequently performed, but never practiced, finger movement sequence (Congruent group). For the other half, the melody memorized was incongruent to the subsequent finger movement sequence (Incongruent group). Hearing a congruent melody led to significantly faster performance of a motor sequence immediately thereafter compared to hearing an incongruent melody. In addition, cTBS speeded up motor performance in both groups, possibly by relieving motor consolidation from interference by the declarative melody memorization task. Our findings substantiate recent evidence that exposure to a movement-related tone sequence can induce specific, crossmodal encoding of a movement sequence representation. They further suggest that cTBS over the dPMC may enhance early offline procedural motor skill consolidation in cognitive states where motor consolidation would normally be disturbed by concurrent declarative memory processes. These findings may contribute to a better understanding of auditory-motor system interactions and have implications for the development of new motor rehabilitation approaches using sound and non-invasive brain stimulation as neuromodulatory tools.

  9. Analog Design for Digital Deployment of a Serious Leadership Game

    NASA Technical Reports Server (NTRS)

    Maxwell, Nicholas; Lang, Tristan; Herman, Jeffrey L.; Phares, Richard

    2012-01-01

    This paper presents the design, development, and user testing of a leadership development simulation. The authors share lessons learned from using a design process for a board game to allow for quick and inexpensive revision cycles during the development of a serious leadership development game. The goal of this leadership simulation is to accelerate the development of leadership capacity in high-potential mid-level managers (GS-15 level) in a federal government agency. Simulation design included a mixed-method needs analysis, using both quantitative and qualitative approaches to determine organizational leadership needs. Eight design iterations were conducted, including three user testing phases. Three re-design iterations followed initial development, enabling game testing as part of comprehensive instructional events. Subsequent design, development and testing processes targeted digital application to a computer- and tablet-based environment. Recommendations include pros and cons of development and learner testing of an initial analog simulation prior to full digital simulation development.

  10. Vectorized image segmentation via trixel agglomeration

    DOEpatents

    Prasad, Lakshman [Los Alamos, NM; Skourikhine, Alexei N [Los Alamos, NM

    2006-10-24

    A computer-implemented method transforms an image comprised of pixels into a vectorized image specified by a plurality of polygons that can be subsequently used to aid in image processing and understanding. The pixelated image is processed to extract edge pixels that separate different colors, and a constrained Delaunay triangulation of the edge pixels forms a plurality of triangles having edges that cover the pixelated image. A color for each one of the plurality of triangles is determined from the color pixels within each triangle. A filter is formed with a set of grouping rules related to features of the pixelated image and applied to the plurality of triangle edges to merge adjacent triangles consistent with the filter into polygons having a plurality of vertices. The pixelated image may then be reformed into an array of the polygons, which can be represented collectively and efficiently as a standard vector image.
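    The hedged Python sketch below illustrates only the first two stages of this pipeline on a toy image: extracting edge pixels where neighbouring colors differ and triangulating them. It uses SciPy's unconstrained Delaunay triangulation as a stand-in for the constrained triangulation of the method described above, and the image, edge test and sizes are invented for the example.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy RGB image: a blue disk on a red background.
h, w = 64, 64
image = np.zeros((h, w, 3), dtype=np.uint8)
image[..., 0] = 200
yy, xx = np.mgrid[0:h, 0:w]
image[(yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = (40, 40, 220)

# Edge pixels: locations where the color differs from the lower or right neighbour.
img = image.astype(int)
diff = (np.abs(np.diff(img, axis=0)).sum(axis=-1)[:, :-1] +
        np.abs(np.diff(img, axis=1)).sum(axis=-1)[:-1, :])
edge_y, edge_x = np.nonzero(diff > 0)
points = np.column_stack([edge_x, edge_y]).astype(float)

# Unconstrained Delaunay triangulation of the edge pixels; each triangle could
# then be coloured from the pixels it covers and merged with its neighbours.
tri = Delaunay(points)
print("edge pixels:", len(points), "triangles:", len(tri.simplices))
```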

  11. Application of linker technique to trap transiently interacting protein complexes for structural studies

    PubMed Central

    Reddy Chichili, Vishnu Priyanka; Kumar, Veerendra; Sivaraman, J.

    2016-01-01

    Protein-protein interactions are key events controlling several biological processes. We have developed and employed a method to trap transiently interacting protein complexes for structural studies using glycine-rich linkers to fuse interacting partners, one of which is unstructured. Initial steps involve isothermal titration calorimetry to identify the minimum binding region of the unstructured protein in its interaction with its stable binding partner. This is followed by computational analysis to identify the approximate site of the interaction and to design an appropriate linker length. Subsequently, fused constructs are generated and characterized using size exclusion chromatography and dynamic light scattering experiments. The structure of the chimeric protein is then solved by crystallization, and validated both in vitro and in vivo by substituting key interacting residues of the full length, unlinked proteins with alanine. This protocol offers the opportunity to study crucial and currently unattainable transient protein interactions involved in various biological processes. PMID:26985443

  12. Thermal Destruction of TETS: Experiments and Modeling ...

    EPA Pesticide Factsheets

    Symposium Paper. In the event of contamination involving chemical warfare agents (CWAs) or toxic industrial chemicals (TICs), large quantities of potentially contaminated materials, both indoor and outdoor, may be treated by thermal incineration during the site remediation process. Even if the CWAs or TICs of interest are not particularly thermally stable and might be expected to decompose readily in a high temperature combustion environment, the refractory nature of many materials found inside and outside buildings may present heat transfer challenges in an incineration system, depending on how the materials are packaged and fed into the incinerator. This paper reports on a study to examine the thermal decomposition of a banned rodenticide, tetramethylene disulfotetramine (TETS), in a laboratory reactor, analysis of the results using classical reactor design theory, and subsequent scale-up of the results to a computer simulation of a full-scale commercial hazardous waste incinerator processing ceiling tile contaminated with residual TETS.

  13. Gaining insights from social media language: Methodologies and challenges.

    PubMed

    Kern, Margaret L; Park, Gregory; Eichstaedt, Johannes C; Schwartz, H Andrew; Sap, Maarten; Smith, Laura K; Ungar, Lyle H

    2016-12-01

    Language data available through social media provide opportunities to study people at an unprecedented scale. However, little guidance is available to psychologists who want to enter this area of research. Drawing on tools and techniques developed in natural language processing, we first introduce psychologists to social media language research, identifying descriptive and predictive analyses that language data allow. Second, we describe how raw language data can be accessed and quantified for inclusion in subsequent analyses, exploring personality as expressed on Facebook to illustrate. Third, we highlight challenges and issues to be considered, including accessing and processing the data, interpreting effects, and ethical issues. Social media has become a valuable part of social life, and there is much we can learn by bringing together the tools of computer science with the theories and insights of psychology. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
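    As a hedged, minimal illustration of the "quantify raw language data" step described above (not the authors' pipeline, which relies on dedicated natural language processing tooling), the Python sketch below turns a few invented status updates into per-user relative word frequencies that could feed later analyses.

```python
import re
from collections import Counter

# Hypothetical status updates keyed by (anonymised) user id.
posts = {
    "user_a": ["So excited for the weekend!!", "Great run this morning :)"],
    "user_b": ["Deadlines, deadlines, deadlines...", "Another long night at work"],
}

def tokenize(text):
    """Very rough tokenizer: lowercase word characters and apostrophes only."""
    return re.findall(r"[a-z']+", text.lower())

def relative_frequencies(messages):
    """Per-user relative word frequencies, usable as features in later analyses."""
    counts = Counter(tok for msg in messages for tok in tokenize(msg))
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

features = {user: relative_frequencies(msgs) for user, msgs in posts.items()}
print(features["user_a"].get("excited", 0.0))
```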

  14. The renormalization group and the implicit function theorem for amplitude equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkinis, Eleftherios

    2008-07-15

    This article lays down the foundations of the renormalization group (RG) approach for differential equations characterized by multiple scales. The renormalization of constants through an elimination process and the subsequent derivation of the amplitude equation [Chen et al., Phys. Rev. E 54, 376 (1996)] are given a rigorous but not abstract mathematical form whose justification is based on the implicit function theorem. Developing the theoretical framework that underlies the RG approach leads to a systematization of the renormalization process and to the derivation of explicit closed-form expressions for the amplitude equations that can be carried out with symbolic computation for both linear and nonlinear scalar differential equations and first order systems but independently of their particular forms. Certain nonlinear singular perturbation problems are considered that illustrate the formalism and recover well-known results from the literature as special cases.

  15. Computational Model of the Fathead Minnow Hypothalamic-Pituitary-Gonadal Axis: Incorporating Protein Synthesis in Improving Predictability of Responses to Endocrine Active Chemicals

    EPA Science Inventory

    There is international concern about chemicals that alter endocrine system function in humans and/or wildlife and subsequently cause adverse effects. We previously developed a mechanistic computational model of the hypothalamic-pituitary-gonadal (HPG) axis in female fathead minno...

  16. NEDLite user's manual: forest inventory for Palm OS handheld computers

    Treesearch

    Peter D. Knopp; Mark J. Twery

    2006-01-01

    A user's manual for NEDLite, software that enables collection of forest inventory data on Palm OS handheld computers, with the option of transferring data into NED software for analysis and subsequent prescription development. NEDLite software is included. Download the NEDLite software at: http://www.fs.fed.us/ne/burlington/ned

  17. Computing Robust, Bootstrap-Adjusted Fit Indices for Use with Nonnormal Data

    ERIC Educational Resources Information Center

    Walker, David A.; Smith, Thomas J.

    2017-01-01

    Nonnormality of data presents unique challenges for researchers who wish to carry out structural equation modeling. The subsequent SPSS syntax program computes bootstrap-adjusted fit indices (comparative fit index, Tucker-Lewis index, incremental fit index, and root mean square error of approximation) that adjust for nonnormality, along with the…

  18. Submillisecond Optical Knife-Edge Testing

    NASA Technical Reports Server (NTRS)

    Thurlow, P.

    1983-01-01

    Fast computer-controlled sampling of the optical knife-edge response (KER) signal increases the accuracy of optical-system aberration measurement. Submicrosecond-response detectors in the optical focal plane convert optical signals to electrical signals, which are converted to digital data, sampled and fed into a computer for storage and subsequent analysis. Optical data are virtually free of the effects of index-of-refraction gradients.

  19. Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2015-04-01

    Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.

  20. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis: preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of a structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaborative analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database, etc.). The dynamical model is described by a set of ordinary differential equations (ODEs) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high-level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The Metadata of these records is also stored in the GRID federated database. This Metadata contains both relevant information about the earthquake (as is usual in a seismic repository) and the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODEs over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding Metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically based algorithm from the available Metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with ongoing initiatives to build a European earthquake record database. In this context, GRID enables collaborative analysis over shared seismic data and results among different institutions.
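    The hedged Python sketch below mirrors the per-accelerogram computation described above in miniature: it integrates a single-degree-of-freedom structural model under a synthetic ground-acceleration record and extracts the maximum displacement. The structural parameters, the synthetic record and the use of SciPy's ODE solver are illustrative assumptions (the framework itself encapsulates the model in Java classes), and the GRID job handling, storage and metadata steps are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-degree-of-freedom structure (mass-normalised); values are illustrative.
omega = 2 * np.pi * 1.5      # natural circular frequency, rad/s (1.5 Hz structure)
zeta = 0.05                  # damping ratio

# Synthetic ground-acceleration record standing in for a stored accelerogram (m/s^2).
rng = np.random.default_rng(0)
dt, duration = 0.01, 20.0
t_rec = np.arange(0, duration, dt)
accel = rng.normal(scale=1.0, size=t_rec.size) * np.exp(-t_rec / 8.0)

def rhs(t, y):
    """u'' + 2*zeta*omega*u' + omega**2 * u = -a_g(t), with y = [u, u']."""
    a_g = np.interp(t, t_rec, accel)
    u, v = y
    return [v, -2 * zeta * omega * v - omega ** 2 * u - a_g]

sol = solve_ivp(rhs, (0.0, duration), [0.0, 0.0], max_step=dt)
max_disp = np.max(np.abs(sol.y[0]))
print(f"maximum relative displacement: {max_disp:.4f} m")
```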

  1. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    PubMed

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications but their quality and computation speed are not optimal for real-time comparison with the radiography acquired with an x-ray source of different energies. In this paper, the authors performed polyenergetic forward projections using open computing language (OpenCL) in a parallel computing ecosystem consisting of CPU and general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of interested sites are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n) and the x-ray fluence is the weighted sum of the exponential of the line integral for all energy bins with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three (air, gray/white matter, and bone) regions for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of CPU and GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent digitally reconstructed radiographs (DRRs). A dispatcher was designed to drive the high-degree parallelism of the task overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin in comparison to 8.83 s on the CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s for CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10^-6, and the images were virtually indistinguishable. The task overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and the subsequent DRRs, respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with parallel computing using a GPGPU/OpenCL-based implementation. The computation time was fast enough (0.3 s per projection image) for real-time IGRT (image-guided radiotherapy) applications.
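    The hedged NumPy sketch below shows just the fluence model stated above, the weighted sum over energy bins of the exponential of the line integral with added Poisson noise, for a single detector ray; the spectrum weights, attenuation coefficients and path lengths are invented stand-ins, and the Siddon ray tracing and OpenCL scheduling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-bin spectrum: weights w(n) for energy bins E(n) (keV).
energy_bins = np.array([500.0, 1500.0, 4000.0])            # keV
weights = np.array([0.5, 0.3, 0.2])                        # sum to 1

# Per-tissue linear attenuation coefficients (1/cm) for each bin; values are
# illustrative stand-ins for (mu/rho) lookups scaled by density.
mu = {"air": [0.0001, 0.00005, 0.00003],
      "soft": [0.10, 0.06, 0.04],
      "bone": [0.30, 0.15, 0.09]}

# Path lengths (cm) through each tissue along one detector ray, e.g. as a
# Siddon traversal of the segmented phantom would provide.
path_cm = {"air": 10.0, "soft": 18.0, "bone": 2.5}

# Line integral per energy bin, then energy-weighted transmitted fluence.
line_integral = np.array([sum(mu[t][n] * path_cm[t] for t in mu)
                          for n in range(len(energy_bins))])
incident_photons = 1.0e5
expected = incident_photons * np.sum(weights * np.exp(-line_integral))
detected = rng.poisson(expected)                            # added Poisson noise
print(f"expected {expected:.1f} photons, detected {detected}")
```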

  2. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records.

    PubMed

    Duz, Marco; Marshall, John F; Parkin, Tim

    2017-06-29

    The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free-text notes. Validation was performed by comparison of the computer-assisted method with manual analysis, which was used as the gold standard. Sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and F values of the computer-assisted process were calculated by comparing them with the manual classification. Lowest sensitivity, specificity, PPVs, NPVs, and F values were 99.82% (1128/1130), 99.88% (16410/16429), 94.6% (223/239), 100.00% (16410/16412), and 99.0% (100×2×0.983×0.998/[0.983+0.998]), respectively. The computer-assisted process required a few seconds to run, although an estimated 30 h were required for dictionary creation. Manual classification required approximately 80 man-hours. The critical step in this work is the creation of accurate and inclusive dictionaries to ensure that no potential cases are missed. It is significantly easier to remove false positive terms from an SS/WS-selected subset of a large database than to search the original database for potential false negatives. The benefits of using this method are proportional to the size of the dataset to be analyzed. ©Marco Duz, John F Marshall, Tim Parkin. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 29.06.2017.
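
    The dictionary logic (inclusion, then exclusion, then reinclusion) and the validation metrics are simple set operations and counts. The Python sketch below illustrates them with made-up dictionaries and rows; it is not the SimStat-WordStat workflow itself, and the example terms are hypothetical.

      # Minimal sketch: dictionary-based case selection plus validation metrics.
      import re

      inclusion = {"colic"}                                   # hypothetical inclusion terms
      exclusion = {("no", "colic")}                           # word combinations to exclude
      reinclusion = {("no", "colic", "initially", "but")}     # combinations to re-include

      def tokens(row):
          return tuple(re.findall(r"[a-z]+", row.lower()))

      def has_combo(tk, combo):
          return all(w in tk for w in combo)

      def classify(row):
          tk = tokens(row)
          included = any(w in tk for w in inclusion)
          excluded = any(has_combo(tk, c) for c in exclusion)
          reincluded = any(has_combo(tk, c) for c in reinclusion)
          return (included and not excluded) or reincluded

      def metrics(pred, gold):
          tp = sum(p and g for p, g in zip(pred, gold))
          tn = sum((not p) and (not g) for p, g in zip(pred, gold))
          fp = sum(p and (not g) for p, g in zip(pred, gold))
          fn = sum((not p) and g for p, g in zip(pred, gold))
          sens, spec = tp / (tp + fn), tn / (tn + fp)
          ppv, npv = tp / (tp + fp), tn / (tn + fn)
          return sens, spec, ppv, npv, 2 * ppv * sens / (ppv + sens)

      rows = ["mare presented with colic", "no colic noted today", "routine vaccination"]
      gold = [True, False, False]                             # manual classification
      print(metrics([classify(r) for r in rows], gold))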

  3. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records

    PubMed Central

    Marshall, John F; Parkin, Tim

    2017-01-01

    Background The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. Objective The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. Methods The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using Rv3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free-text notes. Validation was performed by comparison of the computer-assisted method with manual analysis, which was used as the gold standard. Sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and F values of the computer-assisted process were calculated by comparing them with the manual classification. Results Lowest sensitivity, specificity, PPVs, NPVs, and F values were 99.82% (1128/1130), 99.88% (16410/16429), 94.6% (223/239), 100.00% (16410/16412), and 99.0% (100×2×0.983×0.998/[0.983+0.998]), respectively. The computer-assisted process required few seconds to run, although an estimated 30 h were required for dictionary creation. Manual classification required approximately 80 man-hours. Conclusions The critical step in this work is the creation of accurate and inclusive dictionaries to ensure that no potential cases are missed. It is significantly easier to remove false positive terms from a SS/WS selected subset of a large database than search that original database for potential false negatives. The benefits of using this method are proportional to the size of the dataset to be analyzed. PMID:28663163

  4. 20 CFR 663.535 - What is the process for determining the subsequent eligibility of a provider?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... subsequent eligibility of a provider? 663.535 Section 663.535 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR ADULT AND DISLOCATED WORKER ACTIVITIES UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Eligible Training Providers § 663.535 What is the process for determining the subsequent...

  5. 20 CFR 663.535 - What is the process for determining the subsequent eligibility of a provider?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... subsequent eligibility of a provider? 663.535 Section 663.535 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR ADULT AND DISLOCATED WORKER ACTIVITIES UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Eligible Training Providers § 663.535 What is the process for determining the subsequent...

  6. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometers resolution and is quickly turning into a new technology for studying petrophysical properties of materials. An important step is the upscaling of these properties, because imaging at micron or sub-micron resolution can only be performed on samples at the millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples as well as biological and food-science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at pore-scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at pore-scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big data tool for multi-scale scientific and engineering problems.

  7. An in silico method to identify computer-based protocols worthy of clinical study: An insulin infusion protocol use case

    PubMed Central

    Wong, Anthony F; Pielmeier, Ulrike; Haug, Peter J; Andreassen, Steen

    2016-01-01

    Objective Develop an efficient non-clinical method for identifying promising computer-based protocols for clinical study. An in silico comparison can provide information that informs the decision to proceed to a clinical trial. The authors compared two existing computer-based insulin infusion protocols: eProtocol-insulin from Utah, USA, and Glucosafe from Denmark. Materials and Methods The authors used eProtocol-insulin to manage intensive care unit (ICU) hyperglycemia with intravenous (IV) insulin from 2004 to 2010. Recommendations accepted by the bedside clinicians directly link the subsequent blood glucose values to eProtocol-insulin recommendations and provide a unique clinical database. The authors retrospectively compared in silico 18,984 eProtocol-insulin continuous IV insulin infusion rate recommendations from 408 ICU patients with those of Glucosafe, the candidate computer-based protocol. The subsequent blood glucose measurement value (low, on target, high) was used to identify whether the insulin recommendation was too high, on target, or too low. Results Glucosafe consistently provided more favorable continuous IV insulin infusion rate recommendations than eProtocol-insulin for on target (64% of comparisons), low (80% of comparisons), or high (70% of comparisons) blood glucose. Aggregated eProtocol-insulin and Glucosafe continuous IV insulin infusion rates were clinically similar though statistically significantly different (Wilcoxon signed-rank test, P = .01). In contrast, when stratified by low, on target, or high subsequent blood glucose measurement, insulin infusion rates from eProtocol-insulin and Glucosafe were statistically significantly different (Wilcoxon signed-rank test, P < .001), and clinically different. Discussion This in silico comparison appears to be an efficient nonclinical method for identifying promising computer-based protocols. Conclusion A preclinical in silico comparison analytical framework allows rapid and inexpensive identification of computer-based protocol care strategies that justify expensive and burdensome clinical trials. PMID:26228765
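
    A minimal Python sketch of the comparison logic follows, using synthetic infusion rates rather than the study data: recommendations are paired, stratified by the subsequent blood glucose band, and paired differences are tested with the Wilcoxon signed-rank test (scipy.stats.wilcoxon).

      # Minimal sketch: stratified paired comparison of two protocols' recommendations.
      import numpy as np
      from scipy.stats import wilcoxon

      rng = np.random.default_rng(2)
      n = 1000
      eprotocol = rng.gamma(shape=2.0, scale=1.5, size=n)          # units/h (synthetic)
      glucosafe = eprotocol + rng.normal(-0.1, 0.4, size=n)        # synthetic counterpart
      subsequent_bg = rng.choice(["low", "on_target", "high"], size=n, p=[0.1, 0.6, 0.3])

      for band in ("low", "on_target", "high"):
          mask = subsequent_bg == band
          stat, p = wilcoxon(eprotocol[mask], glucosafe[mask])
          print(band, "n =", int(mask.sum()),
                "median diff =", round(float(np.median(eprotocol[mask] - glucosafe[mask])), 3),
                "p =", round(float(p), 4))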

  8. New methods for clinical pathways-Business Process Modeling Notation (BPMN) and Tangible Business Process Modeling (t.BPM).

    PubMed

    Scheuerlein, Hubert; Rauchfuss, Falk; Dittmar, Yves; Molle, Rüdiger; Lehmann, Torsten; Pienkos, Nicole; Settmacher, Utz

    2012-06-01

    Clinical pathways (CP) are nowadays used in numerous institutions, but their real impact is still a matter of debate. The optimal design of a clinical pathway remains unclear and is mainly determined by the expectations of the individual institution. The purpose of the pilot project described here was the development of two CP (colon and rectal carcinoma) according to Business Process Modeling Notation (BPMN) and Tangible Business Process Modeling (t.BPM). BPMN is an established standard for business process modelling in industry and economy. It is, in the broadest sense, a modelling language that enables the description and relatively easy graphical representation of complex processes. t.BPM is a modular construction kit of the BPMN symbols which enables the creation of an outline or rough model, e.g. by placing the symbols on a spread-out paper sheet. The outline thus created can then be transferred to the computer and further modified as required. CP for the treatment of colon and rectal cancer have been developed with the support of an external IT coach. The pathway was developed in an interdisciplinary and interprofessional manner (55 man-days over 15 working days). During this time, necessary interviews with medical, nursing and administrative staff were conducted as well. Both pathways were developed in parallel. Subsequent analysis was focussed on feasibility, expenditure, clarity and suitability for daily clinical practice. The familiarization with BPMN was relatively quick and intuitive. The use of t.BPM enabled the pragmatic, effective and results-directed creation of outlines for the CP. The development of both CP was completed, from the diagnostic evaluation to the adjuvant/neoadjuvant therapy and rehabilitation phase. The integration of checklists, guidelines and important medical or other documents is easily accomplished. A direct integration into the hospital computer system is currently not possible for technical reasons. BPMN and t.BPM are sufficiently suitable for the planned modelling and imaging of CP. The application in medicine is new, and transfer from industrial process management is in principle possible. BPMN-CP may be used for teaching and training, patient information and quality management. The graphical image is clearly structured and appealing. Even though the efficiency in the creation of BPMN-CP increases markedly after the training phase, high amounts of manpower and time are still required. The most sensible and consistent application of a BPMN-CP would be direct integration into the hospital computer system. The integration of a modelling language, such as BPMN, into hospital computer systems could be a very sensible approach for the development of new hospital information systems in the future.

  9. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    NASA Astrophysics Data System (ADS)

    Jiang, M.; de Vries, W.; Pertica, A.; Olivier, S.

    2011-09-01

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust vectors, thrust magnitudes, and times of burn. At any given instant, the distribution of the "point-cloud" of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the probability density distribution of the computed volume, including volume slicing, convex hulls and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
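
    A heavily simplified Python sketch of the Monte Carlo sampling idea follows (gravity and orbital dynamics are deliberately ignored, so this is only the sampling skeleton, not a usable RV computation): the burn epoch, thrust direction and delta-v magnitude are randomized, and the resulting point cloud of positions at a query time is examined.

      # Toy sketch: point cloud from randomized impulsive maneuvers (straight-line coasting only).
      import numpy as np

      rng = np.random.default_rng(3)
      n_samples = 20000
      r0 = np.array([7000.0, 0.0, 0.0])     # km, nominal position at loss of custody (assumed)
      v0 = np.array([0.0, 7.5, 0.0])        # km/s, nominal velocity (assumed)

      t_query = 600.0                                        # s after loss of custody
      t_burn = rng.uniform(0.0, t_query, n_samples)          # randomized burn epochs
      dv_mag = rng.uniform(0.0, 0.2, n_samples)              # km/s, assumed delta-v capability
      u = rng.normal(size=(n_samples, 3))
      u /= np.linalg.norm(u, axis=1, keepdims=True)          # random unit thrust vectors

      # Coast to the burn, apply the impulsive delta-v, coast to the query time.
      positions = (r0 + v0 * t_query) + (dv_mag[:, None] * u) * (t_query - t_burn)[:, None]
      print("point-cloud extent (km):", np.ptp(positions, axis=0))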

  10. Influence of mechanical rock properties and fracture healing rate on crustal fluid flow dynamics

    NASA Astrophysics Data System (ADS)

    Sachau, Till; Bons, Paul; Gomez-Rivas, Enrique; Koehn, Daniel; de Riese, Tamara

    2016-04-01

    Fluid flow in the Earth's crust is very slow over extended periods of time, during which it occurs within the connected pore space of rocks. If the fluid production rate exceeds a certain threshold, matrix permeability alone is insufficient to drain the fluid volume and fluid pressure builds up, thereby reducing the effective stress supported by the rock matrix. Hydraulic fractures form once the effective pressure exceeds the tensile strength of the rock matrix and act subsequently as highly effective fluid conduits. Once local fluid pressure is sufficiently low again, flow ceases and fractures begin to heal. Since fluid flow is controlled by the alternation of fracture permeability and matrix permeability, the flow rate in the system is strongly discontinuous and occurs in intermittent pulses. Resulting hydraulic fracture networks are largely self-organized: opening and subsequent healing of hydraulic fractures depends on the local fluid pressure and on the time-span between fluid pulses. We simulate this process with a computer model and describe the resulting dynamics statistically. Special interest is given to a) the spatially and temporally discontinuous formation and closure of fractures and fracture networks and b) the total flow rate over time. The computer model consists of a crustal-scale dual-porosity setup. Control parameters are the pressure- and time-dependent fracture healing rate, and the strength and the permeability of the intact rock. Statistical analysis involves determination of the multifractal properties and of the power spectral density of the temporal development of the total drainage rate and hydraulic fractures. References Bons, P. D. (2001). The formation of large quartz veins by rapid ascent of fluids in mobile hydrofractures. Tectonophysics, 336, 1-17. Miller, S. a., & Nur, A. (2000). Permeability as a toggle switch in fluid-controlled crustal processes. Earth and Planetary Science Letters, 183(1-2), 133-146. Sachau, T., Bons, P. D., & Gomez-Rivas, E. (2015). Transport efficiency and dynamics of hydraulic fracture networks. Frontiers in Physics, 3.

  11. Influence of regional climate change on meteorological characteristics and their subsequent effect on ozone dispersion in Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Fang-Yi; Jian, Shan-Ping; Yang, Zhih-Min; Yen, Ming-Cheng; Tsuang, Ben-Jei

    2015-02-01

    The objective of this study is to understand the influence of regional climate change on local meteorological conditions and their subsequent effect on local ozone (O3) dispersion in Taiwan. The 33-year NCEP-DOE Reanalysis 2 (NNR2) data set (1979-2011) was analyzed to understand the variations in regional-scale atmospheric conditions in East Asia and the western North Pacific. To save computational processing time, two scenarios representative of past (1979-86) and current (2004-11) atmospheric conditions were selected, targeting only the autumn season (September, October and November), when O3 concentrations are at high levels. Numerical simulations were performed using the Weather Research and Forecasting (WRF) model and the Community Multiscale Air Quality (CMAQ) model for the past and current scenarios individually, but only for the month of October because of limited computational resources. Analysis of the NNR2 data revealed increased air temperature, a weakened Asian continental anticyclone, an enhanced northeasterly monsoonal flow, and a deepened low-pressure system forming near Taiwan. With enhanced evaporation from the oceans along with the deepened low-pressure system, precipitation amounts increased in Taiwan in the current scenario. As demonstrated in the WRF simulation, the land-surface physical processes responded to the enhanced precipitation with wetter soil conditions and reduced ground temperatures, which in turn restricted the development of the boundary layer height. A weakened land-sea breeze circulation was also simulated in the current scenario. With reduced dispersion capability, air pollutants would tend to accumulate near the emission sources, leading to a degradation of air quality in this region. Conditions would be even worse in southwestern Taiwan because stagnant wind fields would occur more frequently in the current scenario. On the other hand, in northern Taiwan, the simulated O3 concentrations are lower during the day in the current scenario due to enhanced cloud conditions and reduced solar radiation.

  12. Simultaneous Measurements of Temperature and Major Species Concentration in a Hydrocarbon-Fueled Dual Mode Scramjet Using WIDECARS

    NASA Astrophysics Data System (ADS)

    Gallo, Emanuela Carolina Angela

    Width-increased dual-pump enhanced coherent anti-Stokes Raman spectroscopy (WIDECARS) measurements were conducted in a McKenna air-ethylene premixed burner over a nominal equivalence ratio range of 0.55 to 2.50 to provide simultaneous quantitative measurements of the concentrations of six major combustion species (C2H4, N2, O2, H2, CO, CO2) and of temperature. The purpose of this test was to investigate the uncertainties in the experimental and spectral modeling methods in preparation for a subsequent scramjet C2H4/air combustion test at the University of Virginia Aerospace Research Laboratory. A broadband Pyrromethene (PM) PM597 and PM650 dye laser mixture and optical cavity were studied and optimized to excite the Raman shifts of all the target species. Two hundred single-shot spectra were recorded, processed, theoretically fitted, and then compared to computational models to verify where chemical equilibrium or adiabatic conditions occurred, providing experimental flame location and structure, species concentrations, temperature, and heat-loss inputs to computational kinetic models. The Stark effect, temperature, and concentration errors are discussed. Subsequently, WIDECARS measurements of a premixed air-ethylene flame were successfully acquired in a direct-connect small-scale dual-mode scramjet combustor at the University of Virginia Supersonic Combustion Facility (UVaSCF). A nominal Mach 5 flight condition was simulated (stagnation pressure p0 = 300 kPa, temperature T0 = 1200 K, equivalence ratio range ER = 0.3-0.4). The purpose of this test was to provide quantitative measurements of the concentrations of the six major combustion species and of temperature. Point-wise measurements were taken by mapping four two-dimensional orthogonal planes (before, within, and two planes after the cavity flame holder) with respect to the combustor freestream direction. Two hundred single-shot spectra were recorded, processed, and theoretically fitted. Mean values and standard deviations are provided for each investigated case. Within the flame limits tested, WIDECARS data were analyzed and compared with CFD simulations and OH-PLIF measurements.

  13. Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

    PubMed

    Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z

    2018-05-15

    Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
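
    The core estimation step, predicting the response as a linear convolution of the stimulus with a kernel, can be written compactly as a regression on a lagged stimulus matrix. The Python sketch below uses ridge regression on synthetic data purely as an illustration; the paper itself estimates the kernels with boosting and cross-validation at every source element.

      # Minimal sketch: temporal response function (kernel) estimation by ridge regression.
      import numpy as np

      def lagged_design(stimulus, n_lags):
          # Column k holds the stimulus delayed by k samples.
          X = np.zeros((stimulus.size, n_lags))
          for k in range(n_lags):
              X[k:, k] = stimulus[: stimulus.size - k]
          return X

      def estimate_kernel(stimulus, response, n_lags=50, ridge=1.0):
          X = lagged_design(stimulus, n_lags)
          return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)

      rng = np.random.default_rng(4)
      stim = rng.standard_normal(5000)                     # e.g., an acoustic envelope
      true_kernel = np.exp(-np.arange(50) / 10.0) * np.sin(np.arange(50) / 3.0)
      resp = lagged_design(stim, 50) @ true_kernel + 0.5 * rng.standard_normal(5000)
      print(np.corrcoef(estimate_kernel(stim, resp), true_kernel)[0, 1])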

  14. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis, such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted from four categories: 1) basic image statistics, 2) the gray-level co-occurrence matrix (GLCM), 3) the gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  15. Computer-aided diagnosis and artificial intelligence in clinical imaging.

    PubMed

    Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio

    2011-11-01

    Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become a part of the routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (e.g., normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT. It is likely that CAD will be integrated into picture archiving and communication systems and will become a standard of care for diagnostic examinations in daily clinical work. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. THELMA: a mobile app for crowdsourcing environmental data

    NASA Astrophysics Data System (ADS)

    Hintz, Kenneth J.; Hintz, Christopher J.; Almomen, Faris; Adounvo, Christian; D'Amato, Michael

    2014-06-01

    The collection of environmental light pollution data related to sea turtle nesting sites is a laborious and time consuming effort entailing the use of several pieces of measurement equipment, their transportation and calibration, the manual logging of results in the field, and subsequent transfer of the data to a computer for post-collection analysis. Serendipitously, the current generation of mobile smart phones (e.g., iPhone® 5) contains the requisite measurement capability, namely location data in aided GPS coordinates, magnetic compass heading, and elevation at the time an image is taken, image parameter data, and the image itself. The Turtle Habitat Environmental Light Measurement App (THELMA) is a mobile phone app whose graphical user interface (GUI) guides an untrained user through the image acquisition process in order to capture 360° of images with pointing guidance. It subsequently uploads the user-tagged images, all of the associated image parameters, and position, azimuth, elevation metadata to a central internet repository. Provision is also made for the capture of calibration images and the review of images before upload. THELMA allows for inexpensive, highly-efficient, worldwide crowdsourcing of calibratable beachfront lighting/light pollution data collected by untrained volunteers. This data can be later processed, analyzed, and used by scientists conducting sea turtle conservation in order to identify beach locations with hazardous levels of light pollution that may alter sea turtle behavior and necessitate human intervention after hatchling emergence.

  17. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2], requiring user input, have been developed to quickly segment objects in serial sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective in object recognition and understanding. The GUI provides the user with the ability to define the gray-scale range of the object of interest. The lower and upper bounds of this range are used in a histogram-stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray-scale values, while observing the corresponding changes to the image. This interactive approach gives the user the power to make optimal choices in the contrast enhancement parameters.
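
    The two contrast operations the GUI exposes, range-based histogram stretching and gamma correction, are illustrated in the short NumPy sketch below (a stand-in for the interactive interface; the bounds and gamma value are arbitrary examples).

      # Minimal sketch: clip to a user-chosen gray-level range, stretch to [0, 1], apply gamma.
      import numpy as np

      def stretch_and_gamma(image, lower, upper, gamma=1.0):
          img = np.clip(image.astype(float), lower, upper)
          img = (img - lower) / (upper - lower)    # linear histogram stretch
          return img ** gamma                      # non-linear gray-level redistribution

      rng = np.random.default_rng(5)
      image = rng.integers(0, 256, size=(256, 256))
      enhanced = stretch_and_gamma(image, lower=60, upper=180, gamma=0.7)
      print(float(enhanced.min()), float(enhanced.max()))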

  18. Maximizing neotissue growth kinetics in a perfusion bioreactor: An in silico strategy using model reduction and Bayesian optimization.

    PubMed

    Mehrian, Mohammad; Guyot, Yann; Papantoniou, Ioannis; Olofsson, Simon; Sonnaert, Maarten; Misener, Ruth; Geris, Liesbet

    2018-03-01

    In regenerative medicine, computer models describing bioreactor processes can assist in designing optimal process conditions leading to robust and economically viable products. In this study, we started from a (3D) mechanistic model describing the growth of neotissue, composed of cells and extracellular matrix, in a perfusion bioreactor set-up influenced by the scaffold geometry, flow-induced shear stress, and a number of metabolic factors. Subsequently, we applied model reduction by reformulating the problem from a set of partial differential equations into a set of ordinary differential equations. The quality of the reduction step was assessed by comparing the reduced-model results to the mechanistic-model results and to dedicated experimental results. The obtained homogenized model is 10^5-fold faster than the 3D version, allowing the application of rigorous optimization techniques. Bayesian optimization was applied to find the medium refreshment regime, in terms of frequency and percentage of medium replaced, that would maximize neotissue growth kinetics during 21 days of culture. The simulation results indicated that maximum neotissue growth will occur for a high refreshment frequency and medium replacement percentage, a finding that is corroborated by reports in the literature. This study demonstrates an in silico strategy for bioprocess optimization paying particular attention to the reduction of the associated computational cost. © 2017 Wiley Periodicals, Inc.
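
    To make the optimization step concrete, the toy Python sketch below couples a logistic-style growth ODE to a nutrient balance that is partially restored at each medium refresh, and searches over refreshment interval and replaced fraction. This is only an illustration under invented kinetics; it is neither the authors' homogenized model nor their Bayesian optimizer (a plain grid search stands in for it).

      # Toy sketch: growth under periodic medium refreshment, optimized by grid search.
      import numpy as np
      from scipy.integrate import solve_ivp

      def final_growth(days, refresh_every, replaced_fraction):
          def rhs(t, y):
              tissue, nutrient = y
              growth = 0.5 * tissue * (1.0 - tissue) * nutrient / (0.1 + nutrient)
              return [growth, -0.3 * growth]                 # nutrient consumed by growth
          y, t0 = [0.05, 1.0], 0.0
          while t0 < days:
              t1 = min(t0 + refresh_every, days)
              y = solve_ivp(rhs, (t0, t1), y, max_step=0.1).y[:, -1].tolist()
              y[1] = (1.0 - replaced_fraction) * y[1] + replaced_fraction * 1.0   # refresh
              t0 = t1
          return y[0]

      best = max(((f, p, final_growth(21.0, f, p))
                  for f in (0.5, 1.0, 2.0, 3.0) for p in (0.25, 0.5, 0.75, 1.0)),
                 key=lambda c: c[2])
      print("refresh every %.1f d, replace %.0f%% -> final tissue %.3f"
            % (best[0], 100 * best[1], best[2]))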

  19. Prediction of the translocon-mediated membrane insertion free energies of protein sequences.

    PubMed

    Park, Yungki; Helms, Volkhard

    2008-05-15

    Helical membrane proteins (HMPs) play crucial roles in a variety of cellular processes. Unlike water-soluble proteins, HMPs need not only to fold but also get inserted into the membrane to be fully functional. This process of membrane insertion is mediated by the translocon complex. Thus, it is of great interest to develop computational methods for predicting the translocon-mediated membrane insertion free energies of protein sequences. We have developed Membrane Insertion (MINS), a novel sequence-based computational method for predicting the membrane insertion free energies of protein sequences. A benchmark test gives a correlation coefficient of 0.74 between predicted and observed free energies for 357 known cases, which corresponds to a mean unsigned error of 0.41 kcal/mol. These results are significantly better than those obtained by traditional hydropathy analysis. Moreover, the ability of MINS to reasonably predict membrane insertion free energies of protein sequences allows for effective identification of transmembrane (TM) segments. Subsequently, MINS was applied to predict the membrane insertion free energies of 316 TM segments found in known structures. An in-depth analysis of the predicted free energies reveals a number of interesting findings about the biogenesis and structural stability of HMPs. A web server for MINS is available at http://service.bioinformatik.uni-saarland.de/mins

  20. A computer program for performance prediction of tripropellant rocket engines with tangential slot injection

    NASA Technical Reports Server (NTRS)

    Dang, Anthony; Nickerson, Gary R.

    1987-01-01

    For the development of a Heavy Lift Launch Vehicle (HLLV) several engines with different operating cycles and using LOX/Hydrocarbon propellants are presently being examined. Some concepts utilize hydrogen for thrust chamber wall cooling followed by a gas generator turbine drive cycle with subsequent dumping of H2/O2 combustion products into the nozzle downstream of the throat. In the Space Transportation Booster Engine (STBE) selection process the specific impulse will be one of the optimization criteria; however, the current performance prediction programs do not have the capability to include a third propellant in this process, nor to account for the effect of dumping the gas-generator product tangentially inside the nozzle. The purpose is to describe a computer program for accurately predicting the performance of such an engine. The code consists of two modules; one for the inviscid performance, and the other for the viscous loss. For the first module, the two-dimensional kinetics program (TDK) was modified to account for tripropellant chemistry, and for the effect of tangential slot injection. For the viscous loss, the Mass Addition Boundary Layer program (MABL) was modified to include the effects of the boundary layer-shear layer interaction, and tripropellant chemistry. Calculations were made for a real engine and compared with available data.

  1. Guidelines for computer security in general practice.

    PubMed

    Schattner, Peter; Pleteshner, Catherine; Bhend, Heinz; Brouns, Johan

    2007-01-01

    As general practice becomes increasingly computerised, data security becomes increasingly important for both patient health and the efficient operation of the practice. To develop guidelines for computer security in general practice based on a literature review, an analysis of available information on current practice and a series of key stakeholder interviews. While the guideline was produced in the context of Australian general practice, we have developed a template that is also relevant for other countries. Current data on computer security measures was sought from Australian divisions of general practice. Semi-structured interviews were conducted with general practitioners (GPs), the medical software industry, senior managers within government responsible for health IT (information technology) initiatives, technical IT experts, divisions of general practice and a member of a health information consumer group. The respondents were asked to assess both the likelihood and the consequences of potential risks in computer security being breached. The study suggested that the most important computer security issues in general practice were: the need for a nominated IT security coordinator; having written IT policies, including a practice disaster recovery plan; controlling access to different levels of electronic data; doing and testing backups; protecting against viruses and other malicious codes; installing firewalls; undertaking routine maintenance of hardware and software; and securing electronic communication, for example via encryption. This information led to the production of computer security guidelines, including a one-page summary checklist, which were subsequently distributed to all GPs in Australia. This paper maps out a process for developing computer security guidelines for general practice. The specific content will vary in different countries according to their levels of adoption of IT, and cultural, technical and other health service factors. Making these guidelines relevant to local contexts should help maximise their uptake.

  2. Coupling Computer-Aided Process Simulation and ...

    EPA Pesticide Factsheets

    A methodology is described for developing a gate-to-gate life cycle inventory (LCI) of a chemical manufacturing process to support the application of life cycle assessment in the design and regulation of sustainable chemicals. The inventories were derived by first applying process design and simulation to develop a process flow diagram describing the energy and basic material flows of the system. Additional techniques developed by the U.S. Environmental Protection Agency for estimating uncontrolled emissions from chemical processing equipment were then applied to obtain a detailed emission profile for the process. Finally, land use for the process was estimated using a simple sizing model. The methodology was applied to a case study of acetic acid production based on the Cativa™ process. The results reveal improvements in the qualitative LCI for acetic acid production compared to commonly used databases and top-down methodologies. The modeling techniques improve the quantitative LCI results for inputs and uncontrolled emissions. With provisions for applying appropriate emission controls, the proposed method can provide an estimate of the LCI that can be used for subsequent life cycle assessments. As part of its mission, the Agency is tasked with overseeing the use of chemicals in commerce. This can include consideration of a chemical's potential impact on health and safety, resource conservation, clean air and climate change, clean water, and sustainable

  3. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
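
    One of the elliptic solves mentioned above, recovering the streamfunction from the forecasted vorticity, can be sketched in a few lines. The Python example below solves Laplacian(psi) = zeta with FFTs on a doubly periodic grid; it is a stand-in illustration only, not the accelerated block-cyclic-reduction solver used in the study, and the grid spacing and vorticity field are invented.

      # Minimal sketch: streamfunction from vorticity via a spectral Poisson solve.
      import numpy as np

      def streamfunction_from_vorticity(zeta, dx):
          ny, nx = zeta.shape
          kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
          ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
          KX, KY = np.meshgrid(kx, ky)
          k2 = KX**2 + KY**2
          zhat = np.fft.fft2(zeta)
          psihat = np.zeros_like(zhat)
          nonzero = k2 > 0
          psihat[nonzero] = -zhat[nonzero] / k2[nonzero]   # psi_hat = -zeta_hat / |k|^2
          return np.real(np.fft.ifft2(psihat))

      rng = np.random.default_rng(9)
      zeta = 1e-5 * rng.standard_normal((64, 64))          # s^-1, synthetic vorticity
      psi = streamfunction_from_vorticity(zeta, dx=100e3)  # 100-km nominal grid spacing
      print(psi.shape, float(psi.std()))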

  4. Exploring Issues about Computational Thinking in Higher Education

    ERIC Educational Resources Information Center

    Czerkawski, Betul C.; Lyman, Eugene W., III

    2015-01-01

    The term computational thinking (CT) has been in academic discourse for decades, but gained new currency in 2006, when Jeanette Wing used it to describe a set of thinking skills that students in all fields may require in order to succeed. Wing's initial article and subsequent writings on CT have been broadly influential; experts in…

  5. Theta synchronization networks emerge during human object-place memory encoding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2007-03-26

    Recent rodent hippocampus studies have suggested that theta rhythm-dependent neural dynamics ('theta phase precession') is essential for on-line memory formation. A computational study indicated that phase precession enables human object-place association memory with voluntary eye movements, although it is still an open question whether the human brain uses these dynamics. Here we elucidated subsequent-memory-correlated activity in human scalp electroencephalography during an object-place association memory task designed according to the former computational study. Our results successfully demonstrated that subsequent memory recall is characterized by an increase in theta power and coherence, and further, that multiple theta synchronization networks emerge. These findings suggest that humans share theta dynamics with rodents in episodic memory formation.

  6. 'Tagger' - a Mac OS X Interactive Graphical Application for Data Inference and Analysis of N-Dimensional Datasets in the Natural Physical Sciences.

    NASA Astrophysics Data System (ADS)

    Morse, P. E.; Reading, A. M.; Lueg, C.

    2014-12-01

    Pattern-recognition in scientific data is not only a computational problem but a human-observer problem as well. Human observation of - and interaction with - data visualization software can augment, select, interrupt and modify computational routines and facilitate processes of pattern and significant feature recognition for subsequent human analysis, machine learning, expert and artificial intelligence systems. 'Tagger' is a Mac OS X interactive data visualisation tool that facilitates Human-Computer interaction for the recognition of patterns and significant structures. It is a graphical application developed using the Quartz Composer framework. 'Tagger' follows a Model-View-Controller (MVC) software architecture: the application problem domain (Model) is to facilitate novel ways of abstractly representing data to a human interlocutor, presenting these via different viewer modalities (e.g. chart representations, particle systems, parametric geometry) to the user (View) and enabling interaction with the data (Controller) via a variety of Human Interface Devices (HID). The software enables the user to create an arbitrary array of tags that may be appended to the visualised data, which are then saved into output files as forms of semantic metadata. Three fundamental problems that are not strongly supported by conventional scientific visualisation software are addressed: 1] how to visually animate data over time; 2] how to rapidly deploy unconventional, parametrically driven data visualisations; and 3] how to construct and explore novel interaction models that capture the activity of the end-user as semantic metadata that can be used to computationally enhance subsequent interrogation. Saved tagged data files may be loaded into Tagger, so that tags may be tagged, if desired. Recursion opens up the possibility of refining or overlapping different types of tags, tagging a variety of different POIs or types of events, and of capturing different types of specialist observations of important or noticeable events. Other visualisations and modes of interaction will also be demonstrated, with the aim of discovering knowledge in large datasets in the natural, physical sciences. Fig. 1: Wave height data from an oceanographic Wave Rider Buoy; colors/radii are driven by wave height data.

  7. The measurement of boundary layers on a compressor blade in cascade. Volume 1: Experimental technique, analysis and results

    NASA Technical Reports Server (NTRS)

    Zierke, William C.; Deutsch, Steven

    1989-01-01

    Measurements were made of the boundary layers and wakes about a highly loaded, double-circular-arc compressor blade in cascade. These laser Doppler velocimetry measurements have yielded a very detailed and precise data base with which to test the application of viscous computational codes to turbomachinery. In order to test the computational codes at off-design conditions, the data were acquired at a chord Reynolds number of 500,000 and at three incidence angles. Moreover, these measurements have supplied some physical insight into these very complex flows. Although some natural transition is evident, laminar boundary layers usually detach and subsequently reattach as either fully or intermittently turbulent boundary layers. These transitional separation bubbles play an important role in the development of most of the boundary layers and wakes measured in this cascade and the modeling or computing of these bubbles should prove to be the key aspect in computing the entire cascade flow field. In addition, the nonequilibrium turbulent boundary layers on these highly loaded blades always have some region of separation near the trailing edge of the suction surface. These separated flows, as well as the subsequent near wakes, show no similarity and should prove to be a challenging test for the viscous computational codes.

  8. Possibilities in optical monitoring of laser welding process

    NASA Astrophysics Data System (ADS)

    Horník, Petr; Mrňa, Libor; Pavelka, Jan

    2016-11-01

    Laser welding is a modern welding method that is widely used in industry but still not commonplace. With increasing demands on weld quality, it is usual to apply automated machine welding with on-line monitoring of the welding process. The resulting quality of the weld is largely affected by the behavior of the keyhole. However, its direct observation during the welding process is practically impossible and it is necessary to use indirect methods. At ISI we have developed optical methods for monitoring the process. The most advanced is an analysis of the radiation of the laser-induced plasma plume forming in the keyhole, in which changes in the frequency of the plasma bursts are monitored and evaluated using Fourier and autocorrelation analysis. Another solution, robust and suitable for industry, is based on the observation of the keyhole inlet opening through a coaxial camera mounted in the welding head and the subsequent image processing by computer vision methods. A high-speed camera is used to understand the dynamics of the plasma plume. Through optical spectroscopy of the plume, we can study the excitation of elements in a material. It is also beneficial to monitor the flow of shielding gas using the schlieren method.

  9. PPM Receiver Implemented in Software

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A computer program has been written as a tool for developing optical pulse-position- modulation (PPM) receivers in which photodetector outputs are fed to analog-to-digital converters (ADCs) and all subsequent signal processing is performed digitally. The program can be used, for example, to simulate an all-digital version of the PPM receiver described in Parallel Processing of Broad-Band PPM Signals (NPO-40711), which appears elsewhere in this issue of NASA Tech Briefs. The program can also be translated into a design for digital PPM receiver hardware. The most notable innovation embodied in the software and the underlying PPM-reception concept is a digital processing subsystem that performs synchronization of PPM time slots, even though the digital processing is, itself, asynchronous in the sense that no attempt is made to synchronize it with the incoming optical signal a priori and there is no feedback to analog signal processing subsystems or ADCs. Functions performed by the software receiver include time-slot synchronization, symbol synchronization, coding preprocessing, and diagnostic functions. The program is written in the MATLAB and Simulink software system. The software receiver is highly parameterized and, hence, programmable: for example, slot- and symbol-synchronization filters have programmable bandwidths.

  10. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
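
    The energy-based idea behind such thresholding can be sketched briefly: decompose each curve with a wavelet transform and keep the smallest set of coefficients preserving a chosen fraction of the signal energy. The Python example below (using PyWavelets) is only a generic illustration in the spirit of the abstract, not the authors' VET procedure, and the wavelet, energy fraction and test curve are arbitrary choices.

      # Minimal sketch: keep the largest wavelet coefficients up to a target energy fraction.
      import numpy as np
      import pywt

      def reduce_curve(signal, wavelet="db4", keep_energy=0.99):
          flat, slices = pywt.coeffs_to_array(pywt.wavedec(signal, wavelet))
          order = np.argsort(np.abs(flat))[::-1]                  # largest magnitude first
          energy = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
          n_keep = int(np.searchsorted(energy, keep_energy)) + 1
          reduced = np.zeros_like(flat)
          reduced[order[:n_keep]] = flat[order[:n_keep]]
          coeffs = pywt.array_to_coeffs(reduced, slices, output_format="wavedec")
          return n_keep, pywt.waverec(coeffs, wavelet)[: signal.size]

      rng = np.random.default_rng(6)
      t = np.linspace(0.0, 1.0, 1024)
      curve = np.sin(8 * np.pi * t) + 0.3 * (t > 0.5) + 0.05 * rng.standard_normal(t.size)
      n_keep, rec = reduce_curve(curve)
      print("kept", n_keep, "of", curve.size, "coefficients; max error",
            float(np.max(np.abs(rec - curve))))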

  11. Improved Discrete Approximation of Laplacian of Gaussian

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr.

    2004-01-01

    An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that, without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of computation that must be performed in subsequent correlation processing (e.g., correlations between images in a stereoscopic pair for determining distances or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
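
    As a point of reference for the operation being approximated, the short Python sketch below applies a standard Laplacian-of-Gaussian filter and then binarizes the result by sign, the kind of 1-bit output the article describes feeding into correlation processing. It uses SciPy's stock filter, not the improved discrete approximation itself, and the frame and sigma are placeholders.

      # Minimal sketch: LOG filtering followed by binarization by sign.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(7)
      frame = rng.random((480, 640))                 # stand-in for a digitized video frame
      log = ndimage.gaussian_laplace(frame, sigma=2.0)
      binary = (log > 0).astype(np.uint8)            # 1-bit image for cheap correlation
      print(float(binary.mean()))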

  12. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  13. DInSAR time series generation within a cloud computing environment: from ERS to Sentinel-1 scenario

    NASA Astrophysics Data System (ADS)

    Casu, Francesco; Elefante, Stefano; Imperatore, Pasquale; Lanari, Riccardo; Manunta, Michele; Zinno, Ivana; Mathot, Emmanuel; Brito, Fabrice; Farres, Jordi; Lengert, Wolfgang

    2013-04-01

    One of the techniques that will benefit most strongly from the advent of the Sentinel-1 system is Differential SAR Interferometry (DInSAR), which has been successfully demonstrated to be an effective tool for detecting and monitoring ground displacements with centimetre accuracy. The geoscience communities (volcanology, seismology, …), as well as those involved in hazard monitoring and risk mitigation, make extensive use of the DInSAR technique and will take advantage of the huge amount of SAR data acquired by Sentinel-1. Indeed, such information will permit the generation of Earth's surface displacement maps and time series both over large areas and over long time spans. However, the issue of managing, processing and analysing the large Sentinel data stream is envisaged by the scientific community to be a major bottleneck, particularly during crisis phases. The emerging need to create a common ecosystem in which data, results and processing tools are shared is seen as a successful way to address this problem and to contribute to the spreading of information and knowledge. The Supersites initiative as well as the ESA SuperSites Exploitation Platform (SSEP) and the ESA Cloud Computing Operational Pilot (CIOP) projects provide effective answers to this need and are pushing towards the development of such an ecosystem. It is clear that all the existing tools for querying, processing and analysing SAR data must not only be updated to manage the large data stream of the Sentinel-1 satellite, but also be reorganized to respond quickly to simultaneous and highly demanding user requests, particularly during emergency situations. This translates into the automatic and unsupervised processing of large amounts of data as well as the availability of scalable, widely accessible and high-performance computing capabilities. A cloud computing environment permits all of these objectives to be achieved, particularly in the case of spikes and peaks in processing-resource requests linked to disaster events. This work presents a parallel computational model for the widely used DInSAR algorithm known as the Small BAseline Subset (SBAS), which has been implemented within the cloud computing environment provided by the ESA-CIOP platform. This activity has resulted in a scalable, unsupervised, portable, and widely accessible (through a web portal) parallel DInSAR computational tool. The SBAS algorithm has been redesigned and implemented within a parallel system environment, i.e., in a form that allows us to benefit from multiple processing units. This required devising a parallel version of the SBAS algorithm and its subsequent implementation, implying additional complexity in algorithm design and efficient multiprocessor programming, with the final aim of optimizing parallel performance. Although the presented algorithm has been designed to work with Sentinel-1 data, it can also process other satellite SAR data (ERS, ENVISAT, CSK, TSX, ALOS). Indeed, the performance of the implemented parallel SBAS version has been tested on the full ASAR archive (64 acquisitions) acquired over the Napoli Bay, a volcanic and densely urbanized area in Southern Italy. The full processing - from the raw data download to the generation of DInSAR time series - was carried out on 4 nodes, each with 2 cores and 16 GB of RAM, and took about 36 hours, compared with about 135 hours for the sequential version. Extensive analysis on other test areas significant from a DInSAR and geophysical viewpoint will be presented. Finally, a preliminary performance evaluation of the presented approach within the Sentinel-1 scenario will be provided.

  14. Processing, Cataloguing and Distribution of Uas Images in Near Real Time

    NASA Astrophysics Data System (ADS)

    Runkel, I.

    2013-08-01

    Why are UAS generating such hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture is preserved up to the end of the processing chain, all intermediate steps such as data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution, and this is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device that is hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format, and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, that is the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner and can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single orthoimages or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows such as change detection layers can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server; the image analyst has no data and no software on his local computer. This workflow has proven to be fast, stable and accurate. It is designed to support time-critical applications for security demands - the images can be checked and interpreted in near real time. For sensitive areas it offers the possibility to inform remote decision makers or interpretation experts in order to provide them with situational awareness, wherever they are. For monitoring and inspection tasks it speeds up the process of data capture and data interpretation. The fully automated workflow of data pre-processing, data georeferencing, data cataloguing and data dissemination in near real time was developed based on the Intergraph products ERDAS IMAGINE, ERDAS APOLLO and GEOSYSTEMS METAmorph!IT. It is offered as an adaptable solution by GEOSYSTEMS GmbH.

  15. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing adequate maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing the subsequent computing load, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and self-priming centrifugal pumps, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246
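
    The first stage of the pipeline, turning a vibration signal into a bi-spectrum map, can be sketched with a direct segment-averaged estimator, as below; the SURF, t-SNE, and probabilistic-neural-network stages are not reproduced, and the test signal, segment length, and frequency range are illustrative assumptions.

      # Direct (segment-averaged) bispectrum estimate |B(f1, f2)| = |E[X(f1) X(f2) X*(f1+f2)]|.
      import numpy as np

      def bispectrum(x, seg_len=256, n_freq=64):
          """Average the triple product of windowed segment FFTs over non-overlapping segments."""
          idx = np.arange(n_freq)
          B = np.zeros((n_freq, n_freq), dtype=complex)
          n_seg = len(x) // seg_len
          for k in range(n_seg):
              seg = x[k * seg_len:(k + 1) * seg_len] * np.hanning(seg_len)
              X = np.fft.fft(seg)
              B += np.outer(X[idx], X[idx]) * np.conj(X[idx[:, None] + idx[None, :]])
          return np.abs(B) / n_seg

      rng = np.random.default_rng(0)
      fs = 2048
      t = np.arange(0, 8, 1 / fs)
      p1, p2 = 0.3, 1.1
      # Quadratically phase-coupled tones at 64 Hz and 96 Hz (sum tone at 160 Hz) give a
      # bispectral peak near the bin pair (8, 12) for 256-sample segments at fs = 2048 Hz.
      x = (np.sin(2 * np.pi * 64 * t + p1) + np.sin(2 * np.pi * 96 * t + p2)
           + 0.5 * np.sin(2 * np.pi * 160 * t + p1 + p2) + 0.3 * rng.standard_normal(t.size))
      B = bispectrum(x)
      print("bispectrum map shape:", B.shape, "peak at bins:", np.unravel_index(B.argmax(), B.shape))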

  16. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
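
    As a toy illustration of the kind of map-reduce aggregation discussed here (not the SAM/SWAT implementation nor any particular database's API), the following sketch maps invented raw test records to (site, service) keys and reduces them to availability fractions in plain Python.

      # Toy map-reduce over monitoring records: availability = passed tests / all tests.
      from collections import defaultdict

      records = [  # (site, service, status) -- invented sample data
          ("SITE-A", "CE", "OK"), ("SITE-A", "CE", "CRITICAL"),
          ("SITE-A", "SRM", "OK"), ("SITE-B", "CE", "OK"), ("SITE-B", "CE", "OK"),
      ]

      def map_phase(record):
          site, service, status = record
          return ((site, service), 1 if status == "OK" else 0)

      def reduce_phase(pairs):
          totals = defaultdict(lambda: [0, 0])          # key -> [passed, total]
          for key, passed in pairs:
              totals[key][0] += passed
              totals[key][1] += 1
          return {key: passed / total for key, (passed, total) in totals.items()}

      availability = reduce_phase(map(map_phase, records))
      for (site, service), value in sorted(availability.items()):
          print(f"{site}/{service}: {value:.2f}")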

  17. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing adequate maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing the subsequent computing load, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and self-priming centrifugal pumps, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  18. microRNAs Databases: Developmental Methodologies, Structural and Functional Annotations.

    PubMed

    Singh, Nagendra Kumar

    2017-09-01

    microRNA (miRNA) is an endogenous and evolutionarily conserved non-coding RNA involved in post-transcriptional processes as a gene repressor and in mRNA cleavage through formation of the RNA-induced silencing complex (RISC). In the RISC, the miRNA binds its target mRNA through complementary base pairing, together with the Argonaute protein complex, causing gene repression or endonucleolytic cleavage of the mRNA, a process implicated in many diseases and syndromes. After the discovery of the miRNAs lin-4 and let-7, large numbers of miRNAs were subsequently discovered in various biological and metabolic processes by low-throughput and high-throughput experimental techniques along with computational approaches. miRNAs are important non-coding RNAs for understanding the complex biological phenomena of organisms because they control gene regulation. This paper reviews miRNA databases, with structural and functional annotations, developed by various researchers. These databases contain structural and functional information on animal, plant and virus miRNAs, including miRNA-associated diseases, stress resistance in plants, the biological processes miRNAs take part in, the effects of miRNA interactions on drugs and the environment, the effects of variants on miRNAs, miRNA gene expression analysis, miRNA sequences and miRNA structures. This review focuses on the developmental methodology of miRNA databases, such as the computational tools and methods used to extract miRNA annotations from different resources or through experiment. This study also discusses the efficiency of the user interface design of each database, along with its current entries and miRNA annotations (pathways, gene ontology, disease ontology, etc.). An integrated schematic diagram of the construction process for these databases is also drawn, along with tabular and graphical comparisons of the various types of entries in different databases. The aim of this paper is to present the importance of miRNA-related resources in a single place.

  19. Effects of a history of differential reinforcement on preference for choice.

    PubMed

    Karsina, Allen; Thompson, Rachel H; Rodriguez, Nicole M

    2011-03-01

    The effects of a history of differential reinforcement for selecting a free-choice versus a restricted-choice stimulus arrangement on the subsequent responding of 7 undergraduates in a computer-based game of chance were examined using a concurrent-chains arrangement and a multiple-baseline-across-participants design. In the free-choice arrangement, participants selected three numbers, in any order, from an array of eight numbers presented on the computer screen. In the restricted-choice arrangement, participants selected the order of three numbers preselected from the array of eight by a computer program. In initial sessions, all participants demonstrated no consistent preference or preference for restricted choice. Differential reinforcement of free-choice selections resulted in increased preference for free choice immediately and in subsequent sessions in the absence of programmed differential outcomes. For 5 participants, changes in preference for choice were both robust and lasting, suggesting that a history of differential reinforcement for choice may affect preference for choice.

  20. Effects of a History of Differential Reinforcement on Preference for Choice

    PubMed Central

    Karsina, Allen; Thompson, Rachel H; Rodriguez, Nicole M

    2011-01-01

    The effects of a history of differential reinforcement for selecting a free-choice versus a restricted-choice stimulus arrangement on the subsequent responding of 7 undergraduates in a computer-based game of chance were examined using a concurrent-chains arrangement and a multiple-baseline-across-participants design. In the free-choice arrangement, participants selected three numbers, in any order, from an array of eight numbers presented on the computer screen. In the restricted-choice arrangement, participants selected the order of three numbers preselected from the array of eight by a computer program. In initial sessions, all participants demonstrated no consistent preference or preference for restricted choice. Differential reinforcement of free-choice selections resulted in increased preference for free choice immediately and in subsequent sessions in the absence of programmed differential outcomes. For 5 participants, changes in preference for choice were both robust and lasting, suggesting that a history of differential reinforcement for choice may affect preference for choice. PMID:21541125

  1. A novel method for landslide displacement prediction by integrating advanced computational intelligence algorithms.

    PubMed

    Zhou, Chao; Yin, Kunlong; Cao, Ying; Ahmed, Bayes; Fu, Xiaolin

    2018-05-08

    Landslide displacement prediction is considered an essential component of developing early warning systems. Conventional forecasting methods require enormous amounts of monitoring data for modelling, which limits their application. To conduct accurate displacement prediction with limited data, a novel method is proposed and applied that integrates three computational intelligence algorithms, namely the wavelet transform (WT), the artificial bee colony (ABC) algorithm, and the kernel-based extreme learning machine (KELM). First, the total displacement was decomposed into several sub-sequences with different frequencies using the WT. Next, each sub-sequence was predicted separately by a KELM whose parameters were optimized by the ABC. Finally, the predicted total displacement was obtained by adding all the predicted sub-sequences. The Shuping landslide in the Three Gorges Reservoir area in China was taken as a case study. The performance of the new method was compared with the WT-ELM, ABC-KELM, ELM, and support vector machine (SVM) methods. Results show that the prediction accuracy can be improved by decomposing the total displacement into sub-sequences of various frequencies and predicting them separately. The ABC-KELM algorithm shows the highest prediction capacity, followed by the ELM and SVM. Overall, the proposed method achieved excellent performance in terms of both accuracy and stability.
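
    The decompose-predict-recombine idea can be sketched compactly as below, assuming a simple two-component decomposition (moving-average trend plus residual) in place of the wavelet transform and a closed-form RBF kernel regressor with fixed hyperparameters in place of the ABC-optimized KELM; the displacement series is synthetic.

      # Decompose a displacement series, predict each component from lagged values with
      # a closed-form RBF kernel regressor (KELM-style), and sum the component forecasts.
      import numpy as np

      def rbf(A, B, gamma=0.5):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def kernel_fit_predict(X_tr, y_tr, X_te, C=10.0):
          K = rbf(X_tr, X_tr)
          beta = np.linalg.solve(K + np.eye(len(X_tr)) / C, y_tr)   # (K + I/C)^-1 y
          return rbf(X_te, X_tr) @ beta

      def lagged(series, lags):
          X = np.array([series[i - lags:i] for i in range(lags, len(series))])
          return X, series[lags:]

      rng = np.random.default_rng(0)
      t = np.arange(200, dtype=float)
      displacement = 0.05 * t + 2 * np.sin(2 * np.pi * t / 30) + 0.3 * rng.standard_normal(t.size)

      # Two-component decomposition standing in for the wavelet transform.
      trend = np.convolve(displacement, np.ones(15) / 15, mode="same")
      residual = displacement - trend

      lags, split = 5, 160
      total_pred = np.zeros(len(t) - split)
      for comp in (trend, residual):
          X, y = lagged(comp, lags)
          X_tr, y_tr = X[:split - lags], y[:split - lags]
          total_pred += kernel_fit_predict(X_tr, y_tr, X[split - lags:])

      rmse = np.sqrt(np.mean((total_pred - displacement[split:]) ** 2))
      print("test RMSE:", round(rmse, 3))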

  2. Development of Metal Plate with Internal Structure Utilizing the Metal Injection Molding (MIM) Process.

    PubMed

    Shin, Kwangho; Heo, Youngmoo; Park, Hyungpil; Chang, Sungho; Rhee, Byungohk

    2013-12-12

    In this study, we focus on making a double-sided metal plate with an internal structure, such as honeycomb. The stainless steel powder was used in the metal injection molding (MIM) process. The preliminary studies were carried out for the measurement of the viscosity of the stainless steel feedstock and for the prediction of the filling behavior through Computer Aided Engineering (CAE) simulation. PE (high density polyethylene (HDPE) and low density polyethylene (LDPE)) and polypropylene (PP) resins were used to make the sacrificed insert with a honeycomb structure using a plastic injection molding process. Additionally, these sacrificed insert parts were inserted in the metal injection mold, and the metal injection molding process was carried out to build a green part with rectangular shape. Subsequently, debinding and sintering processes were adopted to remove the sacrificed polymer insert. The insert had a suitable rigidity that was able to endure the filling pressure. The core shift analysis was conducted to predict the deformation of the insert part. The 17-4PH feedstock with a low melting temperature was applied. The glass transition temperature of the sacrificed polymer insert would be of a high grade, and this insert should be maintained during the MIM process. Through these processes, a square metal plate with a honeycomb structure was made.

  3. Development of Metal Plate with Internal Structure Utilizing the Metal Injection Molding (MIM) Process

    PubMed Central

    Shin, Kwangho; Heo, Youngmoo; Park, Hyungpil; Chang, Sungho; Rhee, Byungohk

    2013-01-01

    In this study, we focus on making a double-sided metal plate with an internal structure, such as honeycomb. The stainless steel powder was used in the metal injection molding (MIM) process. The preliminary studies were carried out for the measurement of the viscosity of the stainless steel feedstock and for the prediction of the filling behavior through Computer Aided Engineering (CAE) simulation. PE (high density polyethylene (HDPE) and low density polyethylene (LDPE)) and polypropylene (PP) resins were used to make the sacrificed insert with a honeycomb structure using a plastic injection molding process. Additionally, these sacrificed insert parts were inserted in the metal injection mold, and the metal injection molding process was carried out to build a green part with rectangular shape. Subsequently, debinding and sintering processes were adopted to remove the sacrificed polymer insert. The insert had a suitable rigidity that was able to endure the filling pressure. The core shift analysis was conducted to predict the deformation of the insert part. The 17-4PH feedstock with a low melting temperature was applied. The glass transition temperature of the sacrificed polymer insert would be of a high grade, and this insert should be maintained during the MIM process. Through these processes, a square metal plate with a honeycomb structure was made. PMID:28788427

  4. A discrete element and ray framework for rapid simulation of acoustical dispersion of microscale particulate agglomerations

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.

    2016-03-01

    In industry, particle-laden fluids, such as particle-functionalized inks, are constructed by adding fine-scale particles to a liquid solution, in order to achieve desired overall properties in both liquid and (cured) solid states. However, oftentimes undesirable particulate agglomerations arise due to some form of mutual-attraction stemming from near-field forces, stray electrostatic charges, process ionization and mechanical adhesion. For proper operation of industrial processes involving particle-laden fluids, it is important to carefully breakup and disperse these agglomerations. One approach is to target high-frequency acoustical pressure-pulses to breakup such agglomerations. The objective of this paper is to develop a computational model and corresponding solution algorithm to enable rapid simulation of the effect of acoustical pulses on an agglomeration composed of a collection of discrete particles. Because of the complex agglomeration microstructure, containing gaps and interfaces, this type of system is extremely difficult to mesh and simulate using continuum-based methods, such as the finite difference time domain or the finite element method. Accordingly, a computationally-amenable discrete element/discrete ray model is developed which captures the primary physical events in this process, such as the reflection and absorption of acoustical energy, and the induced forces on the particulate microstructure. The approach utilizes a staggered, iterative solution scheme to calculate the power transfer from the acoustical pulse to the particles and the subsequent changes (breakup) of the pulse due to the particles. Three-dimensional examples are provided to illustrate the approach.

  5. Computational prediction of the refinement of oxide agglomerates in a physical conditioning process for molten aluminium alloy

    NASA Astrophysics Data System (ADS)

    Tong, M.; Jagarlapudi, S. C.; Patel, J. B.; Stone, I. C.; Fan, Z.; Browne, D. J.

    2015-06-01

    Physically conditioning molten scrap aluminium alloys using high shear processing (HSP) was recently found to be a promising technology for purification of contaminated alloys. HSP refines the solid oxide agglomerates in molten alloys, so that they can act as sites for the nucleation of Fe-rich intermetallic phases which can subsequently be removed by the downstream de-drossing process. In this paper, a computational model for predicting the evolution of the size of oxide clusters during HSP is presented. We used CFD to predict the macroscopic flow features of the melt, and the resultant field predictions of temperature and melt shear rate were transferred to a population balance model (PBM) as its key inputs. The PBM is a macroscopic model that formulates the microscopic agglomeration and breakage of a population of a dispersed phase. Although it has been widely used to study conventional deoxidation of liquid metal, this is the first time that a PBM has been used to simulate the melt conditioning process within a rotor/stator HSP device. We employed a method which discretizes the continuous size profile of the dispersed phase into a collection of discrete size bins, to solve the governing population balance equation for the size of agglomerates. A finite volume method was used to solve the continuity equation, the energy equation and the momentum equation. The overall computation was implemented mainly using the FLUENT module of ANSYS. The simulations showed that there is a relatively high melt shear rate between the stator and the sweeping tips of the rotor blades. This high shear rate leads directly to significant fragmentation of the initially large oxide aggregates. Because the process of agglomeration is significantly slower than the breakage processes at the beginning of HSP, the mean size of oxide clusters decreases very rapidly. As the process of agglomeration gradually balances the process of breakage, the mean size of oxide clusters converges to a steady value. The model enables formulation of the quantitative relationship between the macroscopic flow features of the liquid metal and the change in size of the dispersed oxide clusters during HSP. It predicted the variation in size of the dispersed phase with operational parameters (including the geometry and, particularly, the speed of the rotor), which is of direct use to experimentalists optimising the design of the HSP device and its implementation.
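
    A toy sectional population balance sketch is given below, with illustrative shear-dependent breakage and agglomeration rates over discrete size bins; it aims only at the qualitative behaviour described in the record (rapid initial refinement of large agglomerates followed by a levelling-off of the mean size), not the authors' CFD-coupled FLUENT model or its kernels.

      # Toy discretized population balance: bins of cluster size (powers of two),
      # breakage moves counts down one bin, agglomeration moves counts up one bin.
      import numpy as np

      n_bins = 20
      sizes = 2.0 ** np.arange(n_bins)          # representative cluster sizes per bin
      counts = np.zeros(n_bins)
      counts[-1] = 1.0                          # start with large agglomerates only

      shear = 50.0                              # illustrative melt shear rate
      k_break = 0.02 * shear                    # breakage rate grows with shear
      k_aggl = 0.5                              # agglomeration rate coefficient
      dt, n_steps = 5e-3, 20000

      mean_size = []
      for _ in range(n_steps):
          breakage = k_break * counts                    # clusters leaving bin i downward
          agglomeration = k_aggl * counts ** 2           # clusters leaving bin i upward
          d = np.zeros(n_bins)
          d[:-1] += breakage[1:] * 2.0                   # one break yields two half-size clusters
          d -= breakage
          d[0] += breakage[0]                            # smallest bin cannot break further
          d[1:] += agglomeration[:-1] / 2.0              # two clusters merge into one larger
          d -= agglomeration
          d[-1] += agglomeration[-1]                     # largest bin cannot grow further
          counts = np.maximum(counts + dt * d, 0.0)
          mean_size.append((sizes * counts).sum() / counts.sum())

      print("mean cluster size: start", mean_size[0], "end", round(mean_size[-1], 1))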

  6. F-18-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Appearance of Extramedullary Hematopoesis in a Case of Primary Myelofibrosis

    PubMed Central

    Mukherjee, Anirban; Bal, Chandrasekhar; Tripathi, Madhavi; Das, Chandan Jyoti; Shamim, Shamim Ahmed

    2017-01-01

    A 44-year-old female with known primary myelofibrosis presented with shortness of breath. High Resolution Computed Tomography thorax revealed large heterogeneously enhancing extraparenchymal soft tissue density mass involving bilateral lung fields. F-18-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography revealed mildly FDG avid soft tissue density mass with specks of calcification involving bilateral lung fields, liver, and spleen. Subsequent histopathologic evaluation from the right lung mass was suggestive of extramedullary hematopoesis. PMID:28533647

  7. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  8. Optimization of thermal protection systems for the space vehicle. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development of the computational techniques for the design optimization of thermal protection systems for the space shuttle vehicle is discussed. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in FORTRAN IV for the CDC 6400 computer, but it was subsequently converted to the FORTRAN V language for use on the Univac 1108.

  9. Comparison of two matrix data structures for advanced CSM testbed applications

    NASA Technical Reports Server (NTRS)

    Regelbrugge, M. E.; Brogan, F. A.; Nour-Omid, B.; Rankin, C. C.; Wright, M. A.

    1989-01-01

    The first section describes data storage schemes presently used by the Computational Structural Mechanics (CSM) testbed sparse matrix facilities and similar skyline (profile) matrix facilities. The second section contains a discussion of certain features required for the implementation of particular advanced CSM algorithms, and how these features might be incorporated into the data storage schemes described previously. The third section presents recommendations, based on the discussions of the prior sections, for directing future CSM testbed development to provide necessary matrix facilities for advanced algorithm implementation and use. The objective is to lend insight into the matrix structures discussed and to help explain the process of evaluating alternative matrix data structures and utilities for subsequent use in the CSM testbed.
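
    To make the storage layouts concrete, here is a generic, textbook-style sketch of a skyline (profile) representation of a symmetric matrix, in which each column stores only the entries from its first structurally nonzero row down to the diagonal, plus a pointer array; this is an assumed illustration, not the CSM testbed's actual data structures.

      # Generic skyline (profile) storage of a symmetric matrix: per column, keep the
      # entries from the first nonzero row down to the diagonal in one flat array.
      import numpy as np

      A = np.array([[4.0, 1.0, 0.0, 0.0],
                    [1.0, 5.0, 2.0, 0.0],
                    [0.0, 2.0, 6.0, 3.0],
                    [0.0, 0.0, 3.0, 7.0]])

      def to_skyline(A):
          n = A.shape[0]
          values, pointers, first_row = [], [0], []   # pointers[j] = start of column j
          for j in range(n):
              rows = np.nonzero(A[:j + 1, j])[0]
              top = rows[0] if rows.size else j
              first_row.append(top)
              values.extend(A[top:j + 1, j])          # column segment: skyline top to diagonal
              pointers.append(len(values))
          return np.array(values), np.array(pointers), np.array(first_row)

      def skyline_get(values, pointers, first_row, i, j):
          """Return A[i, j] for the symmetric matrix stored in skyline form."""
          if i > j:
              i, j = j, i                     # symmetry: only the upper profile is stored
          if i < first_row[j]:
              return 0.0                      # outside the profile: structurally zero
          return values[pointers[j] + (i - first_row[j])]

      vals, ptrs, tops = to_skyline(A)
      print("stored entries:", len(vals), "of", A.size)
      print(skyline_get(vals, ptrs, tops, 3, 2), skyline_get(vals, ptrs, tops, 0, 3))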

  10. On splice site prediction using weight array models: a comparison of smoothing techniques

    NASA Astrophysics Data System (ADS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-11-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
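
    A minimal sketch of a first-order weight array model with pseudocount smoothing is given below, using a tiny invented set of aligned training sites; the Gaussian smoothing procedure compared in the record is not reproduced.

      # Weight array model (order 1): position-specific conditional nucleotide
      # probabilities estimated from aligned training sites, smoothed with pseudocounts.
      import numpy as np

      ALPHABET = "ACGT"
      IDX = {b: i for i, b in enumerate(ALPHABET)}

      train = ["CAGGTAAGT", "AAGGTGAGT", "CAGGTAAGA", "TAGGTAAGG"]   # toy aligned sites
      L = len(train[0])
      pseudo = 0.5

      # counts[p, a, b] = occurrences of nucleotide b at position p given a at p-1.
      counts = np.full((L, 4, 4), pseudo)
      start = np.full(4, pseudo)                    # unconditional counts for position 0
      for s in train:
          start[IDX[s[0]]] += 1
          for p in range(1, L):
              counts[p, IDX[s[p - 1]], IDX[s[p]]] += 1

      start_prob = start / start.sum()
      cond_prob = counts / counts.sum(axis=2, keepdims=True)

      def log_score(seq):
          """Log-likelihood of a candidate site under the smoothed weight array model."""
          score = np.log(start_prob[IDX[seq[0]]])
          for p in range(1, L):
              score += np.log(cond_prob[p, IDX[seq[p - 1]], IDX[seq[p]]])
          return score

      print(log_score("CAGGTAAGT"), log_score("TTTTTTTTT"))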

  11. Changes and challenges in the Software Engineering Laboratory

    NASA Technical Reports Server (NTRS)

    Pajerski, Rose

    1994-01-01

    Since 1976, the Software Engineering Laboratory (SEL) has been dedicated to understanding and improving the way in which one NASA organization, the Flight Dynamics Division (FDD), develops, maintains, and manages complex flight dynamics systems. The SEL is composed of three member organizations: NASA/GSFC, the University of Maryland, and Computer Sciences Corporation. During the past 18 years, the SEL's overall goal has remained the same: to improve the FDD's software products and processes in a measured manner. This requires that each development and maintenance effort be viewed, in part, as a SEL experiment which examines a specific technology or builds a model of interest for use on subsequent efforts. The SEL has undertaken many technology studies while developing operational support systems for numerous NASA spacecraft missions.

  12. Preliminary Sizing and Performance Evaluation of Supersonic Cruise Aircraft

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1976-01-01

    The basic processes of a method that performs sizing operations on a baseline aircraft and determines their subsequent effects on aerodynamics, propulsion, weights, and mission performance are described. The input requirements of the associated computer program are defined and its output listings explained. Results obtained by applying the method to an advanced supersonic technology concept are discussed. These results include the effects of wing loading, thrust-to-weight ratio, and technology improvements on range performance, and possible gains in both range and payload capability that become available through growth versions of the baseline aircraft. The results of an in-depth contractual study that confirm the range gain predicted for a particular combination of wing loading and thrust-to-weight ratio are also included.

  13. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  14. Early classification of pathological heartbeats on wireless body sensor nodes.

    PubMed

    Braojos, Rubén; Beretta, Ivan; Ansaloni, Giovanni; Atienza, David

    2014-11-27

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify in the early stages which parts of an ECG are critical for the diagnosis and, only in these cases, activate on demand more detailed and computationally intensive analysis algorithms. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the heartbeat representation dimensionality, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with a minimal run time and memory overhead. We prove that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with respect to other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections.
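
    The dimensionality-reduction step discussed above can be sketched with a Gaussian random projection of heartbeat feature vectors, followed here by a nearest-centroid decision as a simple stand-in for the neuro-fuzzy classifier; the beats, dimensions, and split are synthetic assumptions.

      # Random projection of heartbeat vectors from d to k dimensions, then a
      # nearest-centroid normal/abnormal decision (stand-in for the neuro-fuzzy stage).
      import numpy as np

      rng = np.random.default_rng(0)
      d, k, n_per_class = 200, 16, 300

      # Synthetic "beats": two classes with different mean waveforms plus noise.
      normal = rng.standard_normal((n_per_class, d)) + np.sin(np.linspace(0, 6, d))
      abnormal = rng.standard_normal((n_per_class, d)) + 1.5 * np.sin(np.linspace(0, 9, d))
      X = np.vstack([normal, abnormal])
      y = np.array([0] * n_per_class + [1] * n_per_class)

      # Gaussian random projection: k x d matrix, scaled so distances are roughly preserved.
      P = rng.standard_normal((k, d)) / np.sqrt(k)
      Z = X @ P.T

      # Train/test split and nearest-centroid classification in the projected space.
      train = rng.permutation(len(y))[: len(y) // 2]
      test = np.setdiff1d(np.arange(len(y)), train)
      centroids = np.array([Z[train][y[train] == c].mean(axis=0) for c in (0, 1)])
      pred = np.argmin(((Z[test][:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
      print("test accuracy:", np.mean(pred == y[test]))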

  15. Early Classification of Pathological Heartbeats on Wireless Body Sensor Nodes

    PubMed Central

    Braojos, Rubén; Beretta, Ivan; Ansaloni, Giovanni; Atienza, David

    2014-01-01

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify in the early stages which parts of an ECG are critical for the diagnosis and, only in these cases, activate on demand more detailed and computationally intensive analysis algorithms. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the heartbeat representation dimensionality, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with a minimal run time and memory overhead. We prove that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with respect to other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections. PMID:25436654

  16. Brain-computer interface controlled functional electrical stimulation device for foot drop due to stroke.

    PubMed

    Do, An H; Wang, Po T; King, Christine E; Schombs, Andrew; Cramer, Steven C; Nenadic, Zoran

    2012-01-01

    Gait impairment due to foot drop is a common outcome of stroke, and current physiotherapy provides only limited restoration of gait function. Gait function can also be aided by orthoses, but these devices may be cumbersome and their benefits disappear upon removal. Hence, new neuro-rehabilitative therapies are being sought to generate permanent improvements in motor function beyond those of conventional physiotherapies through positive neural plasticity processes. Here, the authors describe an electroencephalogram (EEG) based brain-computer interface (BCI) controlled functional electrical stimulation (FES) system that enabled a stroke subject with foot drop to re-establish foot dorsiflexion. To this end, a prediction model was generated from EEG data collected as the subject alternated between periods of idling and attempted foot dorsiflexion. This prediction model was then used to classify online EEG data into either "idling" or "dorsiflexion" states, and this information was subsequently used to control an FES device to elicit effective foot dorsiflexion. The performance of the system was assessed in online sessions, where the subject was prompted by a computer to alternate between periods of idling and dorsiflexion. The subject demonstrated purposeful operation of the BCI-FES system, with an average cross-correlation between instructional cues and BCI-FES response of 0.60 over 3 sessions. In addition, analysis of the prediction model indicated that non-classical brain areas were activated in the process, suggesting post-stroke cortical re-organization. In the future, these systems may be explored as a potential therapeutic tool that can help promote positive plasticity and neural repair in chronic stroke patients.

  17. Melody Alignment and Similarity Metric for Content-Based Music Retrieval

    NASA Astrophysics Data System (ADS)

    Zhu, Yongwei; Kankanhalli, Mohan S.

    2003-01-01

    Music query-by-humming has attracted much research interest recently. It is a challenging problem since the hummed query inevitably contains much variation and inaccuracy. Furthermore, the similarity computation between the query tune and the reference melody is not easy due to the difficulty of ensuring proper alignment. This is because the query tune can be rendered at an unknown speed and is usually an arbitrary subsequence of the target reference melody. Many of the previous methods, which adopt note segmentation and string matching, suffer drastically from errors in the note segmentation, which affect retrieval accuracy and efficiency. Some methods solve the alignment issue by controlling the speed of the articulation of queries, which is inconvenient because it forces users to hum along with a metronome. Some other techniques introduce arbitrary rescaling in time, but this is computationally very inefficient. In this paper, we introduce a melody alignment technique that addresses the robustness and efficiency issues. We also present a new melody similarity metric, which is applied directly to the melody contours of the query data. This approach cleanly separates alignment and similarity measurement in the search process. We show how to robustly and efficiently align the query melody with the reference melodies and how to measure the similarity subsequently. We have carried out extensive experiments. Our melody alignment method can reduce the matching candidates to 1.7% with a 95% correct alignment rate. The overall retrieval system achieved 80% recall in the top-10 rank list. The results demonstrate the robustness and effectiveness of the proposed methods.

  18. Performing process migration with allreduce operations

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Wallenfelt, Brian Paul

    2010-12-14

    Compute nodes perform allreduce operations that swap processes at nodes. A first allreduce operation generates a first result and uses a first process from a first compute node, a second process from a second compute node, and zeros from other compute nodes. The first compute node replaces the first process with the first result. A second allreduce operation generates a second result and uses the first result from the first compute node, the second process from the second compute node, and zeros from the others. The second compute node replaces the second process with the second result, which is the first process. A third allreduce operation generates a third result and uses the first result from the first compute node, the second result from the second compute node, and zeros from the others. The first compute node replaces the first result with the third result, which is the second process.
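
    The three-step swap can be reproduced with a toy allreduce, assuming a bitwise-XOR reduction; under XOR the three operations become the classic XOR swap, which matches the intermediate results stated in the record. The sketch simulates the reduction over per-node contributions rather than using a real MPI library.

      # Simulate swapping two "process images" between nodes with three XOR allreduces.
      import numpy as np

      def allreduce_xor(contributions):
          """Toy allreduce: XOR-combine every node's contribution; same result everywhere."""
          result = np.zeros_like(contributions[0])
          for c in contributions:
              result ^= c
          return result

      n_nodes = 4
      zeros = np.zeros(8, dtype=np.uint8)
      p1 = np.arange(8, dtype=np.uint8)            # process image held by node 0
      p2 = np.arange(100, 108, dtype=np.uint8)     # process image held by node 1

      # Step 1: node 0 contributes p1, node 1 contributes p2, others contribute zeros.
      r1 = allreduce_xor([p1, p2] + [zeros] * (n_nodes - 2))
      node0 = r1                                   # node 0 replaces p1 with r1 (= p1 ^ p2)

      # Step 2: node 0 contributes r1, node 1 contributes p2, others zeros.
      r2 = allreduce_xor([node0, p2] + [zeros] * (n_nodes - 2))
      node1 = r2                                   # node 1 replaces p2 with r2 (= p1)

      # Step 3: node 0 contributes r1, node 1 contributes r2, others zeros.
      r3 = allreduce_xor([node0, node1] + [zeros] * (n_nodes - 2))
      node0 = r3                                   # node 0 replaces r1 with r3 (= p2)

      assert np.array_equal(node0, p2) and np.array_equal(node1, p1)
      print("swapped:", node0, node1)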

  19. Daily online testing in large classes: boosting college performance while reducing achievement gaps.

    PubMed

    Pennebaker, James W; Gosling, Samuel D; Ferrell, Jason D

    2013-01-01

    An in-class computer-based system that included daily online testing was introduced to two large university classes. We examined subsequent improvements in academic performance and reductions in the achievement gap between lower- and upper-middle-class students. Students (N = 901) brought laptop computers to classes and took daily quizzes that provided immediate and personalized feedback. Student performance was compared with the same data for traditional classes taught previously by the same instructors (N = 935). Exam performance was approximately half a letter grade above previous semesters, based on comparisons of identical questions asked in earlier years. Students in the experimental classes also performed better in other classes, both in the semester they took the course and in subsequent semesters. The new system resulted in a 50% reduction in the achievement gap, as measured by grades, among students of different social classes. These findings suggest that frequent consequential quizzing should be used routinely in large lecture courses to improve performance in class and in other concurrent and subsequent courses.

  20. NNLO QCD corrections to associated W H production and H →b b ¯ decay

    NASA Astrophysics Data System (ADS)

    Caola, Fabrizio; Luisoni, Gionata; Melnikov, Kirill; Röntsch, Raoul

    2018-04-01

    We present a computation of the next-to-next-to-leading-order (NNLO) QCD corrections to the production of a Higgs boson in association with a W boson at the LHC and the subsequent decay of the Higgs boson into a b b ¯ pair, treating the b quarks as massless. We consider various kinematic distributions and find significant corrections to observables that resolve the Higgs decay products. We also find that a cut on the transverse momentum of the W boson, important for experimental analyses, may have a significant impact on kinematic distributions and radiative corrections. We show that some of these effects can be adequately described by simulating QCD radiation in Higgs boson decays to b quarks using parton showers. We also describe contributions to Higgs decay to a b b ¯ pair that first appear at NNLO and that were not considered in previous fully differential computations. The calculation of NNLO QCD corrections to production and decay sub-processes is carried out within the nested soft-collinear subtraction scheme presented by some of us earlier this year. We demonstrate that this subtraction scheme performs very well, allowing a computation of the coefficient of the second-order QCD corrections at the level of a few per mill.

  1. Modern Methods for fast generation of digital holograms

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.

    2010-06-01

    With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three-dimensional (3-D) scenes to be acquired by optical means, or generated by numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high-capacity digital storage and wide-band communication technologies also casts light on the emergence of real-time video holographic systems, enabling animated 3-D content to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but at the same time the heavy computation involved imposes difficulty in practical applications. In this paper, a summary of a number of successful accomplishments made recently in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real-time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes an aspect of extending the depth range of the object scene, which is important for the display of large-scale objects.

  2. Exploratory analysis regarding the domain definitions for computer based analytical models

    NASA Astrophysics Data System (ADS)

    Raicu, A.; Oanta, E.; Barhalescu, M.

    2017-08-01

    Our previous computer-based studies dedicated to structural problems using analytical methods defined the composite cross section of a beam as the result of Boolean operations with so-called ‘simple’ shapes. Through generalisation, the class of ‘simple’ shapes was extended to include areas bounded by curves approximated by spline functions and areas approximated as polygons. However, particular definitions lead to particular solutions. In order to rise above these limitations, we conceived a general definition in which cross sections are now treated as calculus domains consisting of several subdomains. The corresponding set of input data uses complex parameterizations. This new vision allows us to naturally assign a general number of attributes to the subdomains. In this way, new phenomena that use map-wise information, such as the equilibrium diagrams of metal alloys, may be modelled. The hierarchy of the input data text files, which use the comma-separated-value format, and their structure are also presented and discussed in the paper. This new approach allows us to reuse the concepts and part of the data processing software instruments already developed. The corresponding software to be developed subsequently will be modularised and generalised in order to be used in upcoming projects that require rapid development of computer-based models.

  3. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012); doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014); doi:10.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
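
    The two-step idea can be illustrated schematically with a toy single-layer 'reflectance' forward model, as below: a coarse lookup table supplies the initial guess, which a local least-squares refinement then polishes. The forward model, its two hypothetical parameters, and the grids are assumptions and do not reproduce the paper's two-layer tissue model or lookup table.

      # Two-step estimation: nearest lookup-table entry as initial guess, then local
      # least-squares refinement (toy forward model, illustrative parameters only).
      import numpy as np
      from scipy.optimize import least_squares

      wavelengths = np.linspace(450, 650, 60)

      def forward_model(params, wl):
          """Toy 'reflectance' spectrum from two hypothetical parameters (not the paper's model)."""
          absorption, slope = params
          x = wl / 500.0
          return np.exp(-absorption * x ** -1.2) + slope * x

      # Step 0: precompute a coarse lookup table over a parameter grid.
      grid_a = np.linspace(0.1, 2.0, 20)
      grid_s = np.linspace(0.0, 0.6, 20)
      table_params = np.array([(a, s) for a in grid_a for s in grid_s])
      table_spectra = np.array([forward_model(p, wavelengths) for p in table_params])

      # Simulated measurement from "true" parameters plus noise.
      true_params = np.array([0.83, 0.27])
      rng = np.random.default_rng(1)
      measured = forward_model(true_params, wavelengths) + 0.005 * rng.standard_normal(wavelengths.size)

      # Step 1: initial estimate = table entry with the smallest spectral residual.
      best = np.argmin(((table_spectra - measured) ** 2).sum(axis=1))
      x0 = table_params[best]

      # Step 2: iterative refinement starting from the lookup-table guess.
      fit = least_squares(lambda p: forward_model(p, wavelengths) - measured, x0,
                          bounds=([0.05, 0.0], [3.0, 1.0]))
      print("initial guess:", x0, "refined:", fit.x, "true:", true_params)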

  4. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Technical Reports Server (NTRS)

    Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew

    2017-01-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement of unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
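
    For reference, fully constrained (non-negative, sum-to-one) abundances for a single pixel are often computed by folding the sum-to-one constraint into an augmented non-negative least-squares problem; the sketch below uses that textbook device with invented endmember spectra and is not the NEX implementation.

      # Fully constrained least-squares unmixing of one pixel: abundances >= 0 and
      # summing to one, via an augmented NNLS system (sum-to-one enforced softly).
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      n_bands, n_endmembers = 6, 3
      E = rng.random((n_bands, n_endmembers))            # invented endmember spectra (S, V, D)

      true_abund = np.array([0.6, 0.3, 0.1])
      pixel = E @ true_abund + 0.01 * rng.standard_normal(n_bands)

      delta = 1e3                                        # weight on the sum-to-one row
      A = np.vstack([E, delta * np.ones((1, n_endmembers))])
      b = np.concatenate([pixel, [delta]])
      abundances, residual = nnls(A, b)

      print("estimated abundances:", np.round(abundances, 3), "sum:", abundances.sum().round(3))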

  5. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement of unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.

  6. 7 CFR 1781.22 - Subsequent loans.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 CFR Part 1781, Resource Conservation and Development (RCD) Loans and Watershed (WS) Loans and Advances, § 1781.22 Subsequent loans: Subsequent loans will be processed in accordance with this part.

  7. A Big Data Approach to Analyzing Market Volatility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that the HPC resource and the techniques for data-intensive sciences can be used to greatly accelerate the computation of an early warning indicator called Volume-synchronized Probability of Informed Trading (VPIN). The test data used in this study contains five and a half years' worth of trading data for about 100 of the most liquid futures contracts, includes about 3 billion trades, and takes 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelizing the computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real time, an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7% averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93% of the cases.
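
    For orientation, the sketch below computes a simplified VPIN following its general definition: equal-volume buckets, bulk-classified buy/sell volume, and a rolling average of the order-flow imbalance. It is not the authors' optimized HPC implementation; the column names, bucket size and window are assumptions, and the last (partial) bucket is kept for simplicity.

    import numpy as np
    import pandas as pd
    from scipy.stats import norm

    def vpin(trades, bucket_volume, window=50):
        # Assign each trade to an (approximately) equal-volume bucket.
        bucket = (trades['volume'].cumsum() // bucket_volume).astype(int)
        # Bulk volume classification: split each trade's volume into buy/sell
        # parts based on the standardized price change.
        dp = trades['price'].diff().fillna(0.0)
        sigma = dp.std()
        sigma = sigma if sigma > 0 else 1.0
        buy_frac = norm.cdf(dp / sigma)
        v_buy = (trades['volume'] * buy_frac).groupby(bucket).sum()
        v_sell = (trades['volume'] * (1.0 - buy_frac)).groupby(bucket).sum()
        # VPIN = rolling mean of the |buy - sell| imbalance per bucket.
        return ((v_buy - v_sell).abs() / bucket_volume).rolling(window).mean()

    # Hypothetical synthetic trade tape.
    rng = np.random.default_rng(0)
    trades = pd.DataFrame({'price': 100 + rng.normal(0, 0.05, 10000).cumsum(),
                           'volume': rng.integers(1, 100, 10000)})
    print(vpin(trades, bucket_volume=5000).dropna().tail())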

  8. Post processing of protein-compound docking for fragment-based drug discovery (FBDD): in-silico structure-based drug screening and ligand-binding pose prediction.

    PubMed

    Fukunishi, Yoshifumi

    2010-01-01

    For fragment-based drug development, both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction. Thus, post-processing of the protein-compound docking is necessary. Recently, several methods for the post-processing of protein-compound docking have been proposed. In FBDD, the compounds are smaller than those for conventional drug screening. This makes it difficult to perform the protein-compound docking calculation. A method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful to reduce the procedures involved in the chemical modification of the hit fragment. Several prediction methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.

  9. Analytic hierarchy process helps select site for limestone quarry expansion in Barbados.

    PubMed

    Dey, Prasanta Kumar; Ramcharan, Eugene K

    2008-09-01

    Site selection is a key activity for quarry expansion to support cement production, and is governed by factors such as resource availability, logistics, costs, and socio-economic-environmental factors. Adequate consideration of all the factors facilitates both industrial productivity and sustainable economic growth. This study illustrates the site selection process that was undertaken for the expansion of limestone quarry operations to support cement production in Barbados. First, alternate sites with adequate resources to support a 25-year development horizon were identified. Second, technical and socio-economic-environmental factors were identified. Third, a database was developed for each site with respect to each factor. Fourth, a hierarchical model in an analytic hierarchy process (AHP) framework was developed. Fifth, the relative ranking of the alternate sites was derived through pairwise comparisons at all levels and through subsequent synthesis of the results across the hierarchy using computer software (Expert Choice). The study reveals that an integrated framework using the AHP can help select a site for the quarry expansion project in Barbados.
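
    The core AHP computation (a priority vector from a pairwise comparison matrix, plus a consistency check) can be sketched as follows. This is a generic illustration, not the Expert Choice software used in the study; the three example criteria and the judgment values are made up.

    import numpy as np

    def ahp_priorities(pairwise):
        # Priority vector = principal eigenvector of the pairwise comparison
        # matrix, normalized to sum to 1; CI is Saaty's consistency index.
        vals, vecs = np.linalg.eig(pairwise)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()
        n = pairwise.shape[0]
        ci = (vals.real[k] - n) / (n - 1)
        return w, ci

    # Hypothetical comparison of three criteria, e.g. resource availability,
    # logistics and cost (values are illustrative only).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    weights, ci = ahp_priorities(A)
    print(weights, ci)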

  10. A preliminary model of work during initial examination and treatment planning appointments.

    PubMed

    Irwin, J Y; Torres-Urquidy, M H; Schleyer, T; Monaco, V

    2009-01-10

    Objective: This study's objective was to formally describe the work process for charting and treatment planning in general dental practice to inform the design of a new clinical computing environment. Methods: Using a process called contextual inquiry, researchers observed 23 comprehensive examination and treatment planning sessions during 14 visits to 12 general US dental offices. For each visit, field notes were analysed and reformulated as formalised models. Subsequently, each model type was consolidated across all offices and visits. Interruptions to the workflow, called breakdowns, were identified. Results: Clinical work during dental examination and treatment planning appointments is a highly collaborative activity involving dentists, hygienists and assistants. Personnel with multiple overlapping roles complete complex multi-step tasks supported by a large and varied collection of equipment, artifacts and technology. Most of the breakdowns were related to technology which interrupted the workflow, caused rework and increased the number of steps in work processes. Conclusion: Current dental software could be significantly improved with regard to its support for communication and collaboration, workflow, information design and presentation, information content, and data entry.

  11. Computational characterization of fracture healing under reduced gravity loading conditions.

    PubMed

    Gadomski, Benjamin C; Lerner, Zachary F; Browning, Raymond C; Easley, Jeremiah T; Palmer, Ross H; Puttlitz, Christian M

    2016-07-01

    The literature is deficient with regard to how the localized mechanical environment of skeletal tissue is altered during reduced gravitational loading and how these alterations affect fracture healing. Thus, a finite element model of the ovine hindlimb was created to characterize the local mechanical environment responsible for the inhibited fracture healing observed under experimental simulated hypogravity conditions. Following convergence and verification studies, hydrostatic pressure and strain within a diaphyseal fracture of the metatarsus were evaluated for models under both 1 and 0.25 g loading environments and compared to results of a related in vivo study. Results of the study suggest that reductions in hydrostatic pressure and strain of the healing fracture for animals exposed to reduced gravitational loading conditions contributed to an inhibited healing process, with animals exposed to the simulated hypogravity environment subsequently initiating an intramembranous bone formation process rather than the typical endochondral ossification healing process experienced by animals healing in a 1 g gravitational environment. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:1206-1215, 2016.

  12. Hydrographic Basins Analysis Using Digital Terrain Modelling

    NASA Astrophysics Data System (ADS)

    Mihaela, Pişleagă; -Minda Codruţa, Bădăluţă; Gabriel, Eleş; Daniela, Popescu

    2017-10-01

    The paper emphasizes the link between digital terrain modelling and studies of hydrographic basins, with a focus on the analysis of hydrological processes. Given the evolution of computing techniques and software, digital terrain modelling has become increasingly prominent and has established itself as a basic concept in many areas, owing to its many advantages. At present, most digital terrain models are derived from three alternative sources: ground surveys, photogrammetric data capture, or digitized cartographic sources. A wide range of features may be extracted from digital terrain models, such as surfaces, specific points and landmarks, and linear features, but also areal features like drainage basins, hills or hydrological basins. The paper highlights how appropriate software is used to prepare a digital terrain model, which is subsequently used to study hydrographic basins according to various geomorphological parameters. As a final goal, it shows how the link between digital terrain modelling and the study of hydrographic basins can be used to optimize the correlation between the digital terrain model and hydrological processes, in order to obtain results as close as possible to the real field processes.

  13. Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy

    NASA Astrophysics Data System (ADS)

    Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.

    2017-10-01

    Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
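
    As a toy illustration of propagating registration uncertainty into dosimetric quantities, the sketch below perturbs a displacement field with voxel-wise Gaussian noise and reports the mean and standard deviation of the warped dose. This generic Monte Carlo stand-in is not the computationally efficient algorithm of the paper; the array names, shapes and noise model are assumptions.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def dose_with_uncertainty(dose, disp, disp_std, n_samples=100, seed=0):
        # dose: (Z, Y, X) grid; disp: (3, Z, Y, X) displacement field in voxels;
        # disp_std: per-voxel (or scalar) registration uncertainty in voxels.
        rng = np.random.default_rng(seed)
        grid = np.indices(dose.shape).astype(float)
        samples = []
        for _ in range(n_samples):
            noisy = disp + rng.normal(0.0, disp_std, size=disp.shape)
            samples.append(map_coordinates(dose, grid + noisy,
                                           order=1, mode='nearest'))
        samples = np.stack(samples)
        return samples.mean(axis=0), samples.std(axis=0)

    # Hypothetical example on a small grid with zero mean displacement.
    dose = np.random.rand(16, 16, 16)
    disp = np.zeros((3, 16, 16, 16))
    mean_dose, dose_sd = dose_with_uncertainty(dose, disp, disp_std=0.5)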

  14. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    PubMed

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are a powerful influence in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the relationship between singular excessive media consumption and sleep patterns, and about the linked implications for children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significantly reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory.

  15. Impact of varying lidar measurement and data processing techniques in evaluating cirrus cloud and aerosol direct radiative effects

    NASA Astrophysics Data System (ADS)

    Lolli, Simone; Madonna, Fabio; Rosoldi, Marco; Campbell, James R.; Welton, Ellsworth J.; Lewis, Jasper R.; Gu, Yu; Pappalardo, Gelsomina

    2018-03-01

    In the past 2 decades, ground-based lidar networks have drastically increased in scope and relevance, thanks primarily to the advent of lidar observations from space and their need for validation. Lidar observations of aerosol and cloud geometrical, optical and microphysical atmospheric properties are subsequently used to evaluate their direct radiative effects on climate. However, the retrievals are strongly dependent on the lidar instrument measurement technique and subsequent data processing methodologies. In this paper, we evaluate the discrepancies between the use of Raman and elastic lidar measurement techniques and corresponding data processing methods for two aerosol layers in the free troposphere and for two cirrus clouds with different optical depths. Results show that the different lidar techniques are responsible for discrepancies in the model-derived direct radiative effects for biomass burning (0.05 W m-2 at surface and 0.007 W m-2 at top of the atmosphere) and dust aerosol layers (0.7 W m-2 at surface and 0.85 W m-2 at top of the atmosphere). Data processing is further responsible for discrepancies in both thin (0.55 W m-2 at surface and 2.7 W m-2 at top of the atmosphere) and opaque (7.7 W m-2 at surface and 11.8 W m-2 at top of the atmosphere) cirrus clouds. Direct radiative effect discrepancies can be attributed to the larger variability of the lidar ratio for aerosols (20-150 sr) than for clouds (20-35 sr). For this reason, the influence of the applied lidar technique plays a more fundamental role in aerosol monitoring because the lidar ratio must be retrieved with relatively high accuracy. In contrast, for cirrus clouds, with the lidar ratio being much less variable, the data processing is critical because smoothing modifies the vertically resolved aerosol and cloud extinction profile that is used as input to the direct radiative effect calculations.

  16. Free energy decomposition of protein-protein interactions.

    PubMed

    Noskov, S Y; Lim, C

    2001-08-01

    A free energy decomposition scheme has been developed and tested on antibody-antigen and protease-inhibitor binding for which accurate experimental structures were available for both free and bound proteins. Using the x-ray coordinates of the free and bound proteins, the absolute binding free energy was computed assuming additivity of three well-defined, physical processes: desolvation of the x-ray structures, isomerization of the x-ray conformation to a nearby local minimum in the gas phase, and subsequent noncovalent complex formation in the gas phase. This free energy scheme, together with the Generalized Born model for computing the electrostatic solvation free energy, yielded binding free energies in remarkable agreement with experimental data. Two assumptions commonly used in theoretical treatments, viz. the rigid-binding approximation (which assumes no conformational change upon complexation) and the neglect of vdW interactions, were found to yield large errors in the binding free energy. Protein-protein vdW and electrostatic interactions between complementary surfaces over a relatively large area (1400-1700 Å²) were found to drive antibody-antigen and protease-inhibitor binding.
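
    The additivity assumption described above can be written schematically as follows (a LaTeX sketch; the subscripts are generic labels for the three processes rather than the authors' notation, and the desolvation term is the one evaluated with the Generalized Born electrostatic solvation model):

    \Delta G_{\mathrm{bind}} \;\approx\;
        \Delta G_{\mathrm{desolv}}(\text{x-ray structures})
        \;+\; \Delta G_{\mathrm{isom}}(\text{x-ray} \to \text{gas-phase minimum})
        \;+\; \Delta G_{\mathrm{complex}}^{\mathrm{gas}}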

  17. Vander Lugt correlation of DNA sequence data

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Hawk, James F.; Martin, James C.

    1990-12-01

    DNA, the molecule containing the genetic code of an organism, is a linear chain of subunits. It is the sequence of subunits, of which there are four kinds, that constitutes the unique blueprint of an individual. This sequence is the focus of a large number of analyses performed by an army of geneticists, biologists, and computer scientists. Most of these analyses entail searches for specific subsequences within the larger set of sequence data. Thus, most analyses are essentially pattern recognition or correlation tasks. Yet, there are special features to such analysis that influence the strategy and methods of an optical pattern recognition approach. While the serial processing employed in digital electronic computers remains the main engine of sequence analyses, there is no fundamental reason that more efficient parallel methods cannot be used. We describe an approach using optical pattern recognition (OPR) techniques based on matched spatial filtering. This allows parallel comparison of large blocks of sequence data. In this study we have simulated a Vander Lugt architecture implementing our approach. Searches for specific target sequence strings within a block of DNA sequence from the Co/El plasmid are performed.
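
    A purely digital analogue of the matched-filter idea can be sketched as follows: bases are encoded as one-hot channels and a target subsequence is located by FFT-based cross-correlation, where an exact match produces a peak equal to the target length. This is an illustrative simulation, not the optical Vander Lugt implementation described in the paper, and the sequences are toy examples.

    import numpy as np

    def onehot(seq):
        # Encode a DNA string as a 4 x N one-hot array (rows: A, C, G, T).
        idx = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
        out = np.zeros((4, len(seq)))
        for i, base in enumerate(seq):
            out[idx[base], i] = 1.0
        return out

    def correlate(block, target):
        # FFT-based cross-correlation summed over the 4 base channels.
        n = block.shape[1] + target.shape[1] - 1
        B = np.fft.rfft(block, n=n, axis=1)
        T = np.fft.rfft(target[:, ::-1], n=n, axis=1)  # reversal = correlation
        return np.fft.irfft(B * T, n=n, axis=1).sum(axis=0)

    block, target = onehot("ACGTACGGACGTTACG"), onehot("ACGT")
    score = correlate(block, target)
    hits = np.flatnonzero(np.isclose(score, target.shape[1]))
    print(hits - (target.shape[1] - 1))  # exact-match positions, here [0 8]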

  18. Highlights of the Workshop

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1997-01-01

    Economic stresses are forcing many industries to reduce cost and time-to-market, and to insert emerging technologies into their products. Engineers are asked to design faster, ever more complex systems. Hence, there is a need for novel design paradigms and effective design tools to reduce the design and development times. Several computational tools and facilities have been developed to support the design process. Some of these are described in subsequent presentations. The focus of the workshop is on the computational tools and facilities which have high potential for use in future design environment for aerospace systems. The outline for the introductory remarks is given. First, the characteristics and design drivers for future aerospace systems are outlined; second, simulation-based design environment, and some of its key modules are described; third, the vision for the next-generation design environment being planned by NASA, the UVA ACT Center and JPL is presented. The anticipated major benefits of the planned environment are listed; fourth, some of the government-supported programs related to simulation-based design are listed; and fifth, the objectives and format of the workshop are presented.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Li, Tingwen

    In virtual design and scale up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operation conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of the unresolved details in the coarser mesh, allowing simulations with manageable computational effort. Two previously developed filtered models for horizontal cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical) on the adsorber's hydrodynamics and CO2 capture performance are then examined. The simulation result is subsequently compared and contrasted with another predicted by a one-dimensional three-region process model.

  20. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

    Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of common image-point gathers in the offset domain, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime by Kirchhoff prestack depth migration (KPSDM), since the dynamic programming method for computing traveltimes in TI media does not suffer from the limitations of shadow zones and traveltime interpolation. Meanwhile, we present a specific implementation strategy for ADCIG extraction via KPSDM. Three major steps are included in the presented strategy: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples are included to demonstrate the effectiveness of our approach and its potential for subsequent tomographic velocity inversion and AVA analysis.
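
    Steps (2) and (3) can be sketched numerically as follows: slowness vectors are taken as the spatial gradients of the source- and receiver-side traveltime tables, and the opening angle at each image point is half the angle between the two ray directions. This simplified 2D sketch glosses over the anisotropic relation between slowness and ray direction in TI media; the array names and grid spacings are assumptions.

    import numpy as np

    def opening_angle(t_src, t_rec, dz, dx):
        # Slowness vectors from the gradients of the two traveltime tables.
        ps = np.stack(np.gradient(t_src, dz, dx))   # shape (2, nz, nx)
        pr = np.stack(np.gradient(t_rec, dz, dx))
        cos_full = (ps * pr).sum(axis=0) / (
            np.linalg.norm(ps, axis=0) * np.linalg.norm(pr, axis=0) + 1e-12)
        full = np.arccos(np.clip(cos_full, -1.0, 1.0))
        return 0.5 * full   # opening (half-aperture) angle in radians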

  1. The Effect of Task Planning on L2 Performance and L2 Development in Text-Based Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Hsu, Hsiu-Chen

    2017-01-01

    This study explored the effect of two planning conditions [the simultaneous use of rehearsal and careful online planning (ROP), and the careful online planning alone (OP)] on L2 production complexity and accuracy and the subsequent development of these two linguistic areas in the context of text-based synchronous computer-mediated communication.…

  2. Discussion of "Computational Electrocardiography: Revisiting Holter ECG Monitoring".

    PubMed

    Baumgartner, Christian; Caiani, Enrico G; Dickhaus, Hartmut; Kulikowski, Casimir A; Schiecke, Karin; van Bemmel, Jan H; Witte, Herbert

    2016-08-05

    This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper "Computational Electrocardiography: Revisiting Holter ECG Monitoring" written by Thomas M. Deserno and Nikolaus Marx. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the paper of Deserno and Marx. In subsequent issues the discussion can continue through letters to the editor.

  3. The medial frontal cortex contributes to but does not organize rat exploratory behavior.

    PubMed

    Blankenship, Philip A; Stuebing, Sarah L; Winter, Shawn S; Cheatwood, Joseph L; Benson, James D; Whishaw, Ian Q; Wallace, Douglas G

    2016-11-12

    Animals use multiple strategies to maintain spatial orientation. Dead reckoning is a form of spatial navigation that depends on self-movement cue processing. During dead reckoning, the generation of self-movement cues from a starting position to an animal's current position allows for the estimation of direction and distance to the position where movement originated. A network of brain structures has been implicated in dead reckoning. Recent work has provided evidence that the medial frontal cortex may contribute to dead reckoning in this network of brain structures. The current study investigated the organization of rat exploratory behavior subsequent to medial frontal cortex aspiration lesions under light and dark conditions. Disruptions in exploratory behavior associated with medial frontal lesions were consistent with impaired motor coordination, response inhibition, or egocentric reference frame. These processes are necessary for spatial orientation; however, they are not sufficient for self-movement cue processing. Therefore it is possible that the medial frontal cortex provides processing resources that support dead reckoning in other brain structures but does not itself compute the kinematic details of dead reckoning. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. Consequence modeling using the fire dynamics simulator.

    PubMed

    Ryder, Noah L; Sutula, Jason A; Schemel, Christopher F; Hamer, Andrew J; Van Brunt, Vincent

    2004-11-11

    The use of Computational Fluid Dynamics (CFD) and in particular Large Eddy Simulation (LES) codes to model fires provides an efficient tool for the prediction of large-scale effects that include plume characteristics, combustion product dispersion, and heat effects to adjacent objects. This paper illustrates the strengths of the Fire Dynamics Simulator (FDS), an LES code developed by the National Institute of Standards and Technology (NIST), through several small and large-scale validation runs and process safety applications. The paper presents two fire experiments--a small room fire and a large (15 m diameter) pool fire. The model results are compared to experimental data and demonstrate good agreement between the models and data. The validation work is then extended to demonstrate applicability to process safety concerns by detailing a model of a tank farm fire and a model of the ignition of a gaseous fuel in a confined space. In this simulation, a room was filled with propane, given time to disperse, and was then ignited. The model yields accurate results of the dispersion of the gas throughout the space. This information can be used to determine flammability and explosive limits in a space and can be used in subsequent models to determine the pressure and temperature waves that would result from an explosion. The model dispersion results were compared to an experiment performed by Factory Mutual. Using the above examples, this paper will demonstrate that FDS is ideally suited to build realistic models of process geometries in which large scale explosion and fire failure risks can be evaluated with several distinct advantages over more traditional CFD codes. Namely, transient solutions to fire and explosion growth can be produced with less sophisticated hardware (lower cost) than needed for traditional CFD codes (PC type computer versus UNIX workstation) and can be solved for longer time histories (on the order of hundreds of seconds of computed time) with minimal computer resources and length of model run. Additionally, results that are produced can be analyzed, viewed, and tabulated during and following a model run within a PC environment. There are some tradeoffs, however, as rapid computations in PCs may require a sacrifice in the grid resolution or in the sub-grid modeling, depending on the size of the geometry modeled.

  5. Electrophysiological evidence of statistical learning of long-distance dependencies in 8-month-old preterm and full-term infants.

    PubMed

    Kabdebon, C; Pena, M; Buiatti, M; Dehaene-Lambertz, G

    2015-09-01

    Using electroencephalography, we examined 8-month-old infants' ability to discover a systematic dependency between the first and third syllables of successive words, concatenated into a monotonous speech stream, and to subsequently generalize this regularity to new items presented in isolation. Full-term and preterm infants, while exposed to the stream, displayed a significant entrainment (phase-locking) to the syllabic and word frequencies, demonstrating that they were sensitive to the word unit. The acquisition of the systematic dependency defining words was confirmed by the significantly different neural responses to rule-words and part-words subsequently presented during the test phase. Finally, we observed a correlation between syllabic entrainment during learning and the difference in phase coherence between the test conditions (rule-words vs part-words) suggesting that temporal processing of the syllable unit might be crucial in linguistic learning. No group difference was observed suggesting that non-adjacent statistical computations are already robust at 8 months, even in preterm infants, and thus develop during the first year of life, earlier than expected from behavioral studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Automatic control system for uniformly paving iron ore pellets

    NASA Astrophysics Data System (ADS)

    Wang, Bowen; Qian, Xiaolong

    2014-05-01

    In the iron and steelmaking industry, iron ore pellet qualities are crucial to end-product properties, manufacturing costs and waste emissions. Uniform pellet pavements on the grate machine are a fundamental prerequisite to ensure even heat transfer, and pellet induration successively influences the performance of the following metallurgical processes. This article presents an automatic control system for uniformly paving green pellets on the grate, via a mechanism consisting mainly of a mechanical linkage, a swinging belt, a conveyance belt and a grate. Mechanism analysis illustrates that uniform pellet pavements demand that the frontend of the swinging belt oscillate at a constant angular velocity. Subsequently, kinetic models are formulated to relate oscillatory movements of the swinging belt's frontend to rotations of a crank link driven by a motor. On the basis of the kinetic analysis of the pellet feeding mechanism, a cubic B-spline model is built for numerically computing the discrete frequencies to be modulated during a motor rotation. Subsequently, the pellet feeding control system is presented in terms of its constituent hardware and software components and their functional relationships. Finally, pellet feeding experiments are carried out to demonstrate that the control system is effective, reliable and superior to conventional methods.

  7. Mathematical Learning, the Unseen and the Unforseen

    ERIC Educational Resources Information Center

    Roth, Wolff-Michael

    2012-01-01

    To learn means coming to know something new at the end of, or subsequent to, a (learning) process. Because students do not yet know at the beginning of the process what they will know subsequent to the process, they cannot actively orient towards the object of learning. In this article, I propose a phenomenological perspective that theorizes…

  8. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products consists in neglecting temporal correlation of GNSS observations. Analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be sufficiently described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge in MATLAB® Central (open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. Firstly, AR, MA, and ARMA models are computed up to some user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error characterising the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes), and Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of the automated ARMA identification using the toolbox ARMASA. For this purpose, a representative data base is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of the underlying true processes increase with increasing sample sizes and decrease with rising model complexity. Considering large sample sizes, the true underlying processes can be correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to the corresponding asymptotical values than those of high-order ARMA processes.
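
    ARMASA itself is a MATLAB toolbox; an analogous order-selection workflow can be sketched in Python as follows, where candidate ARMA(p, q) models are fitted with statsmodels and the model with the lowest AIC is kept, standing in for the combined/generalised information criteria and prediction-error step described above. The simulated AR(1) residual series is a made-up example.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def identify_arma(x, max_p=4, max_q=4):
        # Fit all ARMA(p, q) candidates up to the given maximum orders and
        # keep the model with the lowest AIC.
        best = None
        for p in range(max_p + 1):
            for q in range(max_q + 1):
                if p == 0 and q == 0:
                    continue
                try:
                    res = ARIMA(x, order=(p, 0, q)).fit()
                except Exception:
                    continue
                if best is None or res.aic < best[0]:
                    best = (res.aic, (p, q), res)
        return best

    # Hypothetical detrended residual series: a simulated AR(1) process.
    rng = np.random.default_rng(1)
    e = rng.normal(size=2000)
    x = np.zeros(2000)
    for t in range(1, 2000):
        x[t] = 0.7 * x[t - 1] + e[t]
    aic, order, result = identify_arma(x)
    print(order, aic)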

  9. Unrealistic optimism in advice taking: A computational account.

    PubMed

    Leong, Yuan Chang; Zaki, Jamil

    2018-02-01

    Expert advisors often make surprisingly inaccurate predictions about the future, yet people heed their suggestions nonetheless. Here we provide a novel, computational account of this unrealistic optimism in advice taking. Across 3 studies, participants observed as advisors predicted the performance of a stock. Advisors varied in their accuracy, performing reliably above, at, or below chance. Despite repeated feedback, participants exhibited inflated perceptions of advisors' accuracy, and reliably "bet" on advisors' predictions more than their performance warranted. Participants' decisions tightly tracked a computational model that makes 2 assumptions: (a) people hold optimistic initial expectations about advisors, and (b) people preferentially incorporate information that adheres to their expectations when learning about advisors. Consistent with model predictions, explicitly manipulating participants' initial expectations altered their optimism bias and subsequent advice-taking. With well-calibrated initial expectations, participants no longer exhibited an optimism bias. We then explored crowdsourced ratings as a strategy to curb unrealistic optimism in advisors. Star ratings for each advisor were collected from an initial group of participants, which were then shown to a second group of participants. Instead of calibrating expectations, these ratings propagated and exaggerated the unrealistic optimism. Our results provide a computational account of the cognitive processes underlying inflated perceptions of expertise, and explore the boundary conditions under which they occur. We discuss the adaptive value of this optimism bias, and how our account can be extended to explain unrealistic optimism in other domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
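
    The two model assumptions described above can be illustrated with a deliberately minimal update rule: start from an optimistic prior belief about the advisor's accuracy and weight expectation-confirming feedback more heavily than disconfirming feedback. This is a caricature of the paper's computational model, with made-up parameter values, intended only to show why the belief stays inflated under chance-level performance.

    import numpy as np

    def update_belief(outcomes, prior=0.8, lr_confirm=0.3, lr_disconfirm=0.1):
        # Track believed advisor accuracy; correct predictions (which confirm
        # the optimistic expectation) are learned from more strongly.
        belief, history = prior, []
        for correct in outcomes:
            lr = lr_confirm if correct else lr_disconfirm
            belief += lr * (float(correct) - belief)
            history.append(belief)
        return np.array(history)

    # An advisor performing at chance (50%): the belief settles above 0.5.
    rng = np.random.default_rng(0)
    print(update_belief(rng.random(500) < 0.5)[-1])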

  10. Quantification of substrate and cellular strains in stretchable 3D cell cultures: an experimental and computational framework.

    PubMed

    González-Avalos, P; Mürnseer, M; Deeg, J; Bachmann, A; Spatz, J; Dooley, S; Eils, R; Gladilin, E

    2017-05-01

    The mechanical cell environment is a key regulator of biological processes. In living tissues, cells are embedded into the 3D extracellular matrix and permanently exposed to mechanical forces. Quantification of the cellular strain state in a 3D matrix is therefore the first step towards understanding how physical cues determine single cell and multicellular behaviour. The majority of cell assays are, however, based on 2D cell cultures that lack many essential features of the in vivo cellular environment. Furthermore, nondestructive measurement of substrate and cellular mechanics requires appropriate computational tools for microscopic image analysis and interpretation. Here, we present an experimental and computational framework for generation and quantification of the cellular strain state in 3D cell cultures using a combination of a 3D substrate stretcher, multichannel microscopic imaging and computational image analysis. The 3D substrate stretcher enables deformation of living cells embedded in bead-labelled 3D collagen hydrogels. Local substrate and cell deformations are determined by tracking displacement of fluorescent beads with subsequent finite element interpolation of cell strains over a tetrahedral tessellation. In this feasibility study, we debate diverse aspects of deformable 3D culture construction, quantification and evaluation, and present an example of its application for quantitative analysis of a cellular model system based on primary mouse hepatocytes undergoing transforming growth factor (TGF-β)-induced epithelial-to-mesenchymal transition. © 2017 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
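
    The strain interpolation described above reduces, for a single linear tetrahedral element, to computing a deformation gradient from the reference and deformed bead positions and then the Green-Lagrange strain. The sketch below shows that core calculation; it is a generic finite-element formula rather than the authors' full pipeline, and the example positions are made up.

    import numpy as np

    def tet_green_lagrange_strain(X, x):
        # X, x: (4, 3) bead positions in the reference and deformed states.
        D_ref = (X[1:] - X[0]).T            # 3x3 reference edge matrix
        D_def = (x[1:] - x[0]).T            # 3x3 deformed edge matrix
        F = D_def @ np.linalg.inv(D_ref)    # deformation gradient
        return 0.5 * (F.T @ F - np.eye(3))  # Green-Lagrange strain tensor

    # Hypothetical example: a 10% uniaxial stretch along z.
    X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    x = X * np.array([1.0, 1.0, 1.1])
    print(np.round(tet_green_lagrange_strain(X, x), 4))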

  11. Impact of computer-assisted data collection, evaluation and management on the cancer genetic counselor's time providing patient care.

    PubMed

    Cohen, Stephanie A; McIlvried, Dawn E

    2011-06-01

    Cancer genetic counseling sessions traditionally encompass collecting medical and family history information, evaluating that information for the likelihood of a genetic predisposition for a hereditary cancer syndrome, conveying that information to the patient, offering genetic testing when appropriate, obtaining consent and subsequently documenting the encounter with a clinic note and pedigree. Software programs exist to collect family and medical history information electronically, intending to improve efficiency and simplicity of collecting, managing and storing this data. This study compares the genetic counselor's time spent in cancer genetic counseling tasks in a traditional model and one using computer-assisted data collection, which is then used to generate a pedigree, risk assessment and consult note. Genetic counselor time spent collecting family and medical history and providing face-to-face counseling for a new patient session decreased from an average of 85 min to 69 min when using computer-assisted data collection. However, there was no statistically significant change in overall genetic counselor time on all aspects of the genetic counseling process, due to an increased amount of time spent generating an electronic pedigree and consult note. Improvements in the computer program's technical design would potentially minimize data manipulation. Certain aspects of this program, such as electronic collection of family history and risk assessment, appear effective in improving cancer genetic counseling efficiency while others, such as generating an electronic pedigree and consult note, do not.

  12. A Simple Graphical Method for Quantification of Disaster Management Surge Capacity Using Computer Simulation and Process-control Tools.

    PubMed

    Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco

    2015-02-01

    Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative to measure surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.

  13. SU-E-T-61: A Practical Process for Fabricating Passive Scatter Proton Beam Modulation Compensation Filters Using 3D Printing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Drzymala, R

    Purpose: The purpose of this project was to devise a practical fabrication process for passive scatter proton beam compensation filters (CF) that is competitive in time, cost and effort using 3D printing. Methods: DICOM compensator filter files for a proton beam were generated by our Eclipse (Varian, Inc.) treatment planning system. The compensator thickness specifications were extracted with in-house software written in Matlab (MathWorks, Inc.) code and written to a text file which could be read by the Rhinoceros 5 computer-aided design (CAD) package (Robert McNeel and Associates), which subsequently generated a smoothed model in a STereoLithography (STL) file, also known as a Standard Tessellation Language file. The model in the STL file was subsequently refined using Netfabb software and then converted to printing instructions using Cura version 15.02.1 for our 3D printer. The Airwolf3D, model HD2x, fused filament fabrication (FFF) 3D printer (Airwolf3D.com) was used for our fabrication system with a print speed of 150 mm per second. It can print in over 22 different plastic filament materials in a build volume of 11" x 8" x 12". We chose ABS plastic to print the 3D model of the imprint for our CFs. Results: Prints of the CF could be performed at a print speed of 70 mm per second. The time to print the 3D topology for the CF for the 14 cm diameter snout of our Mevion 250 proton accelerator was less than 3 hours. The printed model is intended to subsequently be used as a mold to imprint a molten wax cylinder to form the compensator after cooling. The whole process should be performed for a typical 3 beam treatment plan within a day. Conclusion: Use of 3D printing is practical and can be used to print a 3D model of a CF within a few hours.

  14. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. The simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while ensuring performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is considerable at the threshold value of 15, but in the subsequent iteration process, the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm will gradually degenerate into the layered-BP algorithm, but at a BER of 10^-7 and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.

  15. A novel vortex tube-based N2-expander liquefaction process for enhancing the energy efficiency of natural gas liquefaction

    NASA Astrophysics Data System (ADS)

    Qyyum, Muhammad Abdul; Wei, Feng; Hussain, Arif; Ali, Wahid; Sehee, Oh; Lee, Moonyong

    2017-11-01

    This research work unfolds a simple, safe, environment-friendly, and energy-efficient novel vortex tube-based natural gas liquefaction (LNG) process. A vortex tube was introduced into the popular N2-expander liquefaction process to enhance the liquefaction efficiency. The process structure and conditions were modified and optimized to take full advantage of the vortex tube in the natural gas liquefaction cycle. Two commercial simulators, ANSYS® and Aspen HYSYS®, were used to investigate the application of the vortex tube in the refrigeration cycle of the LNG process. A computational fluid dynamics (CFD) model was used to simulate the vortex tube with nitrogen (N2) as the working fluid. Subsequently, the results of the CFD model were embedded in Aspen HYSYS® to validate the proposed LNG liquefaction process. The proposed natural gas liquefaction process was optimized using the knowledge-based optimization (KBO) approach, with the overall energy consumption chosen as the objective function. The performance of the proposed liquefaction process was compared with the conventional N2-expander liquefaction process. The vortex tube-based LNG process showed a significant improvement in energy efficiency of 20% in comparison with the conventional N2-expander liquefaction process. This high energy efficiency was mainly due to the isentropic expansion in the vortex tube. It turned out that the high energy efficiency of the vortex tube-based process depends strongly on the refrigerant cold fraction, the operating conditions, and the refrigerant cycle configuration.

  16. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criteria and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameter and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
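
    For orientation, the sketch below shows the plain Gauss-mixture front end of such an approach: fit one global mixture to per-pixel feature vectors and label each pixel with its most likely component. It deliberately omits the hidden Markov spatial context and the MDI-based, vector-quantized mixture estimation that distinguish the HMGMM of the paper; the random two-channel image is only a placeholder.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_segment(image, n_classes=3, seed=0):
        # Per-pixel segmentation with a global Gauss mixture (no spatial model).
        h, w = image.shape[:2]
        features = image.reshape(h * w, -1)
        gmm = GaussianMixture(n_components=n_classes, random_state=seed)
        labels = gmm.fit(features).predict(features)
        return labels.reshape(h, w)

    # Hypothetical example on a random two-channel image.
    rng = np.random.default_rng(0)
    labels = gmm_segment(rng.normal(size=(64, 64, 2)))
    print(np.bincount(labels.ravel()))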

  17. Imaging of the interaction of low frequency electric fields with biological tissues by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Peña, Adrian F.; Devine, Jack; Doronin, Alexander; Meglinski, Igor

    2014-03-01

    We report the use of conventional Optical Coherence Tomography (OCT) for visualization of the propagation of a low frequency electric field in soft biological tissues ex vivo. To increase the overall quality of the experimental images, an adaptive Wiener filtering technique has been employed. Fourier domain correlation has been subsequently applied to enhance the spatial resolution of images of biological tissues influenced by the low frequency electric field. Image processing has been performed on Graphics Processing Units (GPUs) utilizing the Compute Unified Device Architecture (CUDA) framework in the frequency domain. The results show that variation in voltage and frequency of the applied electric field relates exponentially to the magnitude of its influence on biological tissue. The magnitude of influence is about twice as large for fresh tissue samples as for non-fresh ones. The obtained results suggest that OCT can be used for observation and quantitative evaluation of the electro-kinetic changes in biological tissues under different physiological conditions and functional electrical stimulation, and potentially can be used non-invasively for food quality control.

  18. Einsatz hydrogeochemischer Modelle in der Wasseraufbereitung (Use of Hydrogeochemical Models in Water Treatment)

    NASA Astrophysics Data System (ADS)

    Wisotzky, Frank

    2012-09-01

    As part of a model-data investigation project, results of several water treatment studies were compared with hydrochemical models. The models used comparable hydrogeological reaction types that occur within aquifers and water treatment processes. They were tested on 3 different examples of water softening, de-nitrification and iron removal. Comparison of simulated and measured water chemical dynamics showed good agreement. In addition to mixing and formation of complexes, de-acidification and de-carbonisation processes were reproduced in the first example. The second example investigated de-nitrification in a straw filter and in a water plant filter with subsequent aeration. The third example showed iron removal where reactions with partially combusted dolomite were simulated with a computer model. All simulations showed good agreement with the observed data. The models have the advantage of yielding parameter results that are difficult to measure. This includes nitrogen gas release and the content of reacted and degradable organic substances. These tools may help to provide better insights into water treatment reactions.

  19. Hydrology of the Bonneville Salt Flats, northwestern Utah, and simulation of ground-water flow and solute transport in the shallow-brine aquifer

    USGS Publications Warehouse

    Mason, James L.; Kipp, Kenneth L.

    1998-01-01

    This report describes the hydrologic system of the Bonneville Salt Flats with emphasis on the mechanisms of solute transport. Variable-density, three-dimensional computer simulations of the near-surface part of the ground-water system were done to quantify both the transport of salt dissolved in subsurface brine that leaves the salt-crust area and the salt dissolved and precipitated on the land surface. The study was designed to define the hydrology of the brine ground-water system and the natural and anthropogenic processes causing salt loss, and where feasible, to quantify these processes. Specific areas of study include the transport of salt in solution by ground-water flow and the transport of salt in solution by wind-driven ponds and the subsequent salt precipitation on the surface of the playa upon evaporation or seepage into the subsurface. In addition, hydraulic and chemical changes in the hydrologic system since previous studies were documented.

  20. Synthesis, anti-HIV activity studies, and in silico rationalization of cyclobutane-fused nucleosides.

    PubMed

    Figueras, Antoni; Miralles-Llumà, Rosa; Flores, Ramon; Rustullet, Albert; Busqué, Félix; Figueredo, Marta; Font, Josep; Alibés, Ramon; Maréchal, Jean-Didier

    2012-06-01

    The present work describes some recent approaches to novel 3-oxabicyclo[3.2.0]heptane-type nucleosides structurally similar to the potent anti-HIV agent stavudine (d4T). To gain knowledge at the molecular level relevant for further synthetic designs, the lack of activity of these compounds was investigated by computational approaches accounting for three main physiological requirements of anti-HIV nucleosides: their drug-likeness, their activation process, and their subsequent interaction with HIV reverse transcriptase (HIV-RT). Our results show that the inclusion of the fused cyclobutane at the 2'- and 3'-positions of the sugar portion provides drug-like compounds. Nonetheless, the presence of this cyclobutane moiety prevents binding orientations consistent with the catalytic activation for at least one of the enzymes known to activate d4T. To the best of our knowledge, this is the first study to explicitly consider the simulation of the entire activation process to rationalize anti-HIV activities. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Weakly Nonlinear Analysis of Vortex Formation in a Dissipative Variant of the Gross--Pitaevskii Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tzou, J. C.; Kevrekidis, P. G.; Kolokolnikov, T.

    2016-05-10

    For a dissipative variant of the two-dimensional Gross--Pitaevskii equation with a parabolic trap under rotation, we study a symmetry breaking process that leads to the formation of vortices. The first symmetry breaking leads to the formation of many small vortices distributed uniformly near the Thomas-Fermi radius. The instability occurs as a result of a linear instability of a vortex-free steady state as the rotation is increased above a critical threshold. We focus on the second subsequent symmetry breaking, which occurs in the weakly nonlinear regime. At slightly above threshold, we derive a one-dimensional amplitude equation that describes the slow evolution of the envelope of the initial instability. Here, we show that the mechanism responsible for initiating vortex formation is a modulational instability of the amplitude equation. We also illustrate the role of dissipation in the symmetry breaking process. All analyses are confirmed by detailed numerical computations.

  2. Autonomous Navigation for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Bhaskaran, Shyam

    2012-01-01

    Navigation (determining where the spacecraft is at any given time, controlling its path to achieve desired targets), performed using ground-in-the-loop techniques: (1) Data includes 2-way radiometric (Doppler, range), interferometric (Delta-Differential One-way Range), and optical (images of natural bodies taken by onboard camera) (2) Data received on the ground, processed to determine orbit, commands sent to execute maneuvers to control orbit. A self-contained, onboard, autonomous navigation system can: (1) Eliminate delays due to round-trip light time (2) Eliminate the human factors in ground-based processing (3) Reduce turnaround time from navigation update to minutes, down to seconds (4) React to late-breaking data. At JPL, we have developed the framework and computational elements of an autonomous navigation system, called AutoNav. It was originally developed as one of the technologies for the Deep Space 1 mission, launched in 1998; subsequently used on three other spacecraft, for four different missions. The primary use has been on comet missions to track comets during flybys, and impact one comet.

  3. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  4. Edge reactivity and water-assisted dissociation on cobalt oxide nanoislands

    DOE PAGES

    Fester, J.; García-Melchor, M.; Walton, A. S.; ...

    2017-01-30

    Here, transition metal oxides show great promise as Earth-abundant catalysts for the oxygen evolution reaction in electrochemical water splitting. However, progress in the development of highly active oxide nanostructures is hampered by a lack of knowledge of the location and nature of the active sites. Here we show, through atom-resolved scanning tunnelling microscopy, X-ray spectroscopy and computational modelling, how hydroxyls form from water dissociation at under-coordinated cobalt edge sites of cobalt oxide nanoislands. Surprisingly, we find that an additional water molecule acts to promote all the elementary steps of the dissociation process and subsequent hydrogen migration, revealing the important assisting role of a water molecule in its own dissociation process on a metal oxide. Inspired by the experimental findings, we theoretically model the oxygen evolution reaction activity of cobalt oxide nanoislands and show that the nanoparticle metal edges also display favourable adsorption energetics for water oxidation under electrochemical conditions.

  5. A discovery of novel microRNAs in the silkworm (Bombyx mori) genome.

    PubMed

    Yu, Xiaomin; Zhou, Qing; Cai, Yimei; Luo, Qibin; Lin, Hongbin; Hu, Songnian; Yu, Jun

    2009-12-01

    MicroRNAs (miRNAs) are pivotal regulators involved in various physiological and pathological processes via their post-transcriptional regulation of gene expression. We sequenced 14 libraries of small RNAs constructed from samples spanning the life cycle of silkworms, discovered 50 novel miRNAs previously not known in animals, and verified 43 of them using stem-loop RT-PCR. Our genome-wide analyses of 27 species-specific miRNAs suggest they arise from transposable elements, protein-coding gene duplication/transposition, and random foldback sequences, which is consistent with the idea that novel animal miRNAs may evolve from incomplete self-complementary transcripts and become fixed in the process of co-adaptation with their targets. Computational prediction suggests that the silkworm-specific miRNAs may have a preference for regulating genes that are related to life-cycle-associated traits, and these genes can serve as potential targets for subsequent studies of the modulating networks in the development of Bombyx mori.

  6. Selective exposure to information: how different modes of decision making affect subsequent confirmatory information processing.

    PubMed

    Fischer, Peter; Fischer, Julia; Weisweiler, Silke; Frey, Dieter

    2010-12-01

    We investigated whether different modes of decision making (deliberate, intuitive, distracted) affect subsequent confirmatory processing of decision-consistent and inconsistent information. Participants showed higher levels of confirmatory information processing when they made a deliberate or an intuitive decision versus a decision under distraction (Studies 1 and 2). As soon as participants have a cognitive (i.e., deliberate cognitive analysis) or affective (i.e., intuitive and gut feeling) reason for their decision, the subjective confidence in the validity of their decision increases, which results in increased levels of confirmatory information processing (Study 2). In contrast, when participants are distracted during decision making, they are less certain about the validity of their decision and thus are subsequently more balanced in the processing of decision-relevant information.

  7. Extending the Stabilized Supralinear Network model for binocular image processing.

    PubMed

    Selby, Ben; Tripp, Bryan

    2017-06-01

    The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability. First, whereas the SSN receives abstract orientation signals as input, we extended it to receive images (through a linear-nonlinear stage), and found that the extended version behaved similarly. Secondly, whereas the SSN had previously been studied in a monocular context, we found that it could also reproduce data on interocular transfer of surround suppression. Finally, we reformulated the SSN as a convolutional neural network, and found that it scaled well on parallel hardware. These results provide additional support for the plausibility of the SSN as a model of lateral interactions in V1, and suggest that the SSN is well suited as a component of complex vision models. Future work will use the SSN to explore relationships between local network interactions and sophisticated vision processes in large networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
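
    As a rough sketch of the kind of dynamics the SSN family uses, the toy two-population (excitatory/inhibitory) rate model below applies a rectified supralinear power-law input/output function. The parameter values, connectivity weights, and Euler integration are illustrative assumptions only, not the published model or its fitted parameters.

    ```python
    import numpy as np

    # Toy two-unit (E, I) rate network with a supralinear power-law f-I curve,
    # in the spirit of stabilized supralinear network models. All parameter
    # values are arbitrary illustrations, not values from the cited study.

    k, n = 0.01, 2.0                         # gain and exponent of the power law
    tau = np.array([0.02, 0.01])             # E and I time constants (s)
    W = np.array([[1.0, -1.5],               # W[post, pre]: E<-E, E<-I
                  [1.0, -0.5]])              # I<-E, I<-I

    def f(x):
        """Rectified supralinear power-law input/output function."""
        return k * np.maximum(x, 0.0) ** n

    def steady_state(h, T=1.0, dt=1e-4):
        """Euler-integrate tau * dr/dt = -r + f(W r + h) to steady state."""
        r = np.zeros(2)
        for _ in range(int(T / dt)):
            r = r + dt * (-r + f(W @ r + h)) / tau
        return r

    if __name__ == "__main__":
        r_low = steady_state(np.array([10.0, 10.0]))
        r_high = steady_state(np.array([40.0, 40.0]))
        # With these toy parameters, a 4x stronger input drives a more-than-4x
        # (supralinear) increase in the excitatory rate.
        print("E rate at weak input  :", round(float(r_low[0]), 2))
        print("E rate at strong input:", round(float(r_high[0]), 2))
    ```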

  8. Training presence: the importance of virtual reality experience on the "sense of being there".

    PubMed

    Gamito, Pedro; Oliveira, Jorge; Morais, Diogo; Baptista, André; Santos, Nuno; Soares, Fábio; Saraiva, Tomaz; Rosa, Pedro

    2010-01-01

    The nature and origin of presence are still unclear. Although it can be characterized, under a neurophysiological perspective, as a process resulting from a synchrony between cognitive and perceptive systems, the multitude of associated processes reduces the chances of brain mapping presence. In this way, our study was designed to understand the possible role of VR experience on presence in a virtual environment. For our study, 16 participants (M=28.39 years; SD=13.44) of both genders without computer experience were selected. The study design consisted of two assessments (initial and final), where the participants were evaluated with the BFI, PQ, ITQ, QC, MCSDS-SF, STAI, visual attention and behavioral measures after playing a first-person shooter (FPS) game. To manipulate the level of VR experience, the participants were trained on a different FPS during 12 weekly sessions of 30 minutes. Results revealed significant differences between the first and final assessment for presence (F(1,15)=11.583; MSE=775.538; p < .01) and immersion scores (F(1,15)=6.234; MSE=204.962; p < .05), indicating higher levels of presence and immersion in the final assessment. No statistically significant results were obtained for cybersickness or the behavioral measures. In summary, our results showed that training and the subsequent higher computer experience levels can increase immersion and presence.

  9. Design and implementation of highly parallel pipelined VLSI systems

    NASA Astrophysics Data System (ADS)

    Delange, Alphonsus Anthonius Jozef

    A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems are presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower level components, etc. A model for the representation and analysis of multiprocessor systems at several levels of abstraction and an implementation of a CAD system based on this model are described. A high level design language, an object oriented development kit for tool design, a design data management system, and design and analysis tools such as a high level simulator and a graphics design interface, all integrated into the prototype system, are described. Procedures are described for the synthesis of semiregular processor arrays and for computing the switching of input/output signals, the memory management and control of the processor array, and the sequencing and segmentation of input/output data streams that result from partitioning and clustering of the processor array during the subsequent synthesis steps. The architecture and control of a parallel system are designed and each component is mapped to a module or module generator in a symbolic layout library, compacted for the design rules of VLSI (Very Large Scale Integration) technology. An example of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains is given.

  10. Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention.

    PubMed

    Vossel, Simone; Bauer, Markus; Mathys, Christoph; Adams, Rick A; Dolan, Raymond J; Stephan, Klaas E; Friston, Karl J

    2014-11-19

    The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision. Copyright © 2014 the authors 0270-6474/14/3415735-08$15.00/0.
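
    A minimal illustration of trial-wise belief updating about cue validity is sketched below. It is not the hierarchical Bayesian model used in the study; it is a simple conjugate beta-Bernoulli update, with the validity schedule and prior chosen as assumptions for illustration.

    ```python
    import numpy as np

    # Minimal sketch (not the study's hierarchical model): after each trial the
    # posterior over cue validity is updated from the observed outcome
    # (1 = validly cued target, 0 = invalidly cued target).

    rng = np.random.default_rng(0)

    # Cue-validity schedule that changes over time, as in the task description.
    schedule = [0.9] * 40 + [0.5] * 40               # % cue validity per trial
    outcomes = [rng.random() < p for p in schedule]  # True on validly cued trials

    a, b = 1.0, 1.0                                  # uniform Beta prior
    posterior_means = []
    for o in outcomes:
        a, b = a + o, b + (1 - o)                    # conjugate beta-Bernoulli update
        posterior_means.append(a / (a + b))          # current belief in cue validity

    print(f"belief after high-validity block: {posterior_means[39]:.2f}")
    print(f"belief after low-validity block:  {posterior_means[-1]:.2f}")
    ```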

  11. An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.

    PubMed

    Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong

    2017-06-01

    Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
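
    To make the idea of a pharmacokinetic/pharmacodynamic link model concrete, the sketch below assumes a one-compartment model with first-order absorption linked to a direct Emax effect model. The parameter names and values are hypothetical illustrations, not outputs or defaults of the R package described in the record.

    ```python
    import numpy as np

    # PK/PD link sketch under simple assumptions: one-compartment PK with
    # first-order absorption (Bateman function) and a direct Emax PD link.
    # All parameter values below are hypothetical.

    def concentration(t, dose, F, ka, ke, V):
        """Plasma concentration after a single oral dose (requires ka != ke)."""
        return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    def effect(conc, e0, emax, ec50):
        """Direct Emax pharmacodynamic model linking concentration to effect."""
        return e0 + emax * conc / (ec50 + conc)

    if __name__ == "__main__":
        t = np.linspace(0, 24, 97)                        # hours after dosing
        c = concentration(t, dose=100.0, F=0.8, ka=1.2, ke=0.15, V=30.0)
        e = effect(c, e0=10.0, emax=60.0, ec50=1.5)
        print(f"Cmax ~ {c.max():.2f} mg/L at t = {t[c.argmax()]:.1f} h; "
              f"peak effect ~ {e.max():.1f}")
    ```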

  12. Hemispheric dissociation of reward processing in humans: insights from deep brain stimulation.

    PubMed

    Palminteri, Stefano; Serra, Giulia; Buot, Anne; Schmidt, Liane; Welter, Marie-Laure; Pessiglione, Mathias

    2013-01-01

    Rewards have various effects on human behavior and multiple representations in the human brain. Behaviorally, rewards notably enhance response vigor in incentive motivation paradigms and bias subsequent choices in instrumental learning paradigms. Neurally, rewards affect activity in different fronto-striatal regions attached to different motor effectors, for instance in left and right hemispheres for the two hands. Here we address the question of whether manipulating reward-related brain activity has local or general effects, with respect to behavioral paradigms and motor effectors. Neuronal activity was manipulated in a single hemisphere using unilateral deep brain stimulation (DBS) in patients with Parkinson's disease. Results suggest that DBS amplifies the representation of reward magnitude within the targeted hemisphere, so as to affect the behavior of the contralateral hand specifically. These unilateral DBS effects on behavior include both boosting incentive motivation and biasing instrumental choices. Furthermore, using computational modeling we show that DBS effects on incentive motivation can predict DBS effects on instrumental learning (or vice versa). Thus, we demonstrate the feasibility of causally manipulating reward-related neuronal activity in humans, in a manner that is specific to a class of motor effectors but that generalizes to different computational processes. As these findings proved independent from therapeutic effects on parkinsonian motor symptoms, they might provide insight into DBS impact on non-motor disorders, such as apathy or hypomania. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Multiscale Modeling of Damage Processes in Aluminum Alloys: Grain-Scale Mechanisms

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Veilleux, M. G.; Bozek, J. E.; Glaessgen, E. H.; Ingraffea, A. R.

    2008-01-01

    This paper has two goals related to the development of a physically-grounded methodology for modeling the initial stages of fatigue crack growth in an aluminum alloy. The aluminum alloy, AA 7075-T651, is susceptible to fatigue cracking that nucleates from cracked second phase iron-bearing particles. Thus, the first goal of the paper is to validate an existing framework for the prediction of the conditions under which the particles crack. The observed statistics of particle cracking (defined as incubation for this alloy) must be accurately predicted to simulate the stochastic nature of microstructurally small fatigue crack (MSFC) formation. Also, only by simulating incubation of damage in a statistically accurate manner can subsequent stages of crack growth be accurately predicted. To maintain fidelity and computational efficiency, a filtering procedure was developed to eliminate particles that were unlikely to crack. The particle filter considers the distributions of particle sizes and shapes, grain texture, and the configuration of the surrounding grains. This filter helps substantially reduce the number of particles that need to be included in the microstructural models and forms the basis of the future work on the subsequent stages of MSFC, crack nucleation and microstructurally small crack propagation. A physics-based approach to simulating fracture should ultimately begin at nanometer length scale, in which atomistic simulation is used to predict the fundamental damage mechanisms of MSFC. These mechanisms include dislocation formation and interaction, interstitial void formation, and atomic diffusion. However, atomistic simulations quickly become computationally intractable as the system size increases, especially when directly linking to the already large microstructural models. Therefore, the second goal of this paper is to propose a method that will incorporate atomistic simulation and small-scale experimental characterization into the existing multiscale framework. At the microscale, the nanoscale mechanics are represented within cohesive zones where appropriate, i.e. where the mechanics observed at the nanoscale can be represented as occurring on a plane such as at grain boundaries or slip planes at a crack front. Important advancements that are yet to be made include: 1. an increased fidelity in cohesive zone modeling; 2. a means to understand how atomistic simulation scales with time; 3. a new experimental methodology for generating empirical models for CZMs and emerging materials; and 4. a validation of simulations of the damage processes at the nano-micro scale. With ever-increasing computer power, the long-term ability to employ atomistic simulation for the prognosis of structural components will not be limited by computation power, but by our lack of knowledge in incorporating atomistic models into simulations of MSFC into a multiscale framework.

  14. CUDA-based acceleration and BPN-assisted automation of bilateral filtering for brain MR image restoration.

    PubMed

    Chang, Herng-Hua; Chang, Yu-Ning

    2017-04-01

    Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the lack of a theoretical basis for the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usage and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. Subsequently, the selected features are introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) of less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performances, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting the CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
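
    The roles of the spatial and range parameters can be seen in a plain reference implementation of the bilateral filter, sketched below in NumPy. This is not the CUDA-accelerated or BPN-tuned filter of the study; the window radius, sigma values, and synthetic test image are illustrative assumptions.

    ```python
    import numpy as np

    # Reference bilateral filter: each output pixel is a weighted average of its
    # neighborhood, with weights combining spatial closeness and intensity
    # similarity, which preserves edges while smoothing noise.

    def bilateral_filter(img, radius=3, sigma_spatial=2.0, sigma_range=20.0):
        """Edge-preserving smoothing of a 2-D float image."""
        pad = np.pad(img, radius, mode="reflect")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial_w = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_spatial**2))
        out = np.empty_like(img, dtype=float)
        H, W = img.shape
        for i in range(H):
            for j in range(W):
                patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                range_w = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_range**2))
                w = spatial_w * range_w
                out[i, j] = np.sum(w * patch) / np.sum(w)
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0, 255, 64), (64, 1))     # synthetic ramp image
        noisy = clean + rng.normal(0, 15, clean.shape)
        denoised = bilateral_filter(noisy)
        print("residual std before:", round(float(np.std(noisy - clean)), 1),
              "after:", round(float(np.std(denoised - clean)), 1))
    ```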

  15. International Collaboration Activities on Engineered Barrier Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jove-Colon, Carlos F.

    The Used Fuel Disposition Campaign (UFDC) within the DOE Fuel Cycle Technologies (FCT) program has been engaging in international collaborations between repository R&D programs for high-level waste (HLW) disposal to leverage gathered knowledge and laboratory/field data of near- and far-field processes from experiments at underground research laboratories (URL). Heater test experiments at URLs provide a unique opportunity to study, under realistic conditions, the thermal effects of heat-generating nuclear waste in subsurface repository environments. Various configurations of these experiments have been carried out at various URLs according to the disposal design concepts of the hosting country repository program. The FEBEX (Full-scale Engineered Barrier Experiment in Crystalline Host Rock) project is a large-scale heater test experiment originated by the Spanish radioactive waste management agency (Empresa Nacional de Residuos Radiactivos S.A. – ENRESA) at the Grimsel Test Site (GTS) URL in Switzerland. The project was subsequently managed by CIEMAT. FEBEX-DP is a concerted effort of various international partners working on the evaluation of sensor data and characterization of samples obtained during the course of this field test and its subsequent dismantling. The main purpose of these field-scale experiments is to evaluate the feasibility of creating an engineered barrier system (EBS) with a horizontal configuration according to the Spanish concept of deep geological disposal of high-level radioactive waste in crystalline rock. Another key aspect of this project is to improve the knowledge of coupled processes such as thermal-hydro-mechanical (THM) and thermal-hydro-chemical (THC) processes operating in the near-field environment. The focus of these efforts is on model development and validation of predictions through model implementation in computational tools that simulate coupled THM and THC processes.

  16. Letter regarding 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics' by Patrizi et al. and research reproducibility.

    PubMed

    2017-04-01

    The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.

  17. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    PubMed

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and the subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work, the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations, and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework, which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model, which includes a kidney stone simulant in the domain, is also presented. Within the stone, the linear elasticity equations incorporate a simple damage model.
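
    For reference, the Tait equation of state mentioned here is commonly written in the form below. The constants shown are typical textbook values for water and are assumptions for illustration, not parameters quoted by the authors.

    ```latex
    % Tait equation of state relating pressure to density in water,
    % as commonly used in shock-wave modeling (typical constants for water).
    p = B\left[ \left( \frac{\rho}{\rho_{0}} \right)^{\gamma} - 1 \right] + p_{0},
    \qquad
    \gamma \approx 7.15,
    \qquad
    B \approx 3.0 \times 10^{8}\ \mathrm{Pa}.
    ```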

  18. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter

    PubMed Central

    Fovargue, Daniel E.; Mitran, Sorin; Smith, Nathan B.; Sankin, Georgy N.; Simmons, Walter N.; Zhong, Pei

    2013-01-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model. PMID:23927200

  19. Information-computational system for storage, search and analytical processing of environmental datasets based on the Semantic Web technologies

    NASA Astrophysics Data System (ADS)

    Titov, A.; Gordov, E.; Okladnikov, I.

    2009-04-01

    In this report we present the results of work devoted to the development of a working model of a software system for storage, semantically enabled search and retrieval, along with processing and visualization, of environmental datasets containing results of meteorological and air pollution observations and mathematical climate modeling. A specially designed metadata standard for machine-readable description of datasets related to the meteorology, climate and atmospheric pollution transport domains is introduced as one of the key system components. To provide semantic interoperability, the Resource Description Framework (RDF, http://www.w3.org/RDF/) technology has been chosen for the realization of the metadata description model in the form of an RDF Schema. The final version of the RDF Schema is implemented on the basis of widely used standards, such as the Dublin Core Metadata Element Set (http://dublincore.org/), the Directory Interchange Format (DIF, http://gcmd.gsfc.nasa.gov/User/difguide/difman.html), ISO 19139, etc. At present the system is available as a Web server (http://climate.risks.scert.ru/metadatabase/) based on the web-portal ATMOS engine [1] and implements dataset management functionality including SeRQL-based semantic search as well as statistical analysis and visualization of selected data archives [2,3]. The core of the system is the Apache web server in conjunction with the Tomcat Java Servlet Container (http://jakarta.apache.org/tomcat/) and Sesame Server (http://www.openrdf.org/), used as a database for RDF and RDF Schema. At present, statistical analysis of meteorological and climatic data with subsequent visualization of results is implemented for such datasets as NCEP/NCAR Reanalysis, Reanalysis NCEP/DOE AMIP II, JMA/CRIEPI JRA-25, ECMWF ERA-40 and local measurements obtained from meteorological stations on the territory of Russia. This functionality is aimed primarily at finding the main characteristics of regional climate dynamics. The proposed system represents a step in the process of development of a distributed collaborative information-computational environment to support multidisciplinary investigations of the Earth's regional environment [4]. Partial support of this work by SB RAS Integration Project 34, SB RAS Basic Program Project 4.5.2.2, APN Project CBA2007-08NSY and FP6 Enviro-RISKS project (INCO-CT-2004-013427) is acknowledged. References 1. E.P. Gordov, V.N. Lykosov, and A.Z. Fazliev. Web portal on environmental sciences "ATMOS" // Advances in Geosciences. 2006. Vol. 8. p. 33 - 38. 2. Gordov E.P., Okladnikov I.G., Titov A.G. Development of elements of web based information-computational system supporting regional environment processes investigations // Journal of Computational Technologies, Vol. 12, Special Issue #3, 2007, pp. 20 - 28. 3. Okladnikov I.G., Titov A.G., Melnikova V.N., Shulgina T.M. Web-system for processing and visualization of meteorological and climatic data // Journal of Computational Technologies, Vol. 13, Special Issue #3, 2008, pp. 64 - 69. 4. Gordov E.P., Lykosov V.N. Development of information-computational infrastructure for integrated study of Siberia environment // Journal of Computational Technologies, Vol. 12, Special Issue #2, 2007, pp. 19 - 30.
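
    A machine-readable dataset description in the spirit of the metadata approach above can be sketched with rdflib and Dublin Core terms. The dataset URI, titles, and choice of properties below are hypothetical illustrations; they do not reproduce the project's actual RDF Schema.

    ```python
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DC, DCTERMS, RDF

    # Hypothetical RDF description of an environmental dataset using Dublin Core
    # terms; URIs and values are illustrative assumptions only.

    g = Graph()
    g.bind("dc", DC)
    g.bind("dcterms", DCTERMS)

    dataset = URIRef("http://example.org/datasets/ncep-ncar-reanalysis-subset")
    g.add((dataset, RDF.type, URIRef("http://purl.org/dc/dcmitype/Dataset")))
    g.add((dataset, DC.title, Literal("NCEP/NCAR Reanalysis, regional subset")))
    g.add((dataset, DC.subject, Literal("meteorology; climate; reanalysis")))
    g.add((dataset, DCTERMS.spatial, Literal("Siberia (illustrative extent)")))
    g.add((dataset, DCTERMS.temporal, Literal("1948/2008")))

    print(g.serialize(format="turtle"))
    ```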

  20. SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Floros, D

    Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions. Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
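
    A simple version of the multi-scale local-statistics idea is sketched below: local mean and standard deviation are computed at several window sizes and compared across scales. Only two statistics are shown, and the window sizes and synthetic test image are illustrative assumptions, not the abstract's full statistic set or its CPU/GPU implementation.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    # Multi-scale local statistics for probing spatially variant noise.

    def local_mean_std(img, size):
        """Windowed mean and standard deviation at one spatial scale."""
        m = uniform_filter(img, size=size)
        m2 = uniform_filter(img * img, size=size)
        return m, np.sqrt(np.maximum(m2 - m * m, 0.0))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic projection whose noise level increases from left to right.
        base = np.full((128, 128), 100.0)
        noise_level = np.linspace(2.0, 20.0, 128)[None, :]
        img = base + rng.normal(size=base.shape) * noise_level

        scales = [5, 11, 21]                       # pyramid of window sizes
        stds = {s: local_mean_std(img, s)[1] for s in scales}
        # Inter-scale differences indicate where the noise estimate has stabilized.
        diff = np.abs(stds[21] - stds[11])
        print("median local std (5/11/21 px):",
              [round(float(np.median(stds[s])), 1) for s in scales])
        print("median inter-scale change 11->21 px:", round(float(np.median(diff)), 2))
    ```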

  1. Nanomagnetic Logic

    NASA Astrophysics Data System (ADS)

    Carlton, David Bryan

    The exponential improvements in speed, energy efficiency, and cost that the computer industry has relied on for growth during the last 50 years are in danger of ending within the decade. These improvements all have relied on scaling the size of the silicon-based transistor that is at the heart of every modern CPU down to smaller and smaller length scales. However, as the size of the transistor reaches scales that are measured in the number of atoms that make it up, it is clear that this scaling cannot continue forever. As a result of this, there has been a great deal of research effort directed at the search for the next device that will continue to power the growth of the computer industry. However, due to the billions of dollars of investment that conventional silicon transistors have received over the years, it is unlikely that a technology will emerge that will be able to beat it outright in every performance category. More likely, different devices will possess advantages over conventional transistors for certain applications and uses. One of these emerging computing platforms is nanomagnetic logic (NML). NML-based circuits process information by manipulating the magnetization states of single-domain nanomagnets coupled to their nearest neighbors through magnetic dipole interactions. The state variable is magnetization direction and computations can take place without passing an electric current. This makes them extremely attractive as a replacement for conventional transistor-based computing architectures for certain ultra-low power applications. In most work to date, nanomagnetic logic circuits have used an external magnetic clocking field to reset the system between computations. The clocking field is then subsequently removed very slowly relative to the magnetization dynamics, guiding the nanomagnetic logic circuit adiabatically into its magnetic ground state. In this dissertation, I will discuss the dynamics behind this process and show that it is greatly influenced by thermal fluctuations. The magnetic ground state containing the answer to the computation is reached by a stochastic process very similar to the thermal annealing of crystalline materials. We will discuss how these dynamics affect the expected reliability, speed, and energy dissipation of NML systems operating under these conditions. Next I will show how a slight change in the properties of the nanomagnets that make up a NML circuit can completely alter the dynamics by which computations take place. The addition of biaxial anisotropy to the magnetic energy landscape creates a metastable state along the hard axis of the nanomagnet. This metastability can be used to remove the stochastic nature of the computation and has large implications for reliability, speed, and energy dissipation which will all be discussed. The changes to NML operation by the addition of biaxial anisotropy introduce new challenges to realizing a commercially viable logic architecture. In the final chapter, I will discuss these challenges and talk about the architectural changes that are necessary to make a working NML circuit based on nanomagnets with biaxial anisotropy.

  2. Extraction of brewer's yeasts using different methods of cell disruption for practical biodiesel production.

    PubMed

    Řezanka, Tomáš; Matoulková, Dagmar; Kolouchová, Irena; Masák, Jan; Viden, Ivan; Sigler, Karel

    2015-05-01

    The methods of preparation of fatty acids from brewer's yeast and their use in the production of biofuels and in different branches of industry are described. Isolation of fatty acids from cell lipids includes cell disintegration (e.g., with liquid nitrogen, KOH, NaOH, petroleum ether, nitrogenous basic compounds, etc.) and subsequent processing of the extracted lipids, including analysis of fatty acids and computation of biodiesel properties such as viscosity, density, cloud point, and cetane number. Methyl esters obtained from brewer's waste yeast are well suited for the production of biodiesel. All 49 samples (7 breweries and 7 methods) meet the requirements for biodiesel quality in both the composition of fatty acids and the properties of the biofuel required by the US and EU standards.

  3. Granular Rayleigh-Taylor instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinningland, Jan Ludvig; Johnsen, Oistein; Flekkoey, Eirik G.

    2009-06-18

    A granular instability driven by gravity is studied experimentally and numerically. The instability arises as grains fall in a closed Hele-Shaw cell where a layer of dense granular material is positioned above a layer of air. The initially flat front defined by the grains subsequently develops into a pattern of falling granular fingers separated by rising bubbles of air. A transient coarsening of the front is observed right from the start by a finger merging process. The coarsening is later stabilized by new fingers growing from the center of the rising bubbles. The structures are quantified by means of Fourier analysis and quantitative agreement between experiment and computation is shown.

  4. Neuroimaging Techniques: a Conceptual Overview of Physical Principles, Contribution and History

    NASA Astrophysics Data System (ADS)

    Minati, Ludovico

    2006-06-01

    This paper is meant to provide a brief overview of the techniques currently used to image the brain and to study non-invasively its anatomy and function. After a historical summary in the first section, general aspects are outlined in the second section. The subsequent six sections survey, in order, computed tomography (CT), morphological magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), diffusion-tensor magnetic resonance imaging (DWI/DTI), positron emission tomography (PET), and electro- and magneto-encephalography (EEG/MEG) based imaging. Underlying physical principles, modelling and data processing approaches, as well as clinical and research relevance are briefly outlined for each technique. Given the breadth of the scope, there has been no attempt to be comprehensive. The ninth and final section outlines some aspects of active research in neuroimaging.

  5. Physical stabilization of low-molecular-weight amorphous drugs in the solid state: a material science approach.

    PubMed

    Qi, Sheng; McAuley, William J; Yang, Ziyi; Tipduangta, Pratchaya

    2014-07-01

    Use of the amorphous state is considered to be one of the most effective approaches for improving the dissolution and subsequent oral bioavailability of poorly water-soluble drugs. However, as the amorphous state has much higher physical instability in comparison with its crystalline counterpart, stabilization of amorphous drugs in a solid-dosage form presents a major challenge to formulators. The currently used approaches for stabilizing amorphous drugs are discussed in this article with respect to their preparation, mechanism of stabilization and limitations. In order to realize the potential of amorphous formulations, significant efforts are required to enable the prediction of formulation performance. This will facilitate the development of computational tools that can inform a rapid and rational formulation development process for amorphous drugs.

  6. Free energy calculations of short peptide chains using Adaptively Biased Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Karpusenka, Vadzim; Babin, Volodymyr; Roland, Christopher; Sagui, Celeste

    2008-10-01

    We performed a computational study of monomer peptides composed of methionine, alanine, leucine, glutamate, and lysine (all amino acids with helix-forming propensities), and of proline, glycine, tyrosine, serine, and arginine (which all have poor helix-forming propensities). The free energy landscapes as a function of the handedness and radius of gyration have been calculated using the recently introduced Adaptively Biased Molecular Dynamics (ABMD) method, combined with replica exchange, multiple walkers, and post-processing Umbrella Correction (UC). Minima that correspond to some of the left- and right-handed 3₁₀-, α- and π-helices were identified by secondary structure assignment methods (DSSP, Stride). The resulting free energy surface (FES) and the subsequent steered molecular dynamics (SMD) simulation results are in agreement with the empirical evidence of preferred secondary structures for the peptide chains considered.

  7. Fabrication of a negative PMMA master mold for soft-lithography by MeV ion beam lithography

    NASA Astrophysics Data System (ADS)

    Puttaraksa, Nitipon; Unai, Somrit; Rhodes, Michael W.; Singkarat, Kanda; Whitlow, Harry J.; Singkarat, Somsorn

    2012-02-01

    In this study, poly(methyl methacrylate) (PMMA) was investigated as a negative resist by irradiation with a high-fluence 2 MeV proton beam. The beam from a 1.7 MV Tandetron accelerator at the Plasma and Beam Physics Research Facility (PBP) of Chiang Mai University is shaped by a pair of computer-controlled L-shaped apertures which are used to expose rectangular pattern elements with 1-1000 μm side length. Repeated exposure of rectangular pattern elements allows a complex pattern to be built up. After subsequent development, the negative PMMA microstructure was used as a master mold for casting poly(dimethylsiloxane) (PDMS) following a standard soft-lithography process. The PDMS chip fabricated by this technique was demonstrated to be a microfluidic device.

  8. Automated analysis of clonal cancer cells by intravital imaging

    PubMed Central

    Coffey, Sarah Earley; Giedt, Randy J; Weissleder, Ralph

    2013-01-01

    Longitudinal analyses of single cell lineages over prolonged periods have been challenging particularly in processes characterized by high cell turn-over such as inflammation, proliferation, or cancer. RGB marking has emerged as an elegant approach for enabling such investigations. However, methods for automated image analysis continue to be lacking. Here, to address this, we created a number of different multicolored poly- and monoclonal cancer cell lines for in vitro and in vivo use. To classify these cells in large scale data sets, we subsequently developed and tested an automated algorithm based on hue selection. Our results showed that this method allows accurate analyses at a fraction of the computational time required by more complex color classification methods. Moreover, the methodology should be broadly applicable to both in vitro and in vivo analyses. PMID:24349895
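
    The core of a hue-selection step can be illustrated with a few lines of Python: RGB samples are converted to HSV and binned by hue. The hue bins and the synthetic "cell" colors below are illustrative assumptions and do not reproduce the published algorithm's exact selection rules.

    ```python
    import numpy as np
    from matplotlib.colors import rgb_to_hsv

    # Hue-based classification sketch: each RGB-marked cell color is mapped to a
    # hue bin, which serves as a clone label.

    def classify_by_hue(rgb, n_bins=6):
        """Assign each RGB sample (values in [0, 1]) to a hue bin."""
        hsv = rgb_to_hsv(rgb)
        hue = hsv[..., 0]                       # hue in [0, 1)
        return np.floor(hue * n_bins).astype(int) % n_bins

    if __name__ == "__main__":
        # Three synthetic cell colors: reddish, greenish, bluish.
        cells = np.array([[0.9, 0.2, 0.1],
                          [0.1, 0.8, 0.2],
                          [0.2, 0.3, 0.9]])
        print("hue-bin labels:", classify_by_hue(cells))
    ```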

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minati, Ludovico

    This paper is meant to provide a brief overview of the techniques currently used to image the brain and to study non-invasively its anatomy and function. After a historical summary in the first section, general aspects are outlined in the second section. The subsequent six sections survey, in order, computed tomography (CT), morphological magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), diffusion-tensor magnetic resonance imaging (DWI/DTI), positron emission tomography (PET), and electro- and magneto-encephalography (EEG/MEG) based imaging. Underlying physical principles, modelling and data processing approaches, as well as clinical and research relevance are briefly outlined for each technique. Given the breadth of the scope, there has been no attempt to be comprehensive. The ninth and final section outlines some aspects of active research in neuroimaging.

  10. A systematization of spectral data on the methanol molecule

    NASA Astrophysics Data System (ADS)

    Akhlyostin, A. Yu.; Voronina, S. S.; Lavrentiev, N. A.; Privezentsev, A. I.; Rodimova, O. B.; Fazliev, A. Z.

    2015-11-01

    Problems underlying a systematization of spectral data on the methanol molecule are formulated. Data on the energy levels and vacuum wavenumbers acquired from the published literature are presented in the form of information sources imported into the W@DIS information system. Sets of quantum numbers and labels used to describe the CH3OH molecular states are analyzed. The set of labels is different from universally accepted sets. A system of importing the data sources into W@DIS is outlined. The structure of databases characterizing transitions in an isolated CH3OH molecule is introduced and a digital library of the relevant published literature is discussed. A brief description is given of an imported data quality analysis and representation of the results obtained in the form of ontologies for subsequent computer processing.

  11. Rapidly Progressive Maxillary Atelectasis.

    PubMed

    Elkhatib, Ahmad; McMullen, Kyle; Hachem, Ralph Abi; Carrau, Ricardo L; Mastros, Nicholas

    2017-07-01

    Report of a patient with rapidly progressive maxillary atelectasis documented by sequential imaging. A 51-year-old man presented with left periorbital and retro-orbital pain associated with left nasal obstruction. An initial computed tomographic (CT) scan of the paranasal sinuses failed to reveal any significant abnormality. A subsequent CT scan, indicated for recurrence of symptoms 11 months later, showed significant maxillary atelectasis. An uncinectomy, maxillary antrostomy, and anterior ethmoidectomy resulted in a complete resolution of the symptoms. Chronic maxillary atelectasis is most commonly a consequence of chronic rhinosinusitis. All previous reports have indicated a chronic process but lacked documentation of the course of the disease. This report documents a case of rapidly progressive chronic maxillary atelectasis with CT scans that demonstrate changes in the maxillary sinus (from normal to atelectatic) within 11 months.

  12. Multiple Mechanisms for the Thermal Decomposition of Metallaisoxazolin-5-ones from Computational Investigations.

    PubMed

    Zhou, Chen-Chen; Hawthorne, M Frederick; Houk, K N; Jiménez-Osés, Gonzalo

    2017-08-18

    The thermal decompositions of metallaisoxazolin-5-ones containing Ir, Rh, or Co are investigated using density functional theory. The experimentally observed decarboxylations of these molecules are found to proceed through retro-(3+2)-cycloaddition reactions, generating the experimentally reported η²-side-bonded nitrile complexes. These intermediates can isomerize in situ to yield an η¹ nitrile complex. A competitive alternative pathway is also found where the decarboxylation happens concertedly with an aryl migration process, producing an η¹ isonitrile complex. Despite their comparable stability, these η¹-bonded species were not detected experimentally. The experimentally detected η²-side-bound species are likely involved in the subsequent C-H activation reactions with hydrocarbon solvents reported for some of these metallaisoxazolin-5-ones.

  13. Buffered coscheduling for parallel programming and enhanced fault tolerance

    DOEpatents

    Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM

    2006-01-31

    A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
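
    A toy single-process simulation of the buffered-coscheduling idea is sketched below: within a time interval each processor only buffers descriptors of the messages it intends to send, and at the strobe the control information is exchanged so every processor learns its incoming count for the next interval. The data structures and the example send plan are illustrative assumptions, not the patented implementation.

    ```python
    from collections import defaultdict

    # Buffered coscheduling sketch: accumulate control information per interval,
    # then perform a global exchange at the strobe.

    def run_interval(send_plan, n_procs):
        """send_plan[p] lists the destination ranks buffered by processor p."""
        # Phase 1: accumulate control information locally (counts per destination).
        control = {p: defaultdict(int) for p in range(n_procs)}
        for p, dests in send_plan.items():
            for d in dests:
                control[p][d] += 1
        # Phase 2: global exchange at the strobe -- each processor learns how many
        # messages it will receive in the subsequent time interval.
        incoming = {p: sum(control[q][p] for q in range(n_procs))
                    for p in range(n_procs)}
        return incoming

    if __name__ == "__main__":
        plan = {0: [1, 1, 2], 1: [2], 2: [0, 1], 3: []}
        print(run_interval(plan, n_procs=4))   # {0: 1, 1: 3, 2: 2, 3: 0}
    ```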

  14. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
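
    The logical-ring allreduce can be illustrated with the generic textbook schedule (reduce-scatter followed by allgather), simulated here in a single process without MPI. The code shows the idea of allreduce over a logical ring only; it does not reproduce the patented multi-ring, per-core scheme.

    ```python
    import numpy as np

    # Ring allreduce sketch: each "core" contributes a vector; after a
    # reduce-scatter pass and an allgather pass, every core holds the full
    # element-wise sum.

    def ring_allreduce(contributions):
        """Element-wise sum of all contributions, computed via a logical ring."""
        n = len(contributions)
        chunks = [np.array_split(c.astype(float), n) for c in contributions]
        # Reduce-scatter: after n-1 steps, core r holds the full sum of chunk (r+1) % n.
        for step in range(n - 1):
            sends = [chunks[i][(i - step) % n].copy() for i in range(n)]
            for r in range(n):
                chunks[r][(r - 1 - step) % n] += sends[(r - 1) % n]
        # Allgather: circulate the completed chunks so every core has every sum.
        for step in range(n - 1):
            sends = [chunks[i][(i + 1 - step) % n].copy() for i in range(n)]
            for r in range(n):
                chunks[r][(r - step) % n] = sends[(r - 1) % n]
        return [np.concatenate(c) for c in chunks]

    if __name__ == "__main__":
        data = [np.arange(8.0) + 10 * rank for rank in range(4)]   # 4 cores
        results = ring_allreduce(data)
        assert all(np.allclose(res, sum(data)) for res in results)
        print(results[0])
    ```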

  16. How and When Accentuation Influences Temporally Selective Attention and Subsequent Semantic Processing during On-Line Spoken Language Comprehension: An ERP Study

    ERIC Educational Resources Information Center

    Li, Xiao-qing; Ren, Gui-qin

    2012-01-01

    An event-related brain potentials (ERP) experiment was carried out to investigate how and when accentuation influences temporally selective attention and subsequent semantic processing during on-line spoken language comprehension, and how the effect of accentuation on attention allocation and semantic processing changed with the degree of…

  17. Optimization of soaking stage in technological process of wheat germination by hydroponic method when objective function is defined implicitly

    NASA Astrophysics Data System (ADS)

    Koneva, M. S.; Rudenko, O. V.; Usatikov, S. V.; Bugayets, N. A.; Tamova, M. Yu; Fedorova, M. A.

    2018-05-01

    An increase in the efficiency of the "numerical" technology for solving computational problems of parametric optimization of the technological process of hydroponic germination of wheat grains is considered, in a situation where the quality criteria are contradictory and some of them are given by implicit functions of many variables. Soaking, one of the main stages determining the time and quality of germinated wheat grain, is studied; during this stage the grain receives the amount of moisture and atmospheric oxygen required for germination and subsequently accumulates enzymes. A solution algorithm for this problem is suggested and implemented by means of the software packages Statistica v.10 and MathCAD v.15. The use of the proposed mathematical models describing the processes of hydroponic soaking of spring soft wheat varieties made it possible to determine optimal conditions of germination. The results of the investigations show that the type of aquatic environment used for soaking has a great influence on the process of water absorption and especially on the chemical composition of the germinated material. The use of the anolyte of electrochemically activated water (ECHA-water) shortens the process from 5.83 to 4 hours for the wheat variety «Altayskaya 105» and from 13 to 8.8 hours for «Pobla Runo».

  18. Method and apparatus of parallel computing with simultaneously operating stream prefetching and list prefetching engines

    DOEpatents

    Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan

    2012-12-11

    A prefetch system improves a performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine. The prefetch system operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data to be needed in subsequent clock cycles in the processor in response to the passed command.

  19. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  20. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  1. 3D Printout Models vs. 3D-Rendered Images: Which Is Better for Preoperative Planning?

    PubMed

    Zheng, Yi-xiong; Yu, Di-fei; Zhao, Jian-gang; Wu, Yu-lian; Zheng, Bin

    2016-01-01

    Correct interpretation of a patient's anatomy and the changes that occur secondary to a disease process is crucial in preoperative planning to ensure optimal surgical treatment. In this study, we presented 3 different pancreatic cancer cases to surgical residents in the form of 3D-rendered images and 3D-printed models to investigate which modality resulted in the most appropriate preoperative plan. We selected 3 cases that would require significantly different preoperative plans based on key features identifiable in the preoperative computed tomography imaging. 3D volume rendering and 3D printing were performed, respectively, to create the 2 different training modalities. A total of 30 first-year surgical residents were randomly divided into 2 groups. Besides traditional 2D computed tomography images, residents in group A (n = 15) reviewed 3D computer models, whereas residents in group B (n = 15) reviewed 3D-printed models. Both groups subsequently completed an examination, designed in-house, to assess the appropriateness of their preoperative plan and provide a numerical score of the quality of the surgical plan. Residents in group B showed significantly higher surgical-plan quality scores than residents in group A (76.4 ± 10.5 vs. 66.5 ± 11.2, p = 0.018). This difference was due in large part to a significant difference in knowledge of key surgical steps (22.1 ± 2.9 vs. 17.4 ± 4.2, p = 0.004) between the groups. All participants reported a high level of satisfaction with the exercise. Results from this study support our hypothesis that 3D-printed models improve the quality of surgical trainees' preoperative plans. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  2. MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisikalo, D. V.; Shematovich, V. I.; Gérard, J.-C.

    2015-01-01

    Cometary atmospheres are produced by the outgassing of material, mainly H2O, CO, and CO2, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDFs) of the metastable O(1D) and O(1S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have a severely non-Maxwellian EDF, which results in broad spectral lines in which suprathermal broadening dominates over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape depends not only on the exothermicity of the photochemical production mechanisms, but also on thermalization due to elastic collisions, which reduces the width of the emission line coming from the O(1D) level, which has a longer lifetime.
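
    The Monte Carlo idea behind the line-shape calculation can be illustrated with the minimal sketch below: sample excess energies for nascent O(1D) atoms, assume isotropic, collisionless emission, and histogram the Doppler-shifted wavelengths into a non-Gaussian red-line profile. The excess-energy values and branching fractions are placeholders, not the photochemical inputs of the paper.

```python
# Minimal Monte Carlo sketch of a Doppler line profile for nascent O(1D) atoms.
# Excess energies and branching fractions are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
m_O = 16 * 1.66054e-27            # oxygen atom mass, kg
c = 2.998e8                       # speed of light, m/s
lambda0 = 630.0e-9                # rest wavelength of the O(1D) red line, m

# Hypothetical excess energies (eV) carried by the nascent O(1D) atom.
excess_eV = rng.choice([0.5, 1.3, 3.0], size=200_000, p=[0.5, 0.35, 0.15])
speeds = np.sqrt(2 * excess_eV * 1.602e-19 / m_O)          # m/s

# Isotropic emission: line-of-sight velocity = speed * cos(theta),
# with cos(theta) drawn uniformly from [-1, 1].
v_los = speeds * rng.uniform(-1.0, 1.0, size=speeds.size)

# Doppler-shifted wavelengths give a (non-Gaussian) line profile.
wavelengths = lambda0 * (1.0 + v_los / c)
profile, edges = np.histogram(wavelengths, bins=200, density=True)
above_half = np.nonzero(profile > profile.max() / 2)[0]
fwhm_m = edges[above_half[-1] + 1] - edges[above_half[0]]
print(f"approximate FWHM of the red line: {fwhm_m / lambda0 * c / 1e3:.1f} km/s")
```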

  3. Efficient experimental design for uncertainty reduction in gene regulatory networks.

    PubMed

    Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R

    2015-01-01

    An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.
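
    The MOCU-based selection rule can be illustrated on a toy uncertainty class, as in the sketch below: compute the current MOCU, then for each candidate experiment compute the expected remaining MOCU over its possible outcomes and conduct the experiment with the smallest value first. The cost table, priors, and experiment-to-parameter mapping are invented stand-ins for the gene-regulatory-network models of the paper.

```python
# Toy MOCU-based experiment selection. The cost table, priors, and the mapping
# from experiments to unknown parameters are invented for illustration.
import itertools
import numpy as np

# Uncertainty class: 4 candidate networks indexed by two unknown binary
# parameters (theta1, theta2); cost[network, intervention].
thetas = list(itertools.product([0, 1], repeat=2))
prior = np.full(len(thetas), 0.25)
cost = np.array([[0.2, 0.9, 0.6],
                 [0.8, 0.3, 0.5],
                 [0.7, 0.6, 0.2],
                 [0.4, 0.8, 0.3]])

def mocu(p):
    """Expected cost increase of the robust intervention over the optimal one."""
    robust = np.argmin(p @ cost)                 # best intervention on average
    return float(p @ (cost[:, robust] - cost.min(axis=1)))

def expected_remaining_mocu(experiment):
    """Experiment i reveals the true value of parameter theta_i."""
    total = 0.0
    for outcome in (0, 1):
        mask = np.array([t[experiment] == outcome for t in thetas], dtype=float)
        p_outcome = float(prior @ mask)
        if p_outcome > 0:
            posterior = prior * mask / p_outcome
            total += p_outcome * mocu(posterior)
    return total

scores = {e: expected_remaining_mocu(e) for e in (0, 1)}
print("current MOCU:", round(mocu(prior), 3))
print("expected remaining MOCU per experiment:", scores)
print("conduct experiment", min(scores, key=scores.get), "first")
```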

  4. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515

  5. Detection of a gravitropism phenotype in glutamate receptor-like 3.3 mutants of Arabidopsis thaliana using machine vision and computation.

    PubMed

    Miller, Nathan D; Durham Brooks, Tessa L; Assadi, Amir H; Spalding, Edgar P

    2010-10-01

    Gene disruption frequently produces no phenotype in the model plant Arabidopsis thaliana, complicating studies of gene function. Functional redundancy between gene family members is one common explanation but inadequate detection methods could also be responsible. Here, newly developed methods for automated capture and processing of time series of images, followed by computational analysis employing modified linear discriminant analysis (LDA) and wavelet-based differentiation, were employed in a study of mutants lacking the Glutamate Receptor-Like 3.3 gene. Root gravitropism was selected as the process to study with high spatiotemporal resolution because the ligand-gated Ca(2+)-permeable channel encoded by GLR3.3 may contribute to the ion fluxes associated with gravity signal transduction in roots. Time series of root tip angles were collected from wild type and two different glr3.3 mutants across a grid of seed-size and seedling-age conditions previously found to be important to gravitropism. Statistical tests of average responses detected no significant difference between populations, but LDA separated both mutant alleles from the wild type. After projecting the data onto LDA solution vectors, glr3.3 mutants displayed greater population variance than the wild type in all four conditions. In three conditions the projection means also differed significantly between mutant and wild type. Wavelet analysis of the raw response curves showed that the LDA-detected phenotypes related to an early deceleration and subsequent slower-bending phase in glr3.3 mutants. These statistically significant, heritable, computation-based phenotypes generated insight into functions of GLR3.3 in gravitropism. The methods could be generally applicable to the study of phenotypes and therefore gene function.
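
    The discriminant-projection step can be sketched as below: project time series of root tip angles onto a linear discriminant axis and compare the projected means and variances of the two populations. The synthetic response curves and the use of scikit-learn's standard LDA (rather than the modified LDA of the paper) are simplifying assumptions.

```python
# Sketch of the discriminant-projection step on synthetic gravitropic
# response curves; scikit-learn's standard LDA stands in for the modified LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 40)                        # hours after reorientation

def responses(n, rate, noise):
    """Synthetic tip-angle time series decaying toward the vertical."""
    return 90 * np.exp(-rate * t) + rng.normal(0, noise, size=(n, t.size))

wild_type = responses(60, rate=0.35, noise=4.0)
mutant = responses(60, rate=0.30, noise=6.0)     # slightly slower, more variable

X = np.vstack([wild_type, mutant])
y = np.array([0] * len(wild_type) + [1] * len(mutant))

projected = LinearDiscriminantAnalysis().fit(X, y).transform(X).ravel()
for label, name in ((0, "wild type"), (1, "glr3.3")):
    scores = projected[y == label]
    print(f"{name}: mean projection {scores.mean():+.2f}, variance {scores.var():.2f}")
```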

  6. Detection of a Gravitropism Phenotype in glutamate receptor-like 3.3 Mutants of Arabidopsis thaliana Using Machine Vision and Computation

    PubMed Central

    Miller, Nathan D.; Durham Brooks, Tessa L.; Assadi, Amir H.; Spalding, Edgar P.

    2010-01-01

    Gene disruption frequently produces no phenotype in the model plant Arabidopsis thaliana, complicating studies of gene function. Functional redundancy between gene family members is one common explanation but inadequate detection methods could also be responsible. Here, newly developed methods for automated capture and processing of time series of images, followed by computational analysis employing modified linear discriminant analysis (LDA) and wavelet-based differentiation, were employed in a study of mutants lacking the Glutamate Receptor-Like 3.3 gene. Root gravitropism was selected as the process to study with high spatiotemporal resolution because the ligand-gated Ca2+-permeable channel encoded by GLR3.3 may contribute to the ion fluxes associated with gravity signal transduction in roots. Time series of root tip angles were collected from wild type and two different glr3.3 mutants across a grid of seed-size and seedling-age conditions previously found to be important to gravitropism. Statistical tests of average responses detected no significant difference between populations, but LDA separated both mutant alleles from the wild type. After projecting the data onto LDA solution vectors, glr3.3 mutants displayed greater population variance than the wild type in all four conditions. In three conditions the projection means also differed significantly between mutant and wild type. Wavelet analysis of the raw response curves showed that the LDA-detected phenotypes related to an early deceleration and subsequent slower-bending phase in glr3.3 mutants. These statistically significant, heritable, computation-based phenotypes generated insight into functions of GLR3.3 in gravitropism. The methods could be generally applicable to the study of phenotypes and therefore gene function. PMID:20647506

  7. Computational identification of potential multi-drug combinations for reduction of microglial inflammation in Alzheimer disease

    PubMed Central

    Anastasio, Thomas J.

    2015-01-01

    Like other neurodegenerative diseases, Alzheimer Disease (AD) has a prominent inflammatory component mediated by brain microglia. Reducing microglial inflammation could potentially halt or at least slow the neurodegenerative process. A major challenge in the development of treatments targeting brain inflammation is the sheer complexity of the molecular mechanisms that determine whether microglia become inflammatory or take on a more neuroprotective phenotype. The process is highly multifactorial, raising the possibility that a multi-target/multi-drug strategy could be more effective than conventional monotherapy. This study takes a computational approach in finding combinations of approved drugs that are potentially more effective than single drugs in reducing microglial inflammation in AD. This novel approach exploits the distinct advantages of two different computer programming languages, one imperative and the other declarative. Existing programs written in both languages implement the same model of microglial behavior, and the input/output relationships of both programs agree with each other and with data on microglia over an extensive test battery. Here the imperative program is used efficiently to screen the model for the most efficacious combinations of 10 drugs, while the declarative program is used to analyze in detail the mechanisms of action of the most efficacious combinations. Of the 1024 possible drug combinations, the simulated screen identifies only 7 that are able to move simulated microglia at least 50% of the way from a neurotoxic to a neuroprotective phenotype. Subsequent analysis shows that of the 7 most efficacious combinations, 2 stand out as superior both in strength and reliability. The model offers many experimentally testable and therapeutically relevant predictions concerning effective drug combinations and their mechanisms of action. PMID:26097457
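
    The exhaustive screen over the 2^10 drug combinations can be sketched as below, with a deliberately crude linear scoring model standing in for the imperative microglia simulation; the drug names, per-drug weights, and threshold handling are illustrative only.

```python
# Sketch of the combinatorial screen over all 2**10 on/off drug combinations.
# The linear scoring model and per-drug weights are invented stand-ins for the
# microglia simulation used in the paper.
from itertools import product

DRUGS = [f"drug{i}" for i in range(10)]
EFFECT = dict(zip(DRUGS, [-0.22, -0.15, 0.05, -0.30, -0.10,
                          0.02, -0.18, -0.05, -0.25, 0.08]))

def phenotype_shift(combination):
    """Fraction of the way from a neurotoxic (0) to a neuroprotective (1) phenotype."""
    total = sum(EFFECT[d] for d in combination)
    return min(1.0, max(0.0, -total))            # clamp to [0, 1]

hits = []
for mask in product([0, 1], repeat=len(DRUGS)):
    combo = tuple(d for d, on in zip(DRUGS, mask) if on)
    if phenotype_shift(combo) >= 0.5:
        hits.append(combo)

print(f"{len(hits)} of {2 ** len(DRUGS)} combinations reach the 50% threshold")
print("example hit:", hits[0] if hits else None)
```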

  8. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and subsequently the colors are computed using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics. In combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
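
    The histogram-based variant of the representative-color step might look like the sketch below: quantize the RGB cube into coarse bins, accumulate pixel counts over all images retrieved for a term, and return the center of the most populated bin. The synthetic images stand in for real search-engine results, and the bin count is an arbitrary choice.

```python
# Histogram-based representative-color extraction over a set of images.
# Synthetic 'grass' images replace real search-engine results.
import numpy as np

rng = np.random.default_rng(2)

def synthetic_image(mean_rgb, size=(64, 64)):
    img = rng.normal(mean_rgb, 25, size=size + (3,))
    return np.clip(img, 0, 255).astype(np.uint8)

def representative_color(images, bins_per_channel=8):
    counts = np.zeros((bins_per_channel,) * 3, dtype=np.int64)
    for img in images:
        # Map each pixel to a coarse (r, g, b) bin and count it.
        q = (img.reshape(-1, 3).astype(int) * bins_per_channel) // 256
        np.add.at(counts, (q[:, 0], q[:, 1], q[:, 2]), 1)
    r, g, b = np.unravel_index(np.argmax(counts), counts.shape)
    bin_width = 256 // bins_per_channel
    return tuple(int(c * bin_width + bin_width // 2) for c in (r, g, b))

grass_images = [synthetic_image((60, 140, 55)) for _ in range(20)]
print("representative RGB for 'grass':", representative_color(grass_images))
```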

  9. Quantum computation for solving linear systems

    NASA Astrophysics Data System (ADS)

    Cao, Yudong

    Quantum computation is a subject born out of the combination of physics and computer science. It studies how the laws of quantum mechanics can be exploited to perform computations much more efficiently than current computers (termed classical computers, as opposed to quantum computers). The thesis starts by introducing ideas from quantum physics and theoretical computer science and, based on these ideas, introduces the basic concepts in quantum computing. These introductory discussions are intended for non-specialists to obtain the essential knowledge needed for understanding the new results presented in the subsequent chapters. After introducing the basics of quantum computing, we focus on the recently proposed quantum algorithm for linear systems. The new results include i) special instances of quantum circuits that can be implemented using current experimental resources; ii) detailed quantum algorithms that are suitable for a broader class of linear systems. We show that for some particular problems the quantum algorithm is able to achieve exponential speedup over their classical counterparts.

  10. Orientation During Initial Learning and Subsequent Discrimination of Faces

    NASA Technical Reports Server (NTRS)

    Cohen, Malcolm M.; Holton, Emily M. (Technical Monitor)

    1997-01-01

    Discrimination of facial features degrades with stimulus rotation (e.g., the "Margaret Thatcher" effect). Thirty-two observers learned to discriminate between two upright, or two inverted, faces. Images, erect and rotated by ±45°, ±90°, ±135°, and 180° about the line of sight, were presented on a computer screen. Initial discriminative reaction times increased with stimulus rotation only for observers who learned the upright faces. Orientation during learning is critical in identifying faces subsequently seen at different orientations.

  11. Applying a CAD-generated imaging marker to assess short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin

    2018-02-01

    Although it remains controversial whether using computer-aided detection (CAD) helps improve radiologists' performance in reading and interpreting mammograms, owing to higher false-positive detection rates, the objective of this study is to investigate and test a new hypothesis: that CAD-generated false-positives, in particular the bilateral summation of false-positives, are a potential imaging marker associated with short-term breast cancer risk. An image dataset involving negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the next subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms. Several features were extracted from the CAD scheme, including detection seeds, the total number of false-positive regions, the average detection score, and the sum of detection scores in CC and MLO view images. The features computed from the two bilateral images of the left and right breasts, from either the CC or MLO view, were then combined. In order to predict the likelihood of each testing case being positive in the next subsequent screening, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, the study showed that CAD-generated false-positives might provide a new quantitative imaging marker to help assess short-term breast cancer risk.
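
    The evaluation pipeline can be sketched as below: a logistic regression model over CAD-derived features is scored with leave-one-case-out cross-validation and summarized by the area under the ROC curve. The randomly generated features and reduced case counts are placeholders for the bilateral false-positive summations computed from the prior negative mammograms.

```python
# Leave-one-case-out evaluation of a logistic regression model on CAD-derived
# features; the random features and case counts are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
n_pos, n_neg = 80, 120                     # reduced stand-in for the 402 / 642 cases
y = np.r_[np.ones(n_pos), np.zeros(n_neg)]

# Hypothetical features: bilateral sum of false-positive regions and
# bilateral sum of CAD detection scores.
X = np.c_[rng.poisson(5 + 2 * y), rng.normal(1.0 + 0.3 * y, 0.8)]

# Each case is scored by a model trained on all the other cases.
scores = cross_val_predict(LogisticRegression(), X, y,
                           cv=LeaveOneOut(), method="predict_proba")[:, 1]
print(f"leave-one-case-out AUC: {roc_auc_score(y, scores):.3f}")
```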

  12. Improved failure prediction in forming simulations through pre-strain mapping

    NASA Astrophysics Data System (ADS)

    Upadhya, Siddharth; Staupendahl, Daniel; Heuse, Martin; Tekkaya, A. Erman

    2018-05-01

    The sensitivity of the sheared edges of advanced high-strength steel (AHSS) sheets to cracking during subsequent forming operations, and the difficulty of predicting this failure with any degree of accuracy using conventionally used FLC-based failure criteria, is a major problem plaguing the manufacturing industry. A possible method that allows an accurate prediction of edge cracks is to simulate the shearing operation and carry this model over into a subsequent forming simulation. But even with an efficient combination of a solid-element shearing simulation and a shell-element forming simulation, the need for a fine mesh and the resulting high computation time make this approach not viable from an industry point of view. The crack sensitivity of sheared edges is due to work hardening in the shear-affected zone (SAZ). A method to predict the plastic strains induced by the shearing process is to measure the hardness after shearing and calculate the ultimate tensile strength as well as the flow stress. In combination with the flow curve, the relevant strain data can be obtained. To eliminate the time-intensive shearing simulation otherwise necessary to obtain the strain data in the SAZ, a new pre-strain mapping approach is proposed. The pre-strains to be mapped are determined from hardness values obtained in the proximity of the sheared edge. To investigate the performance of this approach, the ISO/TS 16630 hole expansion test was simulated with shell elements for different materials, with the pre-strains mapped onto the edge of the hole. The hole expansion ratios obtained from such pre-strain-mapped simulations are in close agreement with the experimental results. Furthermore, the simulations can be carried out with no increase in computation time, making this an interesting and viable solution for predicting edge failure due to shearing.
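
    The hardness-to-pre-strain step can be sketched with two standard relations: a Tabor-type factor linking Vickers hardness to flow stress, and a Hollomon flow curve inverted for the equivalent plastic pre-strain. The material constants and hardness profile below are illustrative, not the calibration used in the paper for any specific AHSS grade.

```python
# Hardness-to-pre-strain conversion via a Tabor-type factor and an inverted
# Hollomon flow curve. All material constants are illustrative.
import numpy as np

K, n = 1200.0, 0.15          # hypothetical Hollomon parameters: sigma = K * eps**n (MPa)
TABOR = 3.0                  # flow stress ~ HV (in MPa) / 3
HV_TO_MPA = 9.807            # 1 kgf/mm^2 = 9.807 MPa

def pre_strain_from_hardness(hv):
    """Equivalent plastic pre-strain inferred from Vickers hardness HV."""
    flow_stress = hv * HV_TO_MPA / TABOR
    return (flow_stress / K) ** (1.0 / n)

# Hypothetical hardness profile moving away from the sheared edge.
distance_mm = np.array([0.05, 0.15, 0.30, 0.60, 1.00])
hardness_hv = np.array([320.0, 300.0, 275.0, 250.0, 235.0])

for d, hv in zip(distance_mm, hardness_hv):
    print(f"{d:4.2f} mm from edge: HV {hv:5.1f} -> pre-strain {pre_strain_from_hardness(hv):.3f}")
```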

  13. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools applied in medical imaging informatics. To take advantage of both conventional and deep learning approaches, this study aims to investigate the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighbor (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05) and the odds ratio was 4.60 with a 95% confidence interval of [3.16, 6.70]. The study demonstrated that this new LPP-based feature regeneration approach produced an optimal feature vector and yielded improved performance in predicting the risk of women having breast cancer detected in the next subsequent mammography screening.
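
    A minimal version of the LPP-plus-KNN pipeline is sketched below: build a nearest-neighbor affinity graph, solve the generalized eigenproblem of locality preserving projection, project 44 synthetic features down to 4, and score a KNN classifier by cross-validation. The data, graph parameters, and validation scheme are simplifying assumptions rather than the study's implementation.

```python
# Locality preserving projection (LPP) followed by a KNN classifier.
# Synthetic data and all parameter choices are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
n, d, k_components = 500, 44, 4
y = np.r_[np.ones(250), np.zeros(250)]
X = rng.normal(0.0, 1.0, (n, d)) + 0.6 * y[:, None] * rng.normal(0.5, 0.1, d)

def lpp(X, n_components, n_neighbors=10):
    """LPP: keep eigenvectors of X^T L X a = lambda X^T D X a with smallest eigenvalues."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    idx = np.argsort(dists, axis=1)[:, 1:n_neighbors + 1]      # k nearest neighbors
    sigma = np.median(dists[np.arange(len(X))[:, None], idx])  # heat-kernel width
    W = np.zeros_like(dists)
    for i, neighbors in enumerate(idx):
        W[i, neighbors] = np.exp(-dists[i, neighbors] ** 2 / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                      # symmetric affinity graph
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, eigvecs = eigh(X.T @ L @ X, X.T @ D @ X)
    return eigvecs[:, :n_components]                            # d x k projection matrix

X_lpp = X @ lpp(X, k_components)
knn = KNeighborsClassifier(n_neighbors=5)
print(f"CV accuracy, 44 raw features : {cross_val_score(knn, X, y, cv=5).mean():.3f}")
print(f"CV accuracy, 4 LPP features  : {cross_val_score(knn, X_lpp, y, cv=5).mean():.3f}")
```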

  14. Internode data communications in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

  15. Internode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

  16. Vertebral Artery Dissection Causing Stroke After Trampoline Use.

    PubMed

    Casserly, Courtney S; Lim, Rodrick K; Prasad, Asuri Narayan

    2015-11-01

    The aim of this study was to report a case of a 4-year-old boy who had been playing on the trampoline and presented to the emergency department (ED) with vomiting and ataxia, and had a vertebral artery dissection with subsequent posterior circulation infarcts. This study is a chart review. The patient presented to the emergency department with a 4-day history of vomiting and gait unsteadiness. A computed tomography scan of his head revealed multiple left cerebellar infarcts. Subsequent magnetic resonance imaging/magnetic resonance angiogram of his head and neck demonstrated multiple infarcts involving the left cerebellum, bilateral thalami, and left occipital lobe. A computed tomography angiogram confirmed the presence of a left vertebral artery dissection. Vertebral artery dissection is a relatively common cause of stroke in the pediatric age group. Trampoline use has been associated with significant risk of injury to the head and neck. Patients who are small and/or young are most at risk. In this case, minor trauma secondary to trampoline use could be a possible mechanism for vertebral artery dissection and subsequent strokes. The association in this case warrants careful consideration because trampoline use could pose a significant risk to pediatric users.

  17. Characterization of ion-assisted induced absorption in A-Si thin-films used for multivariate optical computing

    NASA Astrophysics Data System (ADS)

    Nayak, Aditya B.; Price, James M.; Dai, Bin; Perkins, David; Chen, Ding Ding; Jones, Christopher M.

    2015-06-01

    Multivariate optical computing (MOC), an optical sensing technique for analog calculation, allows direct and robust measurement of chemical and physical properties of complex fluid samples in high-pressure/high-temperature (HP/HT) downhole environments. The core of this MOC technology is the integrated computational element (ICE), an optical element with a wavelength-dependent transmission spectrum designed to allow the detector to respond sensitively and specifically to the analytes of interest. A key differentiator of this technology is that it uses all of the information present in the broadband optical spectrum to determine the proportion of the analyte present in a complex fluid mixture. The detection methodology is photometric in nature; therefore, this technology does not require a spectrometer to measure and record a spectrum or a computer to perform calculations on the recorded optical spectrum. The integrated computational element is a thin-film optical element with a specific optical response function designed for each analyte. The optical response function is achieved by fabricating alternating layers of high-index (a-Si) and low-index (SiO2) thin films onto a transparent substrate (BK7 glass) using traditional thin-film manufacturing processes (e.g., ion-assisted e-beam vacuum deposition). Proprietary software and processes are used to control the thickness and material properties, including the optical constants of the materials, during deposition to achieve the desired optical response function. Ion-assisted deposition is useful for controlling the densification of the film, its stoichiometry, and the material optical constants, as well as for achieving high deposition growth rates and moisture-stable films. However, the ion source can induce undesirable absorption in the film and subsequently modify the optical constants of the material during the ramp-up and stabilization periods of the e-gun and ion source, respectively. This paper characterizes the unwanted absorption in the a-Si thin film using advanced thin-film metrology methods, including spectroscopic ellipsometry and Fourier transform infrared (FTIR) spectroscopy. The resulting analysis identifies a fundamental mechanism contributing to this absorption and a method for minimizing and accounting for the unwanted absorption in the thin film such that the exact optical response function can be achieved.
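
    The link between the layer stack and its optical response function can be illustrated with a standard transfer-matrix calculation at normal incidence, as sketched below; a small imaginary index on the a-Si layers stands in for the ion-induced absorption discussed above. The refractive indices, thicknesses, and dispersionless treatment are illustrative assumptions, not the proprietary design process.

```python
# Transfer-matrix sketch: transmission of an alternating a-Si/SiO2 stack at
# normal incidence; a small imaginary index on the a-Si layers stands in for
# ion-induced absorption. Indices and thicknesses are illustrative.
import numpy as np

def layer_matrix(n_complex, thickness_nm, wavelength_nm):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n_complex * thickness_nm / wavelength_nm
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n_complex],
                     [1j * n_complex * np.sin(delta), np.cos(delta)]])

def transmittance(layers, wavelength_nm, n_in=1.0, n_sub=1.52):
    """layers: list of (complex refractive index, thickness in nm) pairs."""
    M = np.eye(2, dtype=complex)
    for n_c, t in layers:
        M = M @ layer_matrix(n_c, t, wavelength_nm)
    (m11, m12), (m21, m22) = M
    t_amp = 2 * n_in / (n_in * m11 + n_in * n_sub * m12 + m21 + n_sub * m22)
    return (n_sub / n_in) * abs(t_amp) ** 2

# Five periods of weakly absorbing a-Si and SiO2 on a BK7-like substrate.
stack = [(3.6 + 0.02j, 120.0), (1.46 + 0.0j, 180.0)] * 5
for wl in (1200, 1600, 2000, 2400):
    print(f"{wl} nm: T = {transmittance(stack, wl):.3f}")
```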

  18. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2014-01-07

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
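
    The intranode pattern described above can be modeled in a few lines: the sender posts a message into a pre-established per-process buffer inside a shared region without checking whether the receiver has been initialized, and the receiver retrieves a pointer to its buffer on initialization and drains it. A plain dict stands in for the shared-memory region; all names are illustrative.

```python
# Toy model of intranode messaging through pre-established per-process
# buffers in a shared region. A dict stands in for shared memory.

class SharedMemoryRegion:
    """Allocated by the first process at initialization time."""
    def __init__(self, process_ids):
        # One pre-established message buffer per process on the node.
        self.message_buffers = {pid: [] for pid in process_ids}

    def buffer_pointer(self, process_id):
        return self.message_buffers[process_id]

def send(region, destination_pid, message):
    # No check whether the destination process has been initialized yet.
    region.buffer_pointer(destination_pid).append(message)

class Process:
    def __init__(self, pid, region):
        self.pid = pid
        self.inbox = region.buffer_pointer(pid)   # retrieve pointer on init

    def drain(self):
        messages, self.inbox[:] = list(self.inbox), []
        return messages

region = SharedMemoryRegion(process_ids=["rank-0", "rank-1"])
send(region, "rank-1", "hello before rank-1 exists")
late_starter = Process("rank-1", region)          # initialized after the send
print(late_starter.drain())
```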

  19. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2013-07-23

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  20. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    USGS Publications Warehouse

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs; however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.
