Sample records for simple computational approach

  1. Simple and Effective Algorithms: Computer-Adaptive Testing.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  2. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2015-01-01

    A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the velocity computations. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, namely phase shift, is successfully overcome.
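The velocity and acceleration steps described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the two-step strain-to-deflection theory and the ARMA frequency estimator are omitted, and a deflection time history and a known modal frequency are assumed as inputs.

```python
# Illustrative sketch (not the authors' code): given a sampled deflection
# time history, estimate velocity with a central difference, and estimate
# acceleration under the simple-harmonic-motion assumption a = -(2*pi*f)^2 * x.
import math

def central_difference_velocity(x, dt):
    """Velocity by central differences; one-sided at the ends."""
    n = len(x)
    v = [0.0] * n
    for i in range(1, n - 1):
        v[i] = (x[i + 1] - x[i - 1]) / (2.0 * dt)
    v[0] = (x[1] - x[0]) / dt          # forward difference at the start
    v[-1] = (x[-1] - x[-2]) / dt       # backward difference at the end
    return v

def shm_acceleration(x, freq_hz):
    """Acceleration assuming simple harmonic motion at a known frequency."""
    w = 2.0 * math.pi * freq_hz
    return [-w * w * xi for xi in x]

# Example: a 2 Hz sinusoidal deflection sampled at 100 Hz.
dt, f = 0.01, 2.0
t = [i * dt for i in range(200)]
x = [math.sin(2.0 * math.pi * f * ti) for ti in t]
v = central_difference_velocity(x, dt)
a = shm_acceleration(x, f)
```

Unlike a backward difference, the central difference introduces no phase shift, at the cost of needing one future sample.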

  3. Computer Page: Computer Studies for All--A Wider Approach

    ERIC Educational Resources Information Center

    Edens, A. J.

    1975-01-01

    An approach to teaching children aged 12 through 14 a substantial course about computers is described. Topics covered include simple algorithms, information and communication, man-machine communication, the concept of a system, the definition of a system, and the use of files. (SD)

  4. A Novel Shape Parameterization Approach

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.

  5. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer, and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective, and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on cancer molecular prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
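The gene-screening idea, ranking genes by how well they separate two classes and building a one-gene decision rule, can be illustrated with a toy sketch. The scoring function and data below are hypothetical stand-ins; the paper's actual method uses rough set theory rather than this signal-to-noise score.

```python
# Hypothetical sketch of the gene-screening idea (not the authors' rough-set
# implementation): rank genes by a signal-to-noise score between classes and
# build a one-gene threshold classifier from the top gene.
import statistics

def snr_score(vals_a, vals_b):
    """Golub-style signal-to-noise ratio between two classes."""
    ma, mb = statistics.mean(vals_a), statistics.mean(vals_b)
    sa, sb = statistics.stdev(vals_a), statistics.stdev(vals_b)
    return abs(ma - mb) / (sa + sb)

def best_gene(expr_a, expr_b):
    """expr_a/expr_b: {gene: [expression values]} for each class."""
    return max(expr_a, key=lambda g: snr_score(expr_a[g], expr_b[g]))

# Toy data: gene "g2" separates the classes cleanly, "g1" does not.
tumor  = {"g1": [1.0, 1.2, 0.9, 1.1], "g2": [5.0, 5.2, 4.8, 5.1]}
normal = {"g1": [1.1, 0.8, 1.0, 1.2], "g2": [1.0, 1.1, 0.9, 1.2]}
g = best_gene(tumor, normal)
threshold = (statistics.mean(tumor[g]) + statistics.mean(normal[g])) / 2.0
rule = lambda sample: "tumor" if sample[g] > threshold else "normal"
```

The resulting rule ("call tumor if g2 exceeds the midpoint of the class means") is interpretable in exactly the sense the abstract emphasizes.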

  6. Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2000-01-01

    This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.

  7. Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2000-01-01

    This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.

  8. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    ERIC Educational Resources Information Center

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  9. Computational medicinal chemistry in fragment-based drug discovery: what, how and when.

    PubMed

    Rabal, Obdulia; Urbano-Cuadrado, Manuel; Oyarzabal, Julen

    2011-01-01

    The use of fragment-based drug discovery (FBDD) has increased in the last decade due to the encouraging results obtained to date. In this scenario, computational approaches, together with experimental information, play an important role to guide and speed up the process. By default, FBDD is generally considered as a constructive approach. However, such additive behavior is not always present, therefore, simple fragment maturation will not always deliver the expected results. In this review, computational approaches utilized in FBDD are reported together with real case studies, where applicability domains are exemplified, in order to analyze them, and then, maximize their performance and reliability. Thus, a proper use of these computational tools can minimize misleading conclusions, keeping the credit on FBDD strategy, as well as achieve higher impact in the drug-discovery process. FBDD goes one step beyond a simple constructive approach. A broad set of computational tools: docking, R group quantitative structure-activity relationship, fragmentation tools, fragments management tools, patents analysis and fragment-hopping, for example, can be utilized in FBDD, providing a clear positive impact if they are utilized in the proper scenario - what, how and when. An initial assessment of additive/non-additive behavior is a critical point to define the most convenient approach for fragments elaboration.

  10. A Term Project for a Course on Computer Forensics

    ERIC Educational Resources Information Center

    Harrison, Warren

    2006-01-01

    The typical approach to creating an examination disk for exercises and projects in a course on computer forensics is for the instructor to populate a piece of media with evidence to be retrieved. While such an approach supports the simple use of forensic tools, in many cases the use of an instructor-developed examination disk avoids utilizing some…

  11. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
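The first step, computing starting design variables by least-squares fitting, can be illustrated in miniature. The quadratic basis and sample data below are assumptions for illustration only; the actual jig-shape fit is a surface fit over the aircraft geometry.

```python
# Minimal sketch of the least-squares fitting step (the actual jig-shape
# surface fit is not reproduced): fit coefficients c0 + c1*x + c2*x^2 to
# sampled shape data by solving the 3x3 normal equations directly.

def lstsq_quadratic(xs, ys):
    """Solve A^T A c = A^T y for the basis [1, x, x^2] by Gaussian elimination."""
    m = [[0.0] * 4 for _ in range(3)]   # augmented normal-equation matrix
    for x, y in zip(xs, ys):
        phi = [1.0, x, x * x]
        for i in range(3):
            for j in range(3):
                m[i][j] += phi[i] * phi[j]
            m[i][3] += phi[i] * y
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for j in range(col, 4):
                m[r][j] -= f * m[col][j]
    # Back substitution.
    c = [0.0] * 3
    for i in (2, 1, 0):
        c[i] = (m[i][3] - sum(m[i][j] * c[j] for j in range(i + 1, 3))) / m[i][i]
    return c

# Recover a known quadratic shape from samples.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 + 3.0 * x - 1.0 * x * x for x in xs]
c = lstsq_quadratic(xs, ys)
```

The fitted coefficients would then serve as the starting design variables for the numerical optimizer in the second step.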

  12. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  13. Neuromorphic Computing: A Post-Moore's Law Complementary Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D; Birdwell, John Douglas; Dean, Mark

    2016-01-01

    We describe our approach to post-Moore's law computing with three neuromorphic computing models that share a RISC philosophy, featuring simple components combined with a flexible and programmable structure. We envision these to be leveraged as co-processors, or as data filters to provide in situ data analysis in supercomputing environments.

  14. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help to understand the structure of partons. The longitudinal portion of the distribution functions is given by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained over the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. The agreement with experimental data provides a robust confirmation of the simple statistical model presented here.
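The record does not give the functional form, but quantum-statistical parton models of this type typically assign Fermi-Dirac shapes to quark distributions and a Bose-Einstein shape to the gluon; a representative parameterization (illustrative, not necessarily this paper's) is:

```latex
% Representative shapes used in quantum-statistical parton models
% (illustrative; the specific parameterization of this paper may differ):
xq(x) \;=\; \frac{A\,X_{q}\,x^{b}}{\exp\!\big[(x - X_{q})/\bar{x}\big] + 1},
\qquad
xG(x) \;=\; \frac{A_{G}\,x^{b_{G}}}{\exp\!\big(x/\bar{x}\big) - 1},
```

where X_q plays the role of a thermodynamic potential and x̄ that of an effective temperature; the maximum entropy principle then fixes the remaining longitudinal parameters.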

  15. Teaching Oscillations with a Small Computer.

    ERIC Educational Resources Information Center

    Calvo, J. L.; And Others

    1983-01-01

    Describes a simple, inexpensive electronic circuit used as a small analog computer in an experimental approach to the study of oscillations. Includes circuit diagram and an example of the method using steps followed by students studying underdamped oscillatory motion. (JN)

  16. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV/u to 450 MeV/u or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
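The six-parameter model itself is not reproduced in the record. As a hedged illustration of the same "simple analytical model" idea, the classic Bragg-Kleeman power law relates range and energy with just two parameters per ion-absorber pair; the constants below are commonly quoted values for protons in water and are assumptions here, not the authors' fit.

```python
# Hedged illustration (NOT the authors' six-parameter model): the classic
# Bragg-Kleeman power law relates proton range and energy, R = alpha * E**p.
# alpha and p are commonly quoted values for protons in water
# (alpha ~ 0.0022 cm/MeV^p, p ~ 1.77); treat them as illustrative only.

ALPHA, P = 0.0022, 1.77

def csda_range_cm(energy_mev):
    """Continuous-slowing-down range in water, Bragg-Kleeman approximation."""
    return ALPHA * energy_mev ** P

def stopping_power_mev_per_cm(energy_mev):
    """dE/dx implied by differentiating the range-energy power law."""
    return energy_mev ** (1.0 - P) / (P * ALPHA)

# A 150 MeV proton has a range of roughly 15-16 cm in water.
r = csda_range_cm(150.0)
```

Two parameters already capture the range-energy trend to within a few percent over the therapeutic interval, which is the spirit of the few-parameter model the abstract describes.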

  17. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV/u to 450 MeV/u or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  18. Unsteady Aerodynamic Force Sensing from Strain Data

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2017-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.

  19. Spatiotemporal video deinterlacing using control grid interpolation

    NASA Astrophysics Data System (ADS)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
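The decision-weighted blending of spatial and temporal estimates can be sketched per pixel. This is a simplified illustration: the paper's control grid interpolation and spectral-residue weighting are not reproduced, and the weight w is assumed to be given.

```python
# Simplified sketch of decision-weighted deinterlacing (the paper's control
# grid interpolation and spectral-residue weighting are not reproduced):
# blend a spatial estimate (average of the lines above and below the missing
# line) with a temporal estimate (same pixel in the previous field/frame).

def deinterlace_pixel(above, below, previous, w):
    """w in [0, 1]: w = 1 trusts the spatial estimate, w = 0 the temporal one."""
    spatial = 0.5 * (above + below)
    return w * spatial + (1.0 - w) * previous

# In static regions a detector would drive w toward 0 (temporal is exact);
# in moving regions it would drive w toward 1 (spatial avoids ghosting).
value = deinterlace_pixel(10, 30, 40, 0.5)
```

Modern deinterlacers differ mainly in how they compute the per-region weight, which is exactly where the proposed spectral-residue measure enters.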

  20. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
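The series-parallel structure these models exploit can be illustrated with a deterministic toy: serial task times add, while parallel branches complete at the maximum. The paper's queueing-network procedure with hierarchical decomposition is far more elaborate; this sketch shows only the reduction step.

```python
# Toy sketch of series-parallel reduction (not the paper's queueing-network
# procedure): the completion time of a series-parallel task system, where
# serial tasks add and parallel branches finish at the slowest branch.

def completion_time(task):
    """task: a float leaf (task time), or ('series'|'parallel', [subtasks])."""
    if isinstance(task, (int, float)):
        return float(task)
    kind, subs = task
    times = [completion_time(s) for s in subs]
    return sum(times) if kind == "series" else max(times)

# Two parallel branches (3.0 and 2.0 + 2.0 = 4.0) followed by a serial task.
system = ("series", [("parallel", [3.0, ("series", [2.0, 2.0])]), 1.0])
total = completion_time(system)
```

A queueing-network model replaces these fixed leaf times with resource-contention-dependent service times, but the hierarchical reduction over the same tree structure is what keeps the procedure efficient.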

  1. Hardware-based Artificial Neural Networks for Size, Weight, and Power Constrained Platforms (Preprint)

    DTIC Science & Technology

    2012-11-01

    …few sensors/complex computations, and many sensors/simple computation. II. CHALLENGES WITH NANO-ENABLED NEUROMORPHIC CHIPS: A wide variety of…scenarios. Neuromorphic processors, which are based on the highly parallelized computing architecture of the mammalian brain, show great promise in…the brain. This fundamentally different approach, frequently referred to as neuromorphic computing, is thought to be better able to solve fuzzy…

  2. Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Provazza, Justin; Coker, David F.

    2018-05-01

    The symmetrical quasi-classical approach for propagating a many-degree-of-freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two-state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short-time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.
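The spectroscopy relation underlying the calculation, that the linear absorption lineshape is (up to prefactors) the Fourier transform of the dipole autocorrelation function C(t), can be sketched generically. The damped-cosine correlation function below is a stand-in, not SQC output: a single transition at frequency w0 with dephasing rate gamma yields a Lorentzian peak at w0.

```python
# Generic sketch of the lineshape relation the abstract relies on (not the
# SQC propagation itself): the absorption spectrum is, up to prefactors, the
# Fourier transform of the dipole autocorrelation function C(t). A damped
# cosine at frequency w0 produces a Lorentzian peak near w0.
import math

def lineshape(corr, dt, omegas):
    """Re of integral_0^T C(t) e^{i w t} dt, by a simple Riemann sum."""
    out = []
    for w in omegas:
        s = sum(c * math.cos(w * k * dt) for k, c in enumerate(corr)) * dt
        out.append(s)
    return out

w0, gamma, dt, n = 2.0, 0.1, 0.05, 2000
corr = [math.exp(-gamma * k * dt) * math.cos(w0 * k * dt) for k in range(n)]
omegas = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
spec = lineshape(corr, dt, omegas)
```

Errors in the short-time behavior of C(t) distort the high-frequency wings of this transform, which is how the exaggerated spectral tails mentioned above arise.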

  3. Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Ballard, J. R., Jr.

    1999-01-01

    We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects. First, in computing the distribution of scene shaded and sunlit facets and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.

  4. Simple, inexpensive computerized rodent activity meters.

    PubMed

    Horton, R M; Karachunski, P I; Kellermann, S A; Conti-Fine, B M

    1995-10-01

    We describe two approaches for using obsolescent computers, either an IBM PC XT or an Apple Macintosh Plus, to accurately quantify spontaneous rodent activity, as revealed by continuous monitoring of the spontaneous usage of running activity wheels. Because such computers can commonly be obtained at little or no expense, and other commonly available materials and inexpensive parts can be used, these meters can be built quite economically. Construction of these meters requires no specialized electronics expertise, and their software requirements are simple. The computer interfaces are potentially of general interest, as they could also be used for monitoring a variety of events in a research setting.

  5. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
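The run-time routing idea can be sketched by treating the trap as a graph of zones and computing transport paths by breadth-first search. The zone layout below is hypothetical; the chapter's instruction language and compiler are not shown in the record.

```python
# Hypothetical sketch of automated ion routing (the chapter's actual system
# is not reproduced): model the surface trap as a graph of zones and find a
# transport route with breadth-first search.
from collections import deque

def route(graph, start, goal):
    """Shortest zone-to-zone path; graph maps each zone to adjacent zones."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        z = frontier.popleft()
        if z == goal:
            path = []
            while z is not None:       # walk predecessors back to the start
                path.append(z)
                z = prev[z]
            return path[::-1]
        for nxt in graph[z]:
            if nxt not in prev:
                prev[nxt] = z
                frontier.append(nxt)
    return None                        # goal unreachable

# A toy multi-zone trap with a junction zone "J" (layout is hypothetical).
trap = {"load": ["J"], "J": ["load", "A", "B"], "A": ["J"], "B": ["J"]}
path = route(trap, "load", "B")
```

Recomputing routes on the live graph is what allows the system to re-place qubits automatically after an event such as ion loss.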

  6. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
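A minimal version of the fitting problem can be sketched without any of the platforms named above: a deterministic discrete-time SIR model and a hand-rolled random-walk Metropolis sampler for the transmission rate. All modeling choices below (Gaussian observation error, fixed recovery rate, single fitted parameter) are simplifying assumptions for illustration.

```python
# Minimal illustration of the fitting problem (JAGS/NIMBLE/Stan and the
# paper's full process-and-observation-error model are not reproduced):
# a deterministic discrete-time SIR and random-walk Metropolis for beta.
import math, random

def simulate_incidence(beta, gamma=0.3, n=1000.0, i0=5.0, steps=30):
    s, i, inc = n - i0, i0, []
    for _ in range(steps):
        new = beta * s * i / n          # new infections this step
        s, i = s - new, i + new - gamma * i
        inc.append(new)
    return inc

def log_lik(beta, data, sigma=2.0):
    if beta <= 0:
        return -math.inf
    model = simulate_incidence(beta)
    return -sum((d - m) ** 2 for d, m in zip(data, model)) / (2 * sigma ** 2)

random.seed(1)
true_beta = 0.6
data = [x + random.gauss(0, 2.0) for x in simulate_incidence(true_beta)]

beta, ll, samples = 0.4, log_lik(0.4, data), []
for _ in range(3000):
    prop = beta + random.gauss(0, 0.02)   # random-walk proposal
    llp = log_lik(prop, data)
    if math.log(random.random()) < llp - ll:
        beta, ll = prop, llp              # Metropolis accept
    samples.append(beta)
post_mean = sum(samples[1000:]) / len(samples[1000:])
```

The questions the paper studies (discrete vs. continuous latent states, process error, platform efficiency) all begin from this same simulate-and-score loop.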

  7. Flowfield computation of entry vehicles

    NASA Technical Reports Server (NTRS)

    Prabhu, Dinesh K.

    1990-01-01

    The equations governing the multidimensional flow of a reacting mixture of thermally perfect gases were derived. The modeling procedures for the various terms of the conservation laws are discussed. A numerical algorithm, based on the finite-volume approach, to solve these conservation equations was developed. The advantages and disadvantages of the present numerical scheme are discussed from the point of view of accuracy, computer time, and memory requirements. A simple one-dimensional model problem was solved to prove the feasibility and accuracy of the algorithm. A computer code implementing the above algorithm was developed and is presently being applied to simple geometries and conditions. Once the code is completely debugged and validated, it will be used to compute the complete unsteady flow field around the Aeroassist Flight Experiment (AFE) body.
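A one-dimensional model problem of the kind mentioned can be sketched with a first-order upwind finite-volume scheme for scalar advection. This illustrates the finite-volume approach only; the actual solver handles a reacting multidimensional mixture.

```python
# Illustrative 1-D finite-volume model problem (the actual AFE solver treats
# reacting multidimensional flow): first-order upwind advection of a scalar,
# which transports a profile at constant speed a > 0 on a periodic grid.

def upwind_advect(u, a, dx, dt, steps):
    """Update u_i -= c * (u_i - u_{i-1}) with CFL number c = a*dt/dx."""
    c = a * dt / dx                    # stable for 0 <= c <= 1
    for _ in range(steps):
        # Python's u[-1] gives the periodic neighbor for i = 0.
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u

# A square pulse advected 30 cells to the right (c = 1: exact shift).
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(100)]
u = upwind_advect(u0, a=1.0, dx=1.0, dt=1.0, steps=30)
```

The scheme is conservative (the cell sum is preserved each step), which is the defining property that makes the finite-volume approach attractive for conservation laws.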

  8. Conifer ovulate cones accumulate pollen principally by simple impaction.

    PubMed

    Cresswell, James E; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A; Young, Phillipe G; Tabor, Gavin R

    2007-11-13

    In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones.

  9. A Simple Technique for Securing Data at Rest Stored in a Computing Cloud

    NASA Astrophysics Data System (ADS)

    Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai

    "Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.

  10. Conifer ovulate cones accumulate pollen principally by simple impaction

    PubMed Central

    Cresswell, James E.; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A.; Young, Phillipe G.; Tabor, Gavin R.

    2007-01-01

    In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones. PMID:17986613

  11. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2016-01-01

    A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Keywords: shape sensing, fiber optic strain sensor, system equivalent reduction and expansion process.

  12. Simple and powerful visual stimulus generator.

    PubMed

    Kremlácek, J; Kuba, M; Kubová, Z; Vít, F

    1999-02-01

    We describe a cheap, simple, portable, and efficient approach to visual stimulation for neurophysiology which does not need any special hardware equipment. The method, based on an animation technique, uses the FLI Autodesk Animator format. The animation is replayed by a special program ('player') that provides synchronisation pulses toward the recording system via a parallel port. The 'player' runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.

  13. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  14. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  15. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  16. A Simple and Robust Method for Partially Matched Samples Using the P-Values Pooling Approach

    PubMed Central

    Kuan, Pei Fen; Huang, Bo

    2013-01-01

    This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta-analysis, we recast the partially matched samples as coming from two experimental designs, and propose a simple yet robust approach based on the weighted Z-test to integrate the p-values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared to existing methods for partially matched samples. PMID:23417968
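
    The weighted Z-test at the core of the method is simple enough to sketch: each one-sided p-value is mapped to a standard-normal quantile, the quantiles are combined with weights, and the combined statistic is mapped back to a p-value. The square-root-of-sample-size weights and the example p-values below are illustrative assumptions, not the paper's data.

```python
from statistics import NormalDist

_N = NormalDist()

def weighted_z_pool(p_values, weights):
    """Stouffer-style weighted Z-test: pool one-sided p-values computed
    from the two designs (e.g. the matched and the unmatched portion of
    partially matched samples) into a single combined p-value."""
    zs = [_N.inv_cdf(1.0 - p) for p in p_values]
    num = sum(w * z for w, z in zip(weights, zs))
    den = sum(w * w for w in weights) ** 0.5
    z_combined = num / den
    return 1.0 - _N.cdf(z_combined)

# Example: p-values from the paired and unpaired sub-analyses, weighted
# by the square root of each (hypothetical) sub-sample size.
p_pooled = weighted_z_pool([0.03, 0.20], [20 ** 0.5, 12 ** 0.5])
```

    With equal weights and identical p-values the pooled result is simply sharper than either input, which is the meta-analytic intuition the paper builds on.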

  17. A Computational Behaviorist Takes Turing's Test

    NASA Astrophysics Data System (ADS)

    Whalen, Thomas E.

    Behaviorism is a school of thought in experimental psychology that has given rise to powerful techniques for managing behavior. Because the Turing Test is a test of linguistic behavior rather than mental processes, approaching the test from a behavioristic perspective is worth examining. A behavioral approach begins by observing the kinds of questions that judges ask, then links the invariant features of those questions to pre-written answers. Because this approach is simple and powerful, it has been more successful in Turing competitions than the more ambitious linguistic approaches. Computational behaviorism may prove successful in other areas of Artificial Intelligence.

  18. Center for Parallel Optimization.

    DTIC Science & Technology

    1996-03-19

    A new optimization-based approach to improving generalization in machine learning has been proposed and computationally validated on simple linear models as well as on highly nonlinear systems such as neural networks.

  19. Bubble nucleation in simple and molecular liquids via the largest spherical cavity method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, Miguel A. (Department of Chemistry, Imperial College London, London SW7 2AZ; E-mail: m.gonzalez12@imperial.ac.uk); Abascal, José L. F.

    2015-04-21

    In this work, we propose a methodology to compute bubble nucleation free energy barriers using trajectories generated via molecular dynamics simulations. We follow the bubble nucleation process by means of a local order parameter, defined by the volume of the largest spherical cavity (LSC) formed in the nucleating trajectories. This order parameter simplifies considerably the monitoring of the nucleation events, as compared with previous approaches, which require ad hoc criteria to classify the atoms and molecules as liquid or vapor. The combination of the LSC and the mean first passage time technique can then be used to obtain the free energy curves. Upon computation of the cavity distribution function, the nucleation rate and free-energy barrier can then be computed. We test our method against recent computations of bubble nucleation in simple liquids and water at negative pressures. We obtain free-energy barriers in good agreement with the previous works. The LSC method provides a versatile and computationally efficient route to estimate the volume of critical bubbles and the nucleation rate, and to compute bubble nucleation free energies in both simple and molecular liquids.

  20. Efficient computation of PDF-based characteristics from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2008-01-01

    We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on Spherical Harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently done in a finite dimensional space. The resulting algorithm is generic, flexible and is able to compute a large set of useful characteristics of the local tissues structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame.

  1. An Approach to Poiseuille's Law in an Undergraduate Laboratory Experiment

    ERIC Educational Resources Information Center

    Sianoudis, I. A.; Drakaki, E.

    2008-01-01

    The continuous growth of computer and sensor technology allows many researchers to develop simple modifications and/or refinements to standard educational experiments, making them more attractive and comprehensible to students and thus increasing their educational impact. In the framework of this approach, the present study proposes an alternative…

  2. Reducing usage of the computational resources by event driven approach to model predictive control

    NASA Astrophysics Data System (ADS)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing model predictive control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.

  3. Instructional Approach to Molecular Electronic Structure Theory

    ERIC Educational Resources Information Center

    Dykstra, Clifford E.; Schaefer, Henry F.

    1977-01-01

    Describes a graduate quantum mechanics project in which students write a computer program that performs ab initio calculations on the electronic structure of a simple molecule. Theoretical potential energy curves are produced. (MLH)

  4. Different approaches for the texture classification of a remote sensing image bank

    NASA Astrophysics Data System (ADS)

    Durand, Philippe; Brunet, Gerard; Ghorbanzadeh, Dariush; Jaupi, Luan

    2018-04-01

    In this paper, we summarize and compare two different approaches used by the authors, to classify different natural textures. The first approach, which is simple and inexpensive in computing time, uses a data bank image and an expert system able to classify different textures from a number of rules established by discipline specialists. The second method uses the same database and a neural networks approach.

  5. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights to shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. 
This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.

  6. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
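
    The first stage, harmonic grouping followed by peak picking, can be sketched generically: each candidate pitch bin accumulates the energy at its first few harmonic bins, and local maxima above a threshold become preliminary estimates. The salience function and toy spectrum below are illustrative assumptions, not the RTFI-based spectrum of the paper.

```python
import numpy as np

def pitch_energy_spectrum(mag, max_harmonic=5):
    """Harmonic-grouping salience: for each candidate pitch bin f0,
    sum the magnitudes at its first few harmonic bins."""
    n = len(mag)
    salience = np.zeros(n)
    for f0 in range(1, n):
        for h in range(1, max_harmonic + 1):
            if h * f0 >= n:
                break
            salience[f0] += mag[h * f0]
    return salience

def pick_peaks(salience, threshold):
    # A bin is a preliminary pitch estimate if it is a local maximum
    # above the threshold.
    peaks = []
    for i in range(1, len(salience) - 1):
        if (salience[i] > threshold
                and salience[i] >= salience[i - 1]
                and salience[i] > salience[i + 1]):
            peaks.append(i)
    return peaks

# Toy spectrum: a single note with its fundamental at bin 10
mag = np.zeros(100)
mag[[10, 20, 30, 40, 50]] = 1.0
salience = pitch_energy_spectrum(mag)
estimates = pick_peaks(salience, threshold=3.0)
```

    Sub-harmonics (bins 2 and 5 here) collect some energy too, which is why the second stage of the method prunes estimates using spectral irregularity and instrument knowledge.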

  7. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at a low computational cost compared with other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
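
    The idea of deriving the bit error rate from the bit energy distribution can be sketched with the standard Gaussian-tail expression, averaged over an empirical set of per-bit energies. The sample energies below are assumptions for illustration, not the paper's chaotic-spreading model.

```python
import math

def q_function(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_from_energy_samples(eb_samples, n0):
    """Average the per-bit error probability Q(sqrt(2*Eb/N0)) over an
    empirical bit-energy distribution (equal-weight samples)."""
    return sum(q_function(math.sqrt(2.0 * eb / n0))
               for eb in eb_samples) / len(eb_samples)

# Chaotic spreading makes the bit energy vary from bit to bit; with a
# constant energy the expression reduces to the classical BPSK result.
ber_varying = ber_from_energy_samples([0.8, 1.0, 1.2], 0.5)
ber_constant = ber_from_energy_samples([1.0], 0.5)
```

    Because Q(sqrt(2*Eb/N0)) is convex in Eb, energy fluctuations at the same mean energy always raise the average error rate, which is why the energy distribution, not just its mean, enters the derivation.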

  8. Dynamical minimalism: why less is more in psychology.

    PubMed

    Nowak, Andrzej

    2004-01-01

    The principle of parsimony, embraced in all areas of science, states that simple explanations are preferable to complex explanations in theory construction. Parsimony, however, can necessitate a trade-off with depth and richness in understanding. The approach of dynamical minimalism avoids this trade-off. The goal of this approach is to identify the simplest mechanisms and fewest variables capable of producing the phenomenon in question. A dynamical model in which change is produced by simple rules repetitively interacting with each other can exhibit unexpected and complex properties. It is thus possible to explain complex psychological and social phenomena with very simple models if these models are dynamic. In dynamical minimalist theories, then, the principle of parsimony can be followed without sacrificing depth in understanding. Computer simulations have proven especially useful for investigating the emergent properties of simple models.
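
    A standard minimal illustration of this point (not taken from the article itself) is the logistic map: one variable and one rule, which nonetheless produces both simple and complex behaviour depending on a single parameter.

```python
def logistic_map(r, x0, steps):
    """Iterate x -> r*x*(1-x): a one-parameter, one-variable rule that
    can settle to a fixed point or behave chaotically."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# r = 2.5: the trajectory settles to the fixed point 1 - 1/r = 0.6
settled = logistic_map(2.5, 0.2, 200)[-1]

# r = 4.0: the very same rule produces a bounded, non-repeating trajectory
chaotic = logistic_map(4.0, 0.2, 200)
```

    The repetitive application of one simple rule is doing all the work here, which is the sense in which dynamical models can be both parsimonious and rich.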

  9. A three-dimensional FEM-DEM technique for predicting the evolution of fracture in geomaterials and concrete

    NASA Astrophysics Data System (ADS)

    Zárate, Francisco; Cornejo, Alejandro; Oñate, Eugenio

    2018-07-01

    This paper extends to three dimensions (3D), the computational technique developed by the authors in 2D for predicting the onset and evolution of fracture in a finite element mesh in a simple manner based on combining the finite element method and the discrete element method (DEM) approach (Zárate and Oñate in Comput Part Mech 2(3):301-314, 2015). Once a crack is detected at an element edge, discrete elements are generated at the adjacent element vertexes and a simple DEM mechanism is considered in order to follow the evolution of the crack. The combination of the DEM with simple four-noded linear tetrahedron elements correctly captures the onset of fracture and its evolution, as shown in several 3D examples of application.

  10. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion accounting for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
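
    The two basic ingredients, a lognormal fit to the bed PSD and a threshold size separating mobile from immobile grains, can be sketched as follows. The sharp size cut-off and the hypothetical grain sizes below are simplifying assumptions; the paper's approach additionally mixes the pre- and post-entrainment PSDs to smooth the transition.

```python
import math

def fit_lognormal(sizes):
    # Method-of-moments fit of (mu, sigma) in log space
    logs = [math.log(d) for d in sizes]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

def lognormal_cdf(d, mu, sigma):
    # Fraction of the PSD finer than particle size d
    return 0.5 * (1.0 + math.erf((math.log(d) - mu) / (sigma * math.sqrt(2.0))))

def mobile_fraction(mu, sigma, d_threshold):
    """Fraction of the bed PSD finer than the threshold size of
    incipient motion, i.e. entrained under a sharp size cut-off."""
    return lognormal_cdf(d_threshold, mu, sigma)

# Hypothetical bed sample (sizes in mm), geometric mean 1 mm
mu, sigma = fit_lognormal([0.25, 0.5, 1.0, 2.0, 4.0])
entrained = mobile_fraction(mu, sigma, 1.0)
```

    A threshold at the geometric mean leaves half the distribution mobile; raising the threshold (stronger flow needed for motion) monotonically increases the entrained fraction.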

  11. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    NASA Technical Reports Server (NTRS)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  12. Design and Training of Limited-Interconnect Architectures

    DTIC Science & Technology

    1991-07-16

    …and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a …compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a …operational performance. II. Research Objectives. The research objectives were: 1. Development of on-chip local training rules specifically designed for…

  13. A general concept for consistent documentation of computational analyses

    PubMed Central

    Müller, Fabian; Nordström, Karl; Lengauer, Thomas; Schulz, Marcel H.

    2015-01-01

    The ever-growing amount of data in the field of life sciences demands standardized ways of high-throughput computational analysis. This standardization requires a thorough documentation of each step in the computational analysis to enable researchers to understand and reproduce the results. However, due to the heterogeneity in software setups and the high rate of change during tool development, reproducibility is hard to achieve. One reason is that there is no common agreement in the research community on how to document computational studies. In many cases, simple flat files or other unstructured text documents are provided by researchers as documentation, which are often missing software dependencies, versions and sufficient documentation to understand the workflow and parameter settings. As a solution we suggest a simple and modest approach for documenting and verifying computational analysis pipelines. We propose a two-part scheme that defines a computational analysis using a Process and an Analysis metadata document, which jointly describe all necessary details to reproduce the results. In this design we separate the metadata specifying the process from the metadata describing an actual analysis run, thereby reducing the effort of manual documentation to an absolute minimum. Our approach is independent of a specific software environment, results in human readable XML documents that can easily be shared with other researchers and allows an automated validation to ensure consistency of the metadata. Because our approach has been designed with little to no assumptions concerning the workflow of an analysis, we expect it to be applicable in a wide range of computational research fields. Database URL: http://deep.mpi-inf.mpg.de/DAC/cmds/pub/pyvalid.zip PMID:26055099

  14. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles †

    PubMed Central

    Hughes, Dana; Profita, Halley; Radzihovsky, Sarah; Correll, Nikolaus

    2017-01-01

    We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures. PMID:28125010

  15. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles.

    PubMed

    Hughes, Dana; Profita, Halley; Radzihovsky, Sarah; Correll, Nikolaus

    2017-01-24

    We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures.

  16. Variance computations for functionals of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  17. Variance computations for functionals of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  18. Gradient optimization and nonlinear control

    NASA Technical Reports Server (NTRS)

    Hasdorff, L.

    1976-01-01

    The book represents an introduction to computation in control by an iterative, gradient, numerical method, where linearity is not assumed. The general language and approach used are those of elementary functional analysis. The particular gradient method that is emphasized and used is conjugate gradient descent, a well known method exhibiting quadratic convergence while requiring very little more computation than simple steepest descent. Constraints are not dealt with directly, but rather the approach is to introduce them as penalty terms in the criterion. General conjugate gradient descent methods are developed and applied to problems in control.
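
    The two ingredients named here, conjugate gradient descent and constraints introduced as penalty terms in the criterion, can be sketched on a quadratic criterion. The linear CG below and the toy constrained problem are a minimal illustration under stated assumptions, not the book's control formulation.

```python
import numpy as np

def linear_cg(A, b, x0, tol=1e-10, max_iter=100):
    """Conjugate gradient descent for the quadratic criterion
    0.5*x'Ax - b'x, with A symmetric positive definite."""
    x = x0.astype(float).copy()
    r = b - A @ x            # residual = negative gradient
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# Constraint x1 + x2 = 1 handled as a penalty term in the criterion:
#   minimize ||x||^2 + c*(x1 + x2 - 1)^2
# which is quadratic with A = 2I + 2c*ee' and b = 2c*e.
c = 1e4
e = np.ones(2)
A = 2.0 * np.eye(2) + 2.0 * c * np.outer(e, e)
b = 2.0 * c * e
x = linear_cg(A, b, np.zeros(2))
```

    For an n-dimensional quadratic, CG terminates in at most n iterations in exact arithmetic, which is the quadratic-convergence property the book emphasizes; the penalty weight c controls how closely the constraint is enforced.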

  19. Bayesian design of decision rules for failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1984-01-01

    The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.

  20. Detonation product EOS studies: Using ISLS to refine CHEETAH

    NASA Astrophysics Data System (ADS)

    Zaug, Joseph; Fried, Larry; Hansen, Donald

    2001-06-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  1. Aggregative Learning Method and Its Application for Communication Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Akhmetov, Dauren F.; Kotaki, Minoru

    2007-12-01

    In this paper, a so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of mathematical models of a wide class. A procedure was elaborated for time series model reconstruction and analysis in linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to the monitoring of wireless communication quality, namely, for a Fixed Wireless Access (FWA) system. Low memory and computation resources were shown to be needed for the procedure realization, especially for the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.

  2. Neural-network quantum state tomography

    NASA Astrophysics Data System (ADS)

    Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe

    2018-05-01

    The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities, such as the entanglement entropy, from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].

  3. The Importance of Proving the Null

    PubMed Central

    Gallistel, C. R.

    2010-01-01

    Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? PMID:19348549
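
    Question (b), "is performance at chance?", lends itself to a compact numerical sketch of the sensitivity analysis: compute the odds for the null as a function of the hypothesized maximum effect size. The uniform alternative around chance and the trapezoid integration are illustrative choices under stated assumptions, not the article's exact procedure.

```python
import math

def binom_pmf(k, n, theta):
    # Probability of k successes in n Bernoulli(theta) trials
    return math.comb(n, k) * theta ** k * (1.0 - theta) ** (n - k)

def odds_for_null(k, n, delta, grid=2000):
    """Odds favouring the null (theta = 0.5) over an alternative that is
    uniform on [0.5 - delta, 0.5 + delta]. The vaguer the alternative
    (larger delta), the more prior mass it spreads onto implausible
    effect sizes, and the more the null is favoured."""
    lo, hi = max(0.0, 0.5 - delta), min(1.0, 0.5 + delta)
    # Marginal likelihood under the alternative, by the trapezoid rule
    step = (hi - lo) / grid
    thetas = [lo + i * step for i in range(grid + 1)]
    vals = [binom_pmf(k, n, t) for t in thetas]
    marginal = (sum(vals) - 0.5 * (vals[0] + vals[-1])) * step / (hi - lo)
    return binom_pmf(k, n, 0.5) / marginal

# Sensitivity analysis: 52 successes in 100 trials (close to chance),
# sweeping the hypothesized maximum size of the possible effect.
odds = [odds_for_null(52, 100, d) for d in (0.05, 0.1, 0.2, 0.4)]
```

    For near-chance data the odds on the null grow monotonically with the vagueness of the alternative, which is exactly the pattern the sensitivity analysis is meant to expose.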

  4. Parallel photonic information processing at gigabyte per second data rates using transient states

    NASA Astrophysics Data System (ADS)

    Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo

    2013-01-01

    The increasing demands on information processing require novel computational concepts and true parallelism. Nevertheless, hardware realizations of unconventional computing approaches never exceeded a marginal existence. While the application of optics in super-computing receives reawakened interest, new concepts, partly neuro-inspired, are being considered and developed. Here we experimentally demonstrate the potential of a simple photonic architecture to process information at unprecedented data rates, implementing a learning-based approach. A semiconductor laser subject to delayed self-feedback and optical data injection is employed to solve computationally hard tasks. We demonstrate simultaneous spoken digit and speaker recognition and chaotic time-series prediction at data rates beyond 1 GByte/s. We identify all digits with very low classification errors and perform chaotic time-series prediction with 10% error. Our approach bridges the areas of photonic information processing, cognitive and information science.

  5. Towards Scalable Graph Computation on Mobile Devices.

    PubMed

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
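    The core idea above—memory-mapping an on-disk edge list so the OS pages it in on demand—can be sketched in a few lines. This is an illustrative sketch only, not the authors' iOS app; the file layout (consecutive uint32 source/destination pairs) is an assumption made for the example:

    ```python
    import numpy as np

    def degree_counts(edge_file, num_nodes, chunk_edges=1_000_000):
        """Compute out-degrees of a graph stored on disk as consecutive
        (src, dst) uint32 pairs, without loading it all into RAM.

        np.memmap lets the OS page the file in on demand -- the same
        memory-mapping capability the paper builds its approach on.
        """
        edges = np.memmap(edge_file, dtype=np.uint32, mode="r").reshape(-1, 2)
        deg = np.zeros(num_nodes, dtype=np.int64)
        # Stream over the mapped file in bounded chunks of edges.
        for start in range(0, len(edges), chunk_edges):
            chunk = edges[start:start + chunk_edges]
            deg += np.bincount(chunk[:, 0], minlength=num_nodes)
        return deg
    ```

    Because only one chunk of the mapping is touched at a time, peak resident memory stays bounded regardless of the size of the edge file.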

  6. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564

  7. A Unified Approach to Motion Control of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1994-01-01

    This paper presents a simple on-line approach for motion control of mobile robots made up of a manipulator arm mounted on a mobile base. The proposed approach is equally applicable to nonholonomic mobile robots, such as rover-mounted manipulators, and to holonomic mobile robots, such as tracked robots or compound manipulators. The computational efficiency of the proposed control scheme makes it particularly suitable for real-time implementation.

  8. Unsteady Aerodynamic Force Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2016-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm. A cantilevered rectangular wing built and tested at the NASA Langley Research Center (Hampton, Virginia, USA) in 1959 is used to validate the simple approach. Unsteady aerodynamic forces as well as wing deflections, velocities, accelerations, and strains are computed using the CFL3D computational fluid dynamics (CFD) code and an MSC/NASTRAN code (MSC Software Corporation, Newport Beach, California, USA), and these CFL3D-based results are treated as the measured quantities. Based on the measured strains, wing deflections, velocities, accelerations, and aerodynamic forces are computed using the proposed approach. These computed deflections, velocities, accelerations, and unsteady aerodynamic forces are compared with the CFL3D/NASTRAN-based results. In general, the computed aerodynamic forces based on lifting surface theory at subsonic speeds are in good agreement with the target aerodynamic forces generated using the CFL3D code with the Euler equation. Excellent aeroelastic responses are obtained even with unsteady strain data at a signal-to-noise ratio of -9.8 dB. The deflections, velocities, and accelerations at each sensor location are independent of the structural and aerodynamic models. Therefore, the distributed strain data together with the proposed approaches can be used as distributed deflection, velocity, and acceleration sensors. This research demonstrates the feasibility of obtaining induced drag and lift forces through the use of distributed sensor technology with measured strain data. An active induced drag control system can thus be designed using the two computed aerodynamic forces, induced drag and lift, to improve the fuel efficiency of an aircraft. Interpolation elements between the structural finite element grids and the CFD grids and centroids are successfully incorporated into the unsteady aeroelastic computation scheme. The most critical technology for the success of the proposed approach is the robust on-line parameter estimator, since the least-squares curve fitting method depends heavily on the aeroelastic system frequencies and damping factors.

  9. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
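    The two-component model described above can be sketched with a short EM fit. In this simplification (broadly in the spirit of the abstract, not the authors' exact implementation), the null component is held fixed at N(0, 1) and only the mixing proportion and the alternative component are estimated:

    ```python
    import numpy as np

    def normal_pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    def fit_null_mixture(z, n_iter=200):
        """EM for f(z) = pi0 * N(z; 0, 1) + (1 - pi0) * N(z; mu1, sd1).

        The null component is fixed at N(0, 1); pi0 and the alternative
        component are estimated.  Returns (pi0, posterior probability
        that each gene is null) -- an illustrative sketch only.
        """
        pi0, mu1, sd1 = 0.9, float(np.mean(z)) + 1.0, 2.0  # rough starting values
        for _ in range(n_iter):
            f0 = pi0 * normal_pdf(z, 0.0, 1.0)
            f1 = (1.0 - pi0) * normal_pdf(z, mu1, sd1)
            tau0 = f0 / (f0 + f1)              # E-step: P(null | z_i)
            pi0 = tau0.mean()                  # M-step: mixing proportion
            w = 1.0 - tau0
            mu1 = np.sum(w * z) / np.sum(w)    # M-step: alternative component
            sd1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))
        return pi0, tau0
    ```

    Genes with small posterior null probability tau0 are the candidates for differential expression; pi0 estimates the overall proportion of null genes.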

  10. Generic approach to access barriers in dehydrogenation reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank

    The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately define the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.

  11. Generic approach to access barriers in dehydrogenation reactions

    DOE PAGES

    Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank

    2018-03-08

    The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately define the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.

  12. Mobile Cloud Computing with SOAP and REST Web Services

    NASA Astrophysics Data System (ADS)

    Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid

    2018-05-01

    Mobile computing in conjunction with mobile Web services offers a strong approach by which the limitations of mobile devices may be tackled. Mobile Web services are based on two technologies, SOAP and REST, which work with existing protocols to develop Web services. Both approaches carry their own distinct features; yet, keeping the constrained resources of mobile devices in mind, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. There are numerous approaches to implementing computational offloading, a viable solution for easing the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not engage mobile resources for long periods. The concept of Web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and the REST Web services approaches for mobile computing, considering two parameters in our lab experiments: execution time and energy consumption. The results show that RESTful Web services execution is far better than executing the same application via SOAP Web services, in terms of both execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time is about 200% better than the SOAP approach; in the case of energy consumption, REST is about 250% better than SOAP.

  13. A simple, effective and clinically applicable method to compute abdominal aortic aneurysm wall stress.

    PubMed

    Joldes, Grand Roman; Miller, Karol; Wittek, Adam; Doyle, Barry

    2016-05-01

    Abdominal aortic aneurysm (AAA) is a permanent and irreversible dilation of the lower region of the aorta. It is a symptomless condition that if left untreated can expand to the point of rupture. Mechanically-speaking, rupture of an artery occurs when the local wall stress exceeds the local wall strength. It is therefore desirable to be able to non-invasively estimate the AAA wall stress for a given patient, quickly and reliably. In this paper we present an entirely new approach to computing the wall tension (i.e. the stress resultant equal to the integral of the stresses tangent to the wall over the wall thickness) within an AAA that relies on trivial linear elastic finite element computations, which can be performed instantaneously in the clinical environment on the simplest computing hardware. As an input to our calculations we only use information readily available in the clinic: the shape of the aneurysm in-vivo, as seen on a computed tomography (CT) scan, and blood pressure. We demonstrate that tension fields computed with the proposed approach agree well with those obtained using very sophisticated, state-of-the-art non-linear inverse procedures. Using magnetic resonance (MR) images of the same patient, we can approximately measure the local wall thickness and calculate the local wall stress. What is truly exciting about this simple approach is that one does not need any information on material parameters; this supports the development and use of patient-specific modelling (PSM), where uncertainty in material data is recognised as a key limitation. The methods demonstrated in this paper are applicable to other areas of biomechanics where the loads and loaded geometry of the system are known. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
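    LBP-TOP builds on the ordinary LBP operator applied per plane, so the core computation is easy to sketch. The following minimal single-plane version (an illustration, not the authors' code) thresholds the eight neighbours against the centre pixel and histograms the resulting codes:

    ```python
    import numpy as np

    def lbp_code(patch):
        """Basic 8-neighbour local binary pattern for a 3x3 patch.

        Each neighbour (clockwise from the top-left) contributes one bit:
        1 if it is >= the centre pixel, 0 otherwise.  LBP-TOP applies this
        same operator on three orthogonal planes of a video volume.
        """
        c = patch[1, 1]
        order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        bits = [1 if patch[r, col] >= c else 0 for r, col in order]
        return sum(b << i for i, b in enumerate(bits))

    def lbp_histogram(image):
        """Histogram of LBP codes over all interior pixels -- the per-plane
        texture descriptor used by the approach in the abstract."""
        h, w = image.shape
        hist = np.zeros(256, dtype=np.int64)
        for r in range(1, h - 1):
            for col in range(1, w - 1):
                hist[lbp_code(image[r - 1:r + 2, col - 1:col + 2])] += 1
        return hist
    ```

    Thresholding against the centre pixel is what gives the descriptor its robustness to monotonic gray-scale changes noted in the abstract: any order-preserving intensity transform leaves every bit, and hence the histogram, unchanged.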

  15. Detonation Product EOS Studies: Using ISLS to Refine Cheetah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaug, J M; Howard, W M; Fried, L E

    2001-08-08

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition, the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Computational models are systematically improved with each addition of experimental data.

  16. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
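    The recovery-and-combination idea above can be sketched for the Bland-Altman upper limit of agreement, theta = mean + 1.96 SD. This sketch uses large-sample normal limits for the SD term (an assumption for brevity; the paper's closed forms use exact chi-square-based limits) and combines them MOVER-style:

    ```python
    import math

    def agreement_limit_ci(xbar, s, n, z=1.959964):
        """95% CI for the upper limit of agreement, theta = xbar + 1.96 * s.

        Separate large-sample 95% limits are computed for the mean and for
        1.96 * s; variance estimates are then recovered from the distance
        between each point estimate and its own limit and combined.
        Sketch only -- the paper's exact limits for s are chi-square based.
        """
        a, b = xbar, 1.96 * s
        se_mean = s / math.sqrt(n)
        se_b = 1.96 * s / math.sqrt(2 * (n - 1))   # large-sample SE of 1.96*s
        la, ua = a - z * se_mean, a + z * se_mean  # limits for the mean
        lb, ub = b - z * se_b, b + z * se_b        # limits for 1.96*s
        theta = a + b
        # Combine recovered variance estimates; both limits stay closed-form.
        lower = theta - math.sqrt((a - la) ** 2 + (b - lb) ** 2)
        upper = theta + math.sqrt((ua - a) ** 2 + (ub - b) ** 2)
        return lower, theta, upper
    ```

    The appeal is exactly what the abstract claims: every quantity is a closed-form expression in the sample mean, SD, and n, so no iteration or simulation is needed.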

  17. An Informal Overview of the Unitary Group Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnad, V.; Escher, J.; Kruse, M.

    The Unitary Group Approach (UGA) is an elegant and conceptually unified approach to quantum structure calculations. It has been widely used in molecular structure calculations, and holds the promise of a single computational approach to structure calculations in a variety of different fields. We explore the possibility of extending the UGA to computations in atomic and nuclear structure as a simpler alternative to traditional Racah algebra-based approaches. We provide a simple introduction to the basic UGA and consider some of the issues in using the UGA with spin-dependent, multi-body Hamiltonians requiring multi-shell bases adapted to additional symmetries. While the UGA is perfectly capable of dealing with such problems, the complexity rises dramatically, and the UGA is not, at this time, a simpler alternative to Racah algebra-based approaches.

  18. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
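    The simulation recipe the article reviews—draw data under the alternative, apply the planned test, repeat, and report the rejection fraction—can be sketched for the simplest case of a two-group comparison. The unit-variance normal outcome and large-sample z-test here are illustrative choices; any design and test could be substituted:

    ```python
    import numpy as np

    def simulated_power(n_per_group, effect, alpha=0.05, n_sim=2000, seed=1):
        """Estimate power of a two-group comparison by simulation.

        Draws `n_sim` trials under the alternative (unit-variance normals
        with mean difference `effect`), applies a large-sample z-test to
        each, and returns the rejection fraction -- the simulation
        analogue of a conventional power equation.
        """
        rng = np.random.default_rng(seed)
        rejections = 0
        zcrit = 1.959964  # two-sided critical value for alpha = 0.05
        for _ in range(n_sim):
            a = rng.normal(0.0, 1.0, n_per_group)
            b = rng.normal(effect, 1.0, n_per_group)
            se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
            z = (b.mean() - a.mean()) / se
            rejections += int(abs(z) > zcrit)
        return rejections / n_sim
    ```

    For an effect of 0.5 SD with 64 per group, the conventional power equation gives roughly 0.80, and the simulation lands close to that; setting the effect to 0 instead checks the type-I error calibration, the first sanity check the article recommends before moving to complex designs.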

  19. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447

  20. Molecular implementation of simple logic programs.

    PubMed

    Ran, Tom; Kaplan, Shai; Shapiro, Ehud

    2009-10-01

    Autonomous programmable computing devices made of biomolecules could interact with a biological environment and be used in future biological and medical applications. Biomolecular implementations of finite automata and logic gates have already been developed. Here, we report an autonomous programmable molecular system based on the manipulation of DNA strands that is capable of performing simple logical deductions. Using molecular representations of facts such as Man(Socrates) and rules such as Mortal(X) <-- Man(X) (Every Man is Mortal), the system can answer molecular queries such as Mortal(Socrates)? (Is Socrates Mortal?) and Mortal(X)? (Who is Mortal?). This biomolecular computing system compares favourably with previous approaches in terms of expressive power, performance and precision. A compiler translates facts, rules and queries into their molecular representations and subsequently operates a robotic system that assembles the logical deductions and delivers the result. This prototype is the first simple programming language with a molecular-scale implementation.

  1. Mechanisms of Neuronal Computation in Mammalian Visual Cortex

    PubMed Central

    Priebe, Nicholas J.; Ferster, David

    2012-01-01

    Orientation selectivity in the primary visual cortex (V1) is a receptive field property that is at once simple enough to make it amenable to experimental and theoretical approaches and yet complex enough to represent a significant transformation in the representation of the visual image. As a result, V1 has become an area of choice for studying cortical computation and its underlying mechanisms. Here we consider the receptive field properties of the simple cells in cat V1—the cells that receive direct input from thalamic relay cells—and explore how these properties, many of which are highly nonlinear, arise. We have found that many receptive field properties of V1 simple cells fall directly out of Hubel and Wiesel’s feedforward model when the model incorporates realistic neuronal and synaptic mechanisms, including threshold, synaptic depression, response variability, and the membrane time constant. PMID:22841306

  2. Computational modeling approaches to quantitative structure-binding kinetics relationships in drug discovery.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2018-03-21

    Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Simple stochastic simulation.

    PubMed

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
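    The fundamental principles the article refers to fit in a few lines for a single reaction channel. A minimal sketch of Gillespie's direct method for irreversible decay A -> B (illustrative; real systems iterate the same draw-time/pick-reaction loop over many channels):

    ```python
    import random

    def gillespie_decay(n0, k, t_end, seed=42):
        """Gillespie's direct method for the single reaction A -> B (rate k).

        With one reaction channel the algorithm reduces to: draw an
        exponential waiting time with rate a = k * n_A, fire the reaction,
        repeat.  Returns the (time, n_A) trajectory -- the kind of
        computation the article notes can even be done in a spreadsheet.
        """
        rng = random.Random(seed)
        t, n = 0.0, n0
        traj = [(t, n)]
        while n > 0:
            a = k * n                       # total propensity
            t += rng.expovariate(a)         # exponential waiting time
            if t > t_end:
                break
            n -= 1                          # fire A -> B
            traj.append((t, n))
        return traj

    traj = gillespie_decay(n0=100, k=0.5, t_end=20.0)
    ```

    With small molecule counts, repeated runs of this loop scatter visibly around the deterministic exponential decay curve, which is precisely the randomness the stochastic approach is meant to capture.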

  4. Electronic Spectra from Molecular Dynamics: A Simple Approach.

    DTIC Science & Technology

    1983-10-01

    In this paper the authors show how molecular dynamics can be used in a simple manner to compute electronic spectra; one could equally use Monte Carlo sampling or explicit integration over coordinates to compute equilibrium electronic absorption bands. (Remainder of abstract illegible in the source scan.)

  5. A Mechanistic Design Approach for Graphite Nanoplatelet (GNP) Reinforced Asphalt Mixtures for Low-Temperature Applications

    DOT National Transportation Integrated Search

    2018-01-01

    This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...

  6. Development of spectral analysis math models and software program and spectral analyzer, digital converter interface equipment design

    NASA Technical Reports Server (NTRS)

    Hayden, W. L.; Robinson, L. H.

    1972-01-01

    Spectral analysis of angle-modulated communication systems is studied by: (1) performing a literature survey of candidate power spectrum computational techniques, determining the computational requirements, and formulating a mathematical model satisfying these requirements; (2) implementing the model on a UNIVAC 1230 digital computer as the Spectral Analysis Program (SAP); and (3) developing the hardware specifications for a data acquisition system which will acquire an input modulating signal for SAP. The SAP computational technique uses an extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals.
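    The core operation of such an FFT-based program can be sketched briefly. This is not the SAP code itself, only an illustration of computing a one-sided power spectrum of an angle-modulated carrier, where the carrier and modulation frequencies below are made-up example values:

    ```python
    import numpy as np

    def power_spectrum(signal, fs):
        """One-sided FFT power spectrum -- the core computation an
        FFT-based spectral analysis program performs on the signal."""
        n = len(signal)
        spec = np.abs(np.fft.rfft(signal)) ** 2 / n
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return freqs, spec

    # Phase-modulated carrier: for a small modulation index the spectrum
    # shows the carrier plus sidebands at carrier +/- modulation frequency.
    fs, fc, fm, beta = 1024.0, 200.0, 20.0, 0.3
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))
    freqs, spec = power_spectrum(x, fs)
    ```

    For this signal the dominant spectral line sits at the 200 Hz carrier, with the first sideband pair at 180 and 220 Hz, matching the standard Bessel-function expansion of an angle-modulated wave.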

  7. The promises and pitfalls of applying computational models to neurological and psychiatric disorders.

    PubMed

    Teufel, Christoph; Fletcher, Paul C

    2016-10-01

    Computational models have become an integral part of basic neuroscience and have facilitated some of the major advances in the field. More recently, such models have also been applied to the understanding of disruptions in brain function. In this review, using examples and a simple analogy, we discuss the potential for computational models to inform our understanding of brain function and dysfunction. We argue that they may provide, in unprecedented detail, an understanding of the neurobiological and mental basis of brain disorders and that such insights will be key to progress in diagnosis and treatment. However, there are also potential problems attending this approach. We highlight these and identify simple principles that should always govern the use of computational models in clinical neuroscience, noting especially the importance of a clear specification of a model's purpose and of the mapping between mathematical concepts and reality. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain.

  8. Computation of ancestry scores with mixed families and unrelated individuals.

    PubMed

    Zhou, Yi-Hui; Marron, James S; Wright, Fred A

    2018-03-01

    The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero. We consider two main novel strategies: (i) matrix substitution based on decomposition of a target family-orthogonalized covariance matrix, and (ii) using family-averaged data to obtain loadings. We illustrate the performance via simulations, including resampling from 1000 Genomes Project data, and analysis of a cystic fibrosis dataset. The matrix substitution approach has similar performance to the current standard, but is simple and uses only a genotype covariance matrix, while the family-average method shows superior performance. Our approaches are accompanied by novel ancillary approaches that provide considerable insight, including individual-specific eigenvalue scree plots. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  9. The importance of proving the null.

    PubMed

    Gallistel, C R

    2009-04-01

    Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? © 2009 APA, all rights reserved.
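
    The sensitivity analysis described above can be sketched in a few lines: odds on a point null against a uniform alternative on [0, max_effect], recomputed as the alternative is made vaguer. The helper names and the normal-likelihood setup are illustrative assumptions, not taken from the paper.

```python
import math

def bf01(xbar, se, max_effect, steps=2000):
    # Odds in favor of the null (effect = 0) against a vague alternative
    # (effect uniform on [0, max_effect]); xbar is the observed mean
    # difference, se its standard error. Hypothetical helper, normal model.
    def lik(delta):
        z = (xbar - delta) / se
        return math.exp(-0.5 * z * z) / (se * math.sqrt(2 * math.pi))
    # Marginal likelihood under the alternative, by the trapezoid rule.
    h = max_effect / steps
    total = 0.5 * (lik(0.0) + lik(max_effect))
    for i in range(1, steps):
        total += lik(i * h)
    marginal = total * h / max_effect  # uniform prior density = 1/max_effect
    return lik(0.0) / marginal

# Sensitivity analysis: odds on the null as the alternative grows vaguer.
odds = [bf01(xbar=0.05, se=0.5, max_effect=m) for m in (0.5, 1.0, 2.0, 4.0)]
```

    With data consistent with no effect, widening the alternative spreads prior mass over low-likelihood effect sizes, so the odds on the null grow, the qualitative behavior the abstract describes.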

  10. Detonation Product EOS Studies: Using ISLS to Refine Cheetah

    NASA Astrophysics Data System (ADS)

    Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.

    2002-07-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  11. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
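
    The by-product mentioned above, the global probability of correct classification under statistically independent decision rules, reduces to a weighted product rule. A minimal sketch with a hypothetical interface (the paper derives the per-node probabilities from its error matrices):

```python
def global_accuracy(paths):
    # Global probability of correct classification for a decision tree,
    # assuming independent decision rules. Each entry in `paths` is
    # (prior probability of the class routed to that leaf,
    #  [probability of a correct decision at each node on the path]).
    total = 0.0
    for prior, node_accuracies in paths:
        p = 1.0
        for a in node_accuracies:
            p *= a  # independence: a correct path is a product of correct decisions
        total += prior * p
    return total

# Two-class tree with a single root node: 0.6*0.9 + 0.4*0.8
p = global_accuracy([(0.6, [0.9]), (0.4, [0.8])])
```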

  12. Divergence thrust loss calculations for convergent-divergent nozzles: Extensions to the classical case

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    1991-01-01

    The analytical derivations of the non-axial thrust divergence losses for convergent-divergent nozzles are described as well as how these calculations are embodied in the Navy/NASA engine computer program. The convergent-divergent geometries considered are simple classic axisymmetric nozzles, two dimensional rectangular nozzles, and axisymmetric and two dimensional plug nozzles. A simple, traditional, inviscid mathematical approach is used to deduce the influence of the ineffectual non-axial thrust as a function of the nozzle exit divergence angle.
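
    For reference, the classical inviscid divergence factors that such an analysis yields are simple closed forms. The textbook conical and two-dimensional results are sketched below; these are standard results, not necessarily the exact expressions embodied in the Navy/NASA program.

```python
import math

def divergence_factor_conical(alpha_deg):
    # Classical inviscid axial-thrust divergence factor for a conical
    # (axisymmetric) nozzle with exit half-angle alpha: (1 + cos a) / 2.
    a = math.radians(alpha_deg)
    return (1.0 + math.cos(a)) / 2.0

def divergence_factor_2d(alpha_deg):
    # Corresponding two-dimensional (wedge) result: sin(a) / a.
    a = math.radians(alpha_deg)
    return math.sin(a) / a if a > 0 else 1.0

loss_15 = 1.0 - divergence_factor_conical(15.0)  # ~1.7% axial thrust loss
```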

  13. A 32-bit NMOS microprocessor with a large register file

    NASA Astrophysics Data System (ADS)

    Sherburne, R. W., Jr.; Katevenis, M. G. H.; Patterson, D. A.; Sequin, C. H.

    1984-10-01

    Two scaled versions of a 32-bit NMOS reduced instruction set computer CPU, called RISC II, have been implemented on two different processing lines using the simple Mead and Conway layout rules with lambda values of 2 and 1.5 microns (corresponding to drawn gate lengths of 4 and 3 microns), respectively. The design utilizes a small set of simple instructions in conjunction with a large register file in order to provide high performance. This approach has resulted in two surprisingly powerful single-chip processors.

  14. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
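
    The core comparison step, checking that implementation outputs equal model outputs up to a given tolerance over a suite of inputs, is simple to state in code. A hedged sketch (the names are illustrative; the paper performs this comparison between PVSio evaluations and the actual software):

```python
def outputs_agree(model_outputs, impl_outputs, tol=1e-9):
    # True when the two output sequences have the same length and agree
    # elementwise within the tolerance.
    if len(model_outputs) != len(impl_outputs):
        return False
    return all(abs(m - i) <= tol for m, i in zip(model_outputs, impl_outputs))
```

    A small floating-point discrepancy passes, while a genuine translation error does not, which is exactly the distinction the validation approach needs to draw.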

  15. Self-consistent Green's function embedding for advanced electronic structure methods based on a dynamical mean-field concept

    NASA Astrophysics Data System (ADS)

    Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick

    2016-04-01

    We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.

  16. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.

    2007-09-15

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.

  17. Regional Principal Color Based Saliency Detection

    PubMed Central

    Lou, Jing; Ren, Mingwu; Wang, Huan

    2014-01-01

    Saliency detection is widely used in many visual applications like image segmentation, object recognition and classification. In this paper, we introduce a new method to detect salient objects in natural images. The approach is based on a regional principal color contrast model, which incorporates low-level and medium-level visual cues. The method combines simply computed color features with two categories of spatial relationships into a saliency map, achieving higher F-measure rates. At the same time, we present an interpolation approach to evaluate resulting curves, and analyze parameter selection. Our method enables effective computation on images of arbitrary resolution. Experimental results on a saliency database show that our approach produces high quality saliency maps and performs favorably against ten saliency detection algorithms. PMID:25379960

  18. Analysis of Tube Hydroforming by means of an Inverse Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.

    2003-05-01

    This paper presents a computational tool for the analysis of freely hydroformed tubes by means of an inverse approach. The formulation of the inverse method developed by Guo et al. is adopted and extended to tube hydroforming problems in which the initial geometry is a round tube submitted to hydraulic pressure and axial feed at the tube ends (end-feed). A simple criterion based on a forming limit diagram is used to predict the necking regions in the deformed workpiece. Although the developed computational tool is a stand-alone code, it has been linked to the Marc finite element code for meshing and visualization of results. The application of the inverse approach to tube hydroforming is illustrated through the analyses of the aluminum alloy AA6061-T4 seamless tubes under free hydroforming conditions. The results obtained are in good agreement with those issued from a direct incremental approach. However, the computational time in the inverse procedure is much less than that in the incremental method.

  19. Drell-Yan Lepton pair production at NNLO QCD with parton showers

    DOE PAGES

    Hoeche, Stefan; Li, Ye; Prestel, Stefan

    2015-04-13

    We present a simple approach to combine NNLO QCD calculations and parton showers, based on the UNLOPS technique. We apply the method to the computation of Drell-Yan lepton-pair production at the Large Hadron Collider. We comment on possible improvements and intrinsic uncertainties.

  20. Exponentially Stabilizing Robot Control Laws

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Bayard, David S.

    1990-01-01

    New class of exponentially stabilizing laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method, as far as computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control, when robot model and/or payload mass properties unknown.

  1. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  2. A simple approach for estimating the refractive index structure parameter (Cn²) profile in the atmosphere.

    PubMed

    Basu, Sukanta

    2015-09-01

    Utilizing the so-called Thorpe scale as a measure of the turbulence outer scale, we propose a physically-based approach for the estimation of Cn2 profiles in the lower atmosphere. This approach only requires coarse-resolution temperature profiles (a.k.a., soundings) as input, yet it has the intrinsic ability to capture layers of high optical turbulence. The prowess of this computationally inexpensive approach is demonstrated by validations against observational data from a field campaign over Mauna Kea, Hawaii.
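
    The Thorpe-scale construction at the heart of this approach can be sketched directly: sort the coarse potential-temperature profile into a statically stable (monotonic) order and record how far each sample moved. This is the standard textbook construction, assuming heights in ascending order; the paper's full Cn² estimation involves further steps.

```python
import math

def thorpe_displacements(z, theta):
    # Sort the profile so potential temperature increases with height
    # (statically stable) and record each sample's vertical displacement.
    order = sorted(range(len(theta)), key=lambda i: theta[i])
    disp = [0.0] * len(z)
    for dest, src in enumerate(order):
        disp[src] = z[dest] - z[src]
    return disp

def thorpe_scale(z, theta):
    # Thorpe scale: root-mean-square of the displacements.
    d = thorpe_displacements(z, theta)
    return math.sqrt(sum(x * x for x in d) / len(d))
```

    Sign conventions for the displacements vary in the literature; only their root-mean-square enters the outer-scale estimate.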

  3. Bounds on the power of proofs and advice in general physical theories.

    PubMed

    Lee, Ciarán M; Hoban, Matty J

    2016-06-01

    Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text], which does not make use of any uniquely quantum structure, such as the fact that observables correspond to self-adjoint operators, and thus may be of independent interest.

  4. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real-time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that computation process is well established since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation enables to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex since propagation of elemental polygons between non-parallel planes should be implemented. Finally, since a depth map of the scene is acquired at the same time than the intensity image, a depth layer approach can also be adopted. This technique is appropriate for a fast computation since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with depth layer approach is convenient for real time applications, but point cloud method is more appropriate when high resolution is needed. In this study, since Kinect can be used to obtain both point cloud and depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
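
    The depth-layer approach is fast because each layer is propagated to the hologram plane with an FFT-based method such as the angular spectrum of plane waves. A minimal numpy sketch under that assumption (illustrative, not the authors' code):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, distance):
    # Propagate a complex optical field between parallel planes with the
    # angular spectrum method: FFT, multiply by the transfer function, inverse FFT.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent components
    H = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def layered_hologram(layers, wavelength, dx, z0, dz):
    # Sum the propagated contributions of each depth layer at the hologram
    # plane; `layers` is a list of complex amplitude slices, nearest first.
    total = np.zeros_like(layers[0], dtype=complex)
    for k, layer in enumerate(layers):
        total += angular_spectrum(layer, wavelength, dx, z0 + k * dz)
    return total
```

    One FFT pair per depth layer is what makes this route suitable for real-time use, at the cost of quantizing the scene to the depth-map resolution.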

  5. Digitized adiabatic quantum computing with a superconducting circuit.

    PubMed

    Barends, R; Shabani, A; Lamata, L; Kelly, J; Mezzacapo, A; Las Heras, U; Babbush, R; Fowler, A G; Campbell, B; Chen, Yu; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Lucero, E; Megrant, A; Mutus, J Y; Neeley, M; Neill, C; O'Malley, P J J; Quintana, C; Roushan, P; Sank, D; Vainsencher, A; Wenner, J; White, T C; Solano, E; Neven, H; Martinis, John M

    2016-06-09

    Quantum mechanics can help to solve complex problems in physics and chemistry, provided they can be programmed in a physical device. In adiabatic quantum computing, a system is slowly evolved from the ground state of a simple initial Hamiltonian to a final Hamiltonian that encodes a computational problem. The appeal of this approach lies in the combination of simplicity and generality; in principle, any problem can be encoded. In practice, applications are restricted by limited connectivity, available interactions and noise. A complementary approach is digital quantum computing, which enables the construction of arbitrary interactions and is compatible with error correction, but uses quantum circuit algorithms that are problem-specific. Here we combine the advantages of both approaches by implementing digitized adiabatic quantum computing in a superconducting system. We tomographically probe the system during the digitized evolution and explore the scaling of errors with system size. We then let the full system find the solution to random instances of the one-dimensional Ising problem as well as problem Hamiltonians that involve more complex interactions. This digital quantum simulation of the adiabatic algorithm consists of up to nine qubits and up to 1,000 quantum logic gates. The demonstration of digitized adiabatic quantum computing in the solid state opens a path to synthesizing long-range correlations and solving complex computational problems. When combined with fault-tolerance, our approach becomes a general-purpose algorithm that is scalable.

  6. Agent Model Development for Assessing Climate-Induced Geopolitical Instability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boslough, Mark B.; Backus, George A.

    2005-12-01

    We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.

  7. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.

  8. Computer-aided design/computer-aided manufacturing skull base drill.

    PubMed

    Couldwell, William T; MacDonald, Joel D; Thomas, Charles L; Hansen, Bradley C; Lapalikar, Aniruddha; Thakkar, Bharat; Balaji, Alagar K

    2017-05-01

    The authors have developed a simple device for computer-aided design/computer-aided manufacturing (CAD-CAM) that uses an image-guided system to define a cutting tool path that is shared with a surgical machining system for drilling bone. Information from 2D images (obtained via CT and MRI) is transmitted to a processor that produces a 3D image. The processor generates code defining an optimized cutting tool path, which is sent to a surgical machining system that can drill the desired portion of bone. This tool has applications for bone removal in both cranial and spine neurosurgical approaches. Such applications have the potential to reduce surgical time and associated complications such as infection or blood loss. The device enables rapid removal of bone within 1 mm of vital structures. The validity of such a machining tool is exemplified in the rapid (< 3 minutes machining time) and accurate removal of bone for transtemporal (for example, translabyrinthine) approaches.

  9. Trapped-Ion Quantum Logic with Global Radiation Fields.

    PubMed

    Weidt, S; Randall, J; Webster, S C; Lake, K; Webb, A E; Cohen, I; Navickas, T; Lekitsch, B; Retzker, A; Hensinger, W K

    2016-11-25

    Trapped ions are a promising tool for building a large-scale quantum computer. However, the number of required radiation fields for the realization of quantum gates in any proposed ion-based architecture scales with the number of ions within the quantum computer, posing a major obstacle when imagining a device with millions of ions. Here, we present a fundamentally different approach for trapped-ion quantum computing where this detrimental scaling vanishes. The method is based on individually controlled voltages applied to each logic gate location to facilitate the actual gate operation analogous to a traditional transistor architecture within a classical computer processor. To demonstrate the key principle of this approach we implement a versatile quantum gate method based on long-wavelength radiation and use this method to generate a maximally entangled state of two quantum engineered clock qubits with fidelity 0.985(12). This quantum gate also constitutes a simple-to-implement tool for quantum metrology, sensing, and simulation.

  10. Computational Immunology for the Defense of Large Scale Systems

    DTIC Science & Technology

    2002-07-01

    or unusual activity (e.g., multiple login attempts, possibly in order to guess a password). We can summarize our results as follows: • Our...such as those used in SRI’s Emerald project. There are two important characteristics of the approach introduced in [5]. First, it identifies a simple

  11. A Pedagogical Approach to the Magnus Expansion

    ERIC Educational Resources Information Center

    Blanes, S.; Casas, F.; Oteo, J. A.; Ros, J.

    2010-01-01

    Time-dependent perturbation theory as a tool to compute approximate solutions of the Schrodinger equation does not preserve unitarity. Here we present, in a simple way, how the "Magnus expansion" (also known as "exponential perturbation theory") provides such unitary approximate solutions. The purpose is to illustrate the importance and…

  12. 77 FR 72766 - Small Business Size Standards: Support Activities for Mining

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-06

    ... its entirety for parties who have an interest in SBA's overall approach to establishing, evaluating....gov , Docket ID: SBA-2009- 0008. SBA continues to welcome comments on its methodology from interested.... Average firm size. SBA computes two measures of average firm size: simple average and weighted average...
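
    The two measures named in the notice, simple average and weighted average firm size, can be illustrated as follows. This is one common reading of SBA's size standards methodology, in which the weighted average weights each firm by its share of total receipts; the notice itself defines the measures precisely.

```python
def simple_average(receipts):
    # Simple average firm size: total receipts divided by number of firms.
    return sum(receipts) / len(receipts)

def weighted_average(receipts):
    # Weighted average firm size: each firm weighted by its share of total
    # receipts, so large firms count proportionally more.
    total = sum(receipts)
    return sum(r * (r / total) for r in receipts)
```

    For a skewed industry the weighted average sits well above the simple average, which is why the methodology reports both.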

  13. The Instructional Cost Index. A Simplified Approach to Interinstitutional Cost Comparison.

    ERIC Educational Resources Information Center

    Beatty, George, Jr.; And Others

    The paper describes a simple, yet effective method of computing a comparative index of instructional costs. The Instructional Cost Index identifies direct cost differentials among instructional programs. Cost differentials are described in terms of differences among numerical values of variables that reflect fundamental academic and resource…

  14. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
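
    The basic strategy the paper starts from, classic DE/rand/1 with binomial crossover, is short enough to sketch. The efficiency improvements reviewed in the paper and its Navier-Stokes objective are not reproduced here; the sphere function stands in as a toy objective.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    # Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    # members, crossover with the current member, keep the trial if no worse.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp mutant to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Toy usage: minimize the sphere function on [-5, 5]^2.
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
```

    Each generation costs pop_size objective evaluations, which is exactly why expensive CFD objectives push toward the parallelization and efficiency measures the paper evaluates.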

  15. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to control integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations as a function of the characteristic parameters of the model. For implementation of our proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with the Matausek-Micić scheme for linear systems using simulations.

  16. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

    The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
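
    The first, preprocessing-free algorithm is essentially a direct character-by-character scan from the two positions. A minimal sketch of that idea (the paper's implementations add engineering this sketch omits):

```python
def lce(s, i, j):
    # Longest Common Extension of s at positions i and j: length of the
    # longest substring that starts at both, by direct comparison.
    n = len(s)
    k = 0
    while i + k < n and j + k < n and s[i + k] == s[j + k]:
        k += 1
    return k
```

    For example, lce("abcabcab", 0, 3) is 5, the length of "abcab". The scan is worst-case linear per query, but its cache-friendly access pattern is what makes it competitive in practice.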

  17. On the use and computation of the Jordan canonical form in system theory

    NASA Technical Reports Server (NTRS)

    Sridhar, B.; Jordan, D.

    1974-01-01

    This paper investigates various aspects of the application of the Jordan canonical form of a matrix in system theory and develops a computational approach to determining the Jordan form for a given matrix. Applications include pole placement, controllability and observability studies, serving as an intermediate step in yielding other canonical forms, and theorem proving. The computational method developed in this paper is both simple and efficient. The method is based on the definition of a generalized eigenvector and a natural extension of Gauss elimination techniques. Examples are included for demonstration purposes.

  18. Classical problems in computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    In relation to the expected problems in the development of computational aeroacoustics (CAA), the preliminary applications were to classical problems where the known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems such as direct simulations, acoustic analogies and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.

  19. Adiabatic markovian dynamics.

    PubMed

    Oreshkov, Ognyan; Calsamiglia, John

    2010-07-30

    We propose a theory of adiabaticity in quantum markovian dynamics based on a decomposition of the Hilbert space induced by the asymptotic behavior of the Lindblad semigroup. A central idea of our approach is that the natural generalization of the concept of eigenspace of the Hamiltonian in the case of markovian dynamics is a noiseless subsystem with a minimal noisy cofactor. Unlike previous attempts to define adiabaticity for open systems, our approach deals exclusively with physical entities and provides a simple, intuitive picture at the Hilbert-space level, linking the notion of adiabaticity to the theory of noiseless subsystems. As two applications of our theory, we propose a general framework for decoherence-assisted computation in noiseless codes and a dissipation-driven approach to holonomic computation based on adiabatic dragging of subsystems that is generally not achievable by nondissipative means.

  20. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
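
The multiresolution gradient-descent scheme of the paper is not reproduced here; as a hedged one-dimensional illustration only, the optimal L2 transport between two 1-D samples reduces to matching sorted points (monotone rearrangement), which gives a quick estimate of the squared transport cost:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, 2000)    # source sample
nu = rng.normal(3.0, 0.5, 2000)    # target sample

# In 1-D the optimal L2 map is monotone: pair the i-th smallest source
# point with the i-th smallest target point, then average the squared
# displacements to estimate the squared Wasserstein-2 cost.
src, dst = np.sort(mu), np.sort(nu)
cost = np.mean((src - dst) ** 2)
print(f"estimated squared W2 cost: {cost:.2f}")
```

For these two Gaussians the exact squared Wasserstein-2 distance is (3 − 0)² + (1 − 0.5)² = 9.25, so the sample estimate should land nearby; in higher dimensions no such closed form exists, which is where descent schemes like the paper's come in.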

  1. A fuzzy clustering algorithm to detect planar and quadric shapes

    NASA Technical Reports Server (NTRS)

    Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa

    1992-01-01

    In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is computationally and implementationally simple, and it overcomes many of the drawbacks of the existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.
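
The shell-clustering algorithm of the paper is not shown here; as a minimal sketch of the fuzzy-clustering machinery it builds on, a plain fuzzy c-means with point prototypes (the paper replaces these with planar and quadric prototypes):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate weighted-centroid and membership
    updates. Point prototypes only -- a stand-in, not the paper's shells."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = d ** (-p) / np.sum(d ** (-p), axis=1, keepdims=True)
    return centers, U

rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
X = np.vstack([rng1.normal(0.0, 0.3, (50, 2)),     # cluster near (0, 0)
               rng2.normal(5.0, 0.3, (50, 2))])    # cluster near (5, 5)
centers, U = fuzzy_c_means(X)
print(np.round(centers, 1))
```

Replacing the Euclidean point-to-center distance with a point-to-plane or point-to-quadric residual is the essential change the paper makes.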

  2. A Computer-Based Educational Approach to the Air Command and Staff College Associate Program

    DTIC Science & Technology

    1985-04-01

    control interactive video, grade student responses and perform some analysis on the data. Its main advantages lie in the ability of the author to...basic goal of providing the instructor with assistance in the development of good CBE. One way of viewing the different tools on the market is to...practice, tutorials and simple games all have as their premise the computer replacing the teacher in a one-on-one encounter. The other modes, simulation

  3. Anytime query-tuned kernel machine classifiers via Cholesky factorization

    NASA Technical Reports Server (NTRS)

    DeCoste, D.

    2002-01-01

    We recently demonstrated 2 to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste,2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
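
The anytime output bounds themselves are not reproduced here; as a hedged sketch of the linear-algebra core only, a kernel Gram matrix factored once by Cholesky and reused for solves (kernel-ridge style, illustrative rather than the paper's classifier):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=100))   # toy labels

# RBF Gram matrix; the Cholesky factor is computed once, and every
# subsequent solve reuses it via two cheap triangular solves.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
L = np.linalg.cholesky(K + 1e-3 * np.eye(len(X)))   # small ridge for stability
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y)) # dual coefficients

x_new = rng.normal(size=3)
k_new = np.exp(-0.5 * ((X - x_new) ** 2).sum(-1))
print("prediction:", np.sign(k_new @ alpha))
```

Factoring once and back-substituting per query is the source of the low per-query overhead the abstract refers to.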

  4. Memory management in genome-wide association studies

    PubMed Central

    2009-01-01

    Genome-wide association is a powerful tool for the identification of genes that underlie common diseases. Genome-wide association studies generate billions of genotypes and pose significant computational challenges for most users including limited computer memory. We applied a recently developed memory management tool to two analyses of North American Rheumatoid Arthritis Consortium studies and measured the performance in terms of central processing unit and memory usage. We conclude that our memory management approach is simple, efficient, and effective for genome-wide association studies. PMID:20018047

  5. Simple, Inexpensive Attainment and Measurement of Very High Cooling and Warming Rates

    PubMed Central

    Kleinhans, F.W.; Seki, Shinsuke; Mazur, Peter

    2010-01-01

    We have developed a simple, inexpensive system (< $300 US) for measuring cooling and warming rates of small (~0.1 μl) aqueous samples at rates as high as 10⁵ °C/min. The measurement system itself can track rates approaching one million °C/min. For temperature sensing, a Type T thermocouple with 50 μm wire was used. The thermocouple output voltage was read with an inexpensive USB-based digital oscilloscope interfaced to a laptop computer, and the raw data were processed with MS Excel. PMID:20599881
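
The hardware details are in the paper; a minimal sketch of the post-processing step only, estimating the cooling rate from a sampled thermocouple trace (synthetic data, not the authors' measurements):

```python
import numpy as np

# Synthetic thermocouple trace: exponential plunge from 20 C toward
# liquid-nitrogen temperature (-196 C) over 50 ms.
t = np.linspace(0.0, 0.05, 500)                    # seconds
T = -196.0 + (20.0 - (-196.0)) * np.exp(-200.0 * t)

rate = np.gradient(T, t)                           # deg C per second
peak = rate.min() * 60.0                           # fastest cooling, deg C/min
print(f"peak cooling rate: {peak:.0f} deg C/min")
```

For this synthetic trace the initial slope is about −216 °C × 200 s⁻¹, i.e. a peak cooling rate on the order of 10⁶ °C/min, the regime the abstract describes.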

  6. Constructing a simple parametric model of shoulder from medical images

    NASA Astrophysics Data System (ADS)

    Atmani, H.; Fofi, D.; Merienne, F.; Trouilloud, P.

    2006-02-01

    The modelling of the shoulder joint is an important step in setting up a Computer-Aided Surgery System for shoulder prosthesis placement. Our approach mainly concerns the bone structures of the scapulo-humeral joint. Our goal is to develop a tool that allows the surgeon to extract morphological data from medical images in order to interpret the biomechanical behaviour of a prosthesised shoulder for preoperative and peroperative virtual surgery. To provide a light and easy-to-handle representation of the shoulder, a geometrical model composed of quadrics, planes and other simple forms is proposed.

  7. Simple Shared Motifs (SSM) in conserved region of promoters: a new approach to identify co-regulation patterns.

    PubMed

    Gruel, Jérémy; LeBorgne, Michel; LeMeur, Nolwenn; Théret, Nathalie

    2011-09-12

    Regulation of gene expression plays a pivotal role in cellular functions. However, understanding the dynamics of transcription remains a challenging task. A host of computational approaches have been developed to identify regulatory motifs, mainly based on the recognition of DNA sequences for transcription factor binding sites. Recent integration of additional data from genomic analyses or phylogenetic footprinting has significantly improved these methods. Here, we propose a different approach based on the compilation of Simple Shared Motifs (SSM), groups of sequences defined by their length and similarity and present in conserved sequences of gene promoters. We developed an original algorithm to search and count SSM in pairs of genes. An exceptional number of SSM is considered as a common regulatory pattern. The SSM approach is applied to a sample set of genes and validated using functional gene-set enrichment analyses. We demonstrate that the SSM approach selects genes that are over-represented in specific biological categories (Ontology and Pathways) and are enriched in co-expressed genes. Finally we show that genes co-expressed in the same tissue or involved in the same biological pathway have increased SSM values. Using unbiased clustering of genes, Simple Shared Motifs analysis constitutes an original contribution to provide a clearer definition of expression networks.
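
The authors' exact algorithm and its significance model are in the paper; as a naive stand-in for the counting step only, distinct fixed-length substrings present in both promoter sequences (the paper additionally allows similarity rather than exact identity):

```python
def shared_motifs(seq_a, seq_b, length=8):
    """Count distinct substrings of a given length present in both
    sequences -- a simplified sketch of SSM counting for a gene pair."""
    motifs_a = {seq_a[i:i + length] for i in range(len(seq_a) - length + 1)}
    motifs_b = {seq_b[i:i + length] for i in range(len(seq_b) - length + 1)}
    return len(motifs_a & motifs_b)

# Hypothetical toy promoter fragments, for illustration only.
a = "ATGCGTACGTTAGCATGCGT"
b = "CCATGCGTACGAATTGCGTA"
print(shared_motifs(a, b, length=6))
```

In the paper's framework, a pair of genes whose shared-motif count is exceptionally high relative to expectation is flagged as a candidate co-regulation pattern.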

  8. Simple Shared Motifs (SSM) in conserved region of promoters: a new approach to identify co-regulation patterns

    PubMed Central

    2011-01-01

    Background Regulation of gene expression plays a pivotal role in cellular functions. However, understanding the dynamics of transcription remains a challenging task. A host of computational approaches have been developed to identify regulatory motifs, mainly based on the recognition of DNA sequences for transcription factor binding sites. Recent integration of additional data from genomic analyses or phylogenetic footprinting has significantly improved these methods. Results Here, we propose a different approach based on the compilation of Simple Shared Motifs (SSM), groups of sequences defined by their length and similarity and present in conserved sequences of gene promoters. We developed an original algorithm to search and count SSM in pairs of genes. An exceptional number of SSM is considered as a common regulatory pattern. The SSM approach is applied to a sample set of genes and validated using functional gene-set enrichment analyses. We demonstrate that the SSM approach selects genes that are over-represented in specific biological categories (Ontology and Pathways) and are enriched in co-expressed genes. Finally we show that genes co-expressed in the same tissue or involved in the same biological pathway have increased SSM values. Conclusions Using unbiased clustering of genes, Simple Shared Motifs analysis constitutes an original contribution to provide a clearer definition of expression networks. PMID:21910886

  9. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  10. Structural dynamics and vibrations of damped, aircraft-type structures

    NASA Technical Reports Server (NTRS)

    Young, Maurice I.

    1992-01-01

    Engineering preliminary design methods for approximating and predicting the effects of viscous or equivalent viscous-type damping treatments on the free and forced vibration of lightly damped aircraft-type structures are developed. Similar developments are presented for dynamic hysteresis viscoelastic-type damping treatments. It is shown by both engineering analysis and numerical illustrations that the intermodal coupling of the undamped modes arising from the introduction of damping may be neglected in applying these preliminary design methods, except when dissimilar modes of these lightly damped, complex aircraft-type structures have identical or nearly identical natural frequencies. In such cases, it is shown that a relatively simple, additional interaction calculation between pairs of modes exhibiting this 'modal response' phenomenon suffices in the prediction of interacting modal damping fractions. The accuracy of the methods is shown to be very good to excellent, depending on the normal natural frequency separation of the system modes, thereby permitting a relatively simple preliminary design approach. This approach is shown to be a natural precursor to elaborate finite element, digital computer design computations in evaluating the type, quantity, and location of damping treatment.

  11. Universal RCFT correlators from the holomorphic bootstrap

    NASA Astrophysics Data System (ADS)

    Mukhi, Sunil; Muralidhara, Girish

    2018-02-01

    We elaborate and extend the method of Wronskian differential equations for conformal blocks to compute four-point correlation functions on the plane for classes of primary fields in rational (and possibly more general) conformal field theories. This approach leads to universal differential equations for families of CFTs and provides a very simple re-derivation of the BPZ results for the degenerate fields ϕ₁,₂ and ϕ₂,₁ in the c < 1 minimal models. We apply this technique to compute correlators for the WZW models corresponding to the Deligne-Cvitanović exceptional series of Lie algebras. The application turns out to be subtle in certain cases where there are multiple decoupled primaries. The power of this approach is demonstrated by applying it to compute four-point functions for the Baby Monster CFT, which does not belong to any minimal series.

  12. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In the case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to others can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.

  13. Using a new discretization approach to design a delayed LQG controller

    NASA Astrophysics Data System (ADS)

    Haraguchi, M.; Hu, H. Y.

    2008-07-01

    Discrete-time controls have become increasingly preferred in engineering because of their easy implementation and simple computations. However, the available discretization approaches for systems with time delays increase the system dimensions and have a high computational cost. This paper presents an effective discretization approach for continuous-time systems with an input delay. The approach enables one to transform the input-delay system into a delay-free system while keeping the system dimensions unchanged in the state transformation. To demonstrate an application of the approach, this paper presents the design of an LQ regulator for continuous-time systems with an input delay and gives a state observer with a Kalman filter for estimating the full-state vector from measurements of the system. The case studies in the paper support the efficacy and efficiency of the proposed approach as applied to the vibration control of a three-story structure model with the actuator delay taken into account.
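
The paper works in continuous time; as a hedged discrete-time analogue of the same dimension-preserving idea, the classical predictor state turns an input-delay plant into a delay-free one without enlarging the state vector:

```python
import numpy as np

# Plant with a d-step input delay: x[k+1] = A x[k] + B u[k-d].
# The predictor state z[k] = A^d x[k] + sum_{i<d} A^(d-1-i) B u[k-d+i]
# obeys the delay-free recursion z[k+1] = A z[k] + B u[k], with the
# state dimension unchanged. (Classical reduction, not the paper's
# continuous-time construction.)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
d = 3

def predictor(x, u_hist):
    z = np.linalg.matrix_power(A, d) @ x
    for i, u in enumerate(u_hist):                 # u[k-d] .. u[k-1]
        z += np.linalg.matrix_power(A, d - 1 - i) @ B @ u
    return z

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 1))
u_hist = [rng.normal(size=(1, 1)) for _ in range(d)]
u_k = rng.normal(size=(1, 1))

x_next = A @ x + B @ u_hist[0]                     # plant sees u[k-d]
z = predictor(x, u_hist)
z_next = predictor(x_next, u_hist[1:] + [u_k])
assert np.allclose(z_next, A @ z + B @ u_k)        # delay-free dynamics
print("delay-free recursion verified")
```

Once the system is delay-free in z, a standard LQ regulator or Kalman filter can be designed on it directly, which is the route the paper takes for its continuous-time version.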

  14. A Simple and Computationally Efficient Sampling Approach to Covariate Adjustment for Multifactor Dimensionality Reduction Analysis of Epistasis

    PubMed Central

    Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.

    2010-01-01

    Epistasis or gene-gene interaction is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates like age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193

  15. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.

  16. Programmable computing with a single magnetoresistive element

    NASA Astrophysics Data System (ADS)

    Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.

    2003-10-01

    The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
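
The physics of the MRAM cell is the paper's contribution; as a minimal software model of the behavior it describes (not the device itself), one element whose Boolean function is selected at run-time:

```python
# One reconfigurable element: a stored "mode" selects which of the four
# gate functions the same element computes on its two inputs, mirroring
# the run-time programmability of the magnetoresistive cell.
def programmable_gate(a: int, b: int, mode: str) -> int:
    s = a + b                       # the cell effectively thresholds a sum
    table = {"AND":  s == 2,
             "OR":   s >= 1,
             "NAND": s < 2,
             "NOR":  s == 0}
    return int(table[mode])

for mode in ("AND", "OR", "NAND", "NOR"):
    print(mode, [programmable_gate(a, b, mode)
                 for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])
```

In the hardware version the output is additionally non-volatile, so the gate's result survives power-down without a separate memory transfer.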

  17. A new approach to global control of redundant manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    A new and simple approach to configuration control of redundant manipulators is presented. In this approach, the redundancy is utilized to control the manipulator configuration directly in task space, where the task will be performed. A number of kinematic functions are defined to reflect the desirable configuration that will be achieved for a given end-effector position. The user-defined kinematic functions and the end-effector Cartesian coordinates are combined to form a set of task-related configuration variables as generalized coordinates for the manipulator. An adaptive scheme is then utilized to globally control the configuration variables so as to achieve tracking of some desired reference trajectories. This accomplishes the basic task of desired end-effector motion, while utilizing the redundancy to achieve any additional task through the desired time variation of the kinematic functions. The control law is simple and computationally very fast, and does not require the complex manipulator dynamic model.

  18. NeuronDepot: keeping your colleagues in sync by combining modern cloud storage services, the local file system, and simple web applications

    PubMed Central

    Rautenberg, Philipp L.; Kumaraswamy, Ajayrama; Tejero-Cantero, Alvaro; Doblander, Christoph; Norouzian, Mohammad R.; Kai, Kazuki; Jacobsen, Hans-Arno; Ai, Hiroyuki; Wachtler, Thomas; Ikeno, Hidetoshi

    2014-01-01

    Neuroscience today deals with a “data deluge” derived from the availability of high-throughput sensors of brain structure and brain activity, and increased computational resources for detailed simulations with complex output. We report here (1) a novel approach to data sharing between collaborating scientists that brings together file system tools and cloud technologies, (2) a service implementing this approach, called NeuronDepot, and (3) an example application of the service to a complex use case in the neurosciences. The main drivers for our approach are to facilitate collaborations with a transparent, automated data flow that shields scientists from having to learn new tools or data structuring paradigms. Using NeuronDepot is simple: one-time data assignment from the originator and cloud based syncing—thus making experimental and modeling data available across the collaboration with minimum overhead. Since data sharing is cloud based, our approach opens up the possibility of using new software developments and hardware scalability which are associated with elastic cloud computing. We provide an implementation that relies on existing synchronization services and is usable from all devices via a reactive web interface. We are motivating our solution by solving the practical problems of the GinJang project, a collaboration of three universities across eight time zones with a complex workflow encompassing data from electrophysiological recordings, imaging, morphological reconstructions, and simulations. PMID:24971059

  19. NeuronDepot: keeping your colleagues in sync by combining modern cloud storage services, the local file system, and simple web applications.

    PubMed

    Rautenberg, Philipp L; Kumaraswamy, Ajayrama; Tejero-Cantero, Alvaro; Doblander, Christoph; Norouzian, Mohammad R; Kai, Kazuki; Jacobsen, Hans-Arno; Ai, Hiroyuki; Wachtler, Thomas; Ikeno, Hidetoshi

    2014-01-01

    Neuroscience today deals with a "data deluge" derived from the availability of high-throughput sensors of brain structure and brain activity, and increased computational resources for detailed simulations with complex output. We report here (1) a novel approach to data sharing between collaborating scientists that brings together file system tools and cloud technologies, (2) a service implementing this approach, called NeuronDepot, and (3) an example application of the service to a complex use case in the neurosciences. The main drivers for our approach are to facilitate collaborations with a transparent, automated data flow that shields scientists from having to learn new tools or data structuring paradigms. Using NeuronDepot is simple: one-time data assignment from the originator and cloud based syncing-thus making experimental and modeling data available across the collaboration with minimum overhead. Since data sharing is cloud based, our approach opens up the possibility of using new software developments and hardware scalability which are associated with elastic cloud computing. We provide an implementation that relies on existing synchronization services and is usable from all devices via a reactive web interface. We are motivating our solution by solving the practical problems of the GinJang project, a collaboration of three universities across eight time zones with a complex workflow encompassing data from electrophysiological recordings, imaging, morphological reconstructions, and simulations.

  20. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  1. 4D Optimization of Scanned Ion Beam Tracking Therapy for Moving Tumors

    PubMed Central

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-01-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking. PMID:24889215

  2. 4D optimization of scanned ion beam tracking therapy for moving tumors

    NASA Astrophysics Data System (ADS)

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-07-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking.

  3. A methodology for the design of experiments in computational intelligence with multiple regression models.

    PubMed

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  4. A methodology for the design of experiments in computational intelligence with multiple regression models

    PubMed Central

    Gestal, Marcos; Munteanu, Cristian R.; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable. PMID:27920952

  5. Gravitational decoupling and the Picard-Lefschetz approach

    NASA Astrophysics Data System (ADS)

    Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William

    2018-01-01

    In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as Mp→∞ . This implies that in the Euclidean framework, there is no systematic expansion in powers of GN for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.

  6. Phase-unwrapping algorithm by a rounding-least-squares approach

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and requires no user intervention, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
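
    The rounding step at the core of such an algorithm is easiest to see in one dimension: round each wrapped phase difference to the nearest multiple of 2π to recover the jump count, then accumulate. This is a hedged 1D sketch of the idea, not the paper's 2D least-squares implementation:

```python
import math

def wrap(phi):
    # Wrap a phase value into (-pi, pi].
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap(wrapped):
    # Round each wrapped difference to the nearest multiple of 2*pi to
    # recover the integer jump count, then subtract it out cumulatively.
    out = [wrapped[0]]
    for prev, curr in zip(wrapped, wrapped[1:]):
        d = curr - prev
        jumps = round(d / (2 * math.pi))   # integer number of 2*pi wraps
        out.append(out[-1] + d - 2 * math.pi * jumps)
    return out

true_phase = [0.15 * i for i in range(60)]   # smooth ramp, wraps past pi
recovered = unwrap([wrap(p) for p in true_phase])
err = max(abs(r - t) for r, t in zip(recovered, true_phase))
print(err < 1e-9)
```

    In two dimensions the jump counts are estimated on both gradient components and the unwrapped phase is recovered by a global least-squares solve rather than by this simple cumulative sum.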

  7. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    NASA Astrophysics Data System (ADS)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-10-01

    We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains from switching to the imaginary time representation, in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals, and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program files doi: http://dx.doi.org/10.17632/cpchkfty4w.1. Licensing provisions: GNU General Public License. Programming language: Fortran 90. External routines/libraries: BLAS, LAPACK, MPI (optional). Nature of problem: Direct implementation of the GW method scales as N4 with the system size, which quickly becomes prohibitively time consuming even on modern computers. Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N3. Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems with up to 15 atoms per unit cell.

  8. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
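
    A rate-equation model of the kind described — breaks binding a repair enzyme into a complex that is then resolved — can be integrated with a simple forward-Euler loop. The rate constants and initial pools below are invented for illustration, not fitted values from the paper:

```python
# Hypothetical two-step pathway: breaks B bind free enzyme E into
# complexes C (B + E -> C), which resolve at rate k2 (C -> repaired,
# releasing the enzyme). All units are arbitrary.
k1, k2 = 0.5, 0.1
B, E, C = 100.0, 10.0, 0.0   # initial breaks, free enzyme, complexes
dt, T = 0.01, 200.0

history = []
t = 0.0
while t < T:
    bind = k1 * B * E * dt      # B + E -> C
    resolve = k2 * C * dt       # C -> repaired (enzyme released)
    B -= bind
    C += bind - resolve
    E += resolve - bind
    history.append(B + C)       # unrejoined DSB = free breaks + complexes
    t += dt

# The delay introduced by complex formation gives the total unrejoined
# DSB curve its apparent fast and slow components; it decays monotonically.
remaining = history
print(remaining[0] > remaining[len(remaining) // 2] > remaining[-1] >= 0.0)
```

    The enzyme-limited binding step is what produces the apparent biphasic rejoining: the curve is not a single exponential even though each elementary reaction is first or second order.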

  9. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally obtain a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all the initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared with the state-of-the-art segmentation methods recently proposed in the literature.
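
    The clustering step underlying such a fusion scheme is ordinary K-means. A minimal 1D Lloyd's-algorithm sketch (scalar values standing in for the local label histograms, with deterministic min/max initialisation) looks like this:

```python
import random

def kmeans_1d(values, init_centers, iters=25):
    # Plain Lloyd's algorithm on scalar features; a minimal stand-in for
    # the K-means step applied per colour space in the fusion scheme.
    centers = list(init_centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

random.seed(1)
# Two well-separated groups of "pixel features".
values = ([random.gauss(0.0, 0.3) for _ in range(50)] +
          [random.gauss(5.0, 0.3) for _ in range(50)])
centers = kmeans_1d(values, [min(values), max(values)])
print(abs(centers[0]) < 0.5 and abs(centers[1] - 5.0) < 0.5)
```

    In the fusion framework the same clustering is run once per colour space and once more on the concatenated local label histograms; this sketch only shows the elementary building block.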

  10. Comparison of different approaches of modelling in a masonry building

    NASA Astrophysics Data System (ADS)

    Saba, M.; Meloni, D.

    2017-12-01

    The present work has the objective of modelling a simple masonry building through two different modelling methods, in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri of S.T.A. Data S.r.l. and Sismicad12 of Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements Method (FME), which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results with a greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.

  11. Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models

    NASA Astrophysics Data System (ADS)

    Endah Saraswati, Teguh; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri

    2017-01-01

    Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory.

  12. An approach to quality and performance control in a computer-assisted clinical chemistry laboratory.

    PubMed Central

    Undrill, P E; Frazer, S C

    1979-01-01

    A locally developed, computer-based clinical chemistry laboratory system has been in operation since 1970. This utilises a Digital Equipment Co Ltd PDP 12 and an interconnected PDP 8/F computer. Details are presented of the performance and quality control techniques incorporated into the system. Laboratory performance is assessed through analysis of results from fixed-level control sera as well as from cumulative sum methods. At a simple level the presentation may be considered purely indicative, while at a more sophisticated level statistical concepts have been introduced to aid the laboratory controller in decision-making processes. PMID:438340
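
    The cumulative sum methods mentioned can be illustrated with a one-sided CUSUM on control-serum results: deviations above the target (less a slack allowance) accumulate, and an alarm is raised once the running sum crosses a decision threshold. The target, slack, threshold, and measurements below are invented for illustration:

```python
target, slack, threshold = 100.0, 0.5, 4.0   # invented control limits
results = [100.1, 99.8, 100.3, 102.0, 101.9, 102.2, 101.8, 102.1]

# One-sided CUSUM: accumulate deviations above target (minus the slack)
# and flag once the running sum crosses the decision threshold.
s, alarms = 0.0, []
for x in results:
    s = max(0.0, s + (x - target) - slack)
    alarms.append(s > threshold)

print(alarms.index(True))   # index of the first flagged measurement
```

    Unlike fixed-level limits, the CUSUM reacts to a sustained small shift (here, the run of values near 102) well before any single result would breach a conventional control limit.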

  13. The implementation of AI technologies in computer wargames

    NASA Astrophysics Data System (ADS)

    Tiller, John A.

    2004-08-01

    Computer wargames involve the most in-depth analysis of general game theory. The enumerated turns of a game like chess are dwarfed by the exponentially larger possibilities of even a simple computer wargame. Implementing challenging AI in computer wargames is an important goal in both the commercial and military environments. In the commercial marketplace, customers demand a challenging AI opponent when they play a computer wargame and are frustrated by a lack of competence on the part of the AI. In the military environment, challenging AI opponents are important for several reasons. A challenging AI opponent will force the military professional to avoid routine or set-piece approaches to situations and cause them to think much more deeply about military situations before taking action. A good AI opponent would also include the national characteristics of the opponent being simulated, thus providing the military professional with even more of a challenge in planning and approach. Implementing current AI technologies in computer wargames is a technological challenge. The goal is to join the needs of AI in computer wargames with the solutions of current AI technologies. This talk will address several of those issues, possible solutions, and currently unsolved problems.

  14. Combining patient administration and laboratory computer systems - a proposal to measure and improve the quality of care.

    PubMed

    Wolff, Anthony H; Kellett, John

    2011-12-01

    Several approaches to measuring the quality of hospital care have been suggested. We propose the simple and objective approach of using the health related data of the patient administration systems and the laboratory results that have been collected and stored electronically in hospitals for years. Imaginative manipulation of this data can give new insights into the quality of patient care. Copyright © 2011 European Federation of Internal Medicine. All rights reserved.

  15. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  16. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of the convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected. Morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced CNN input-feature expansion. The proposed approach was demonstrated on the example of a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset of symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much additional computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.
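
    The Hough Transform used to build the extra input features maps each edge point to votes in a (θ, ρ) accumulator; collinear points concentrate their votes in one cell. A minimal voting sketch on points lying along y = x (grid resolution and data chosen for illustration) is:

```python
import math

points = [(i, i) for i in range(10)]                  # points on the line y = x
thetas = [math.radians(d) for d in range(0, 180, 5)]  # coarse angle grid

acc = {}                                              # (theta_deg, rho) -> votes
for x, y in points:
    for t in thetas:
        # Normal form of a line: x*cos(theta) + y*sin(theta) = rho.
        rho = round(x * math.cos(t) + y * math.sin(t))
        key = (round(math.degrees(t)), rho)
        acc[key] = acc.get(key, 0) + 1

# The winning accumulator cell recovers the line's (theta, rho) parameters.
(theta_deg, rho), votes = max(acc.items(), key=lambda kv: kv[1])
print(theta_deg, rho, votes)
```

    For the line y = x the normal direction is 135°, so all ten points vote into the cell (135, 0); in the paper's setting the accumulator plane itself becomes an extra channel fed to the convolutional filters.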

  17. The modelling of the flow-induced vibrations of periodic flat and axial-symmetric structures with a wave-based method

    NASA Astrophysics Data System (ADS)

    Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.

    2018-06-01

    The stochastic response of periodic flat and axial-symmetric structures, subjected to random and spatially correlated loads, is here analysed through an approach based on the combination of a wave finite element method and a transfer matrix method. Although it has a lower computational cost, the present approach keeps the same accuracy as classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies without increasing the time of calculation. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.

  18. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  19. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step to obtain a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and realization theory is subsequently used to recover a minimum-order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.

  20. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations: A New Approach Based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Shang-Tao

    2003-01-01

    This paper reports on a significant advance in the area of non-reflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalvit, Diego; Messina, Riccardo; Maia Neto, Paulo

    We develop the scattering approach for the dispersive force on a ground-state atom above a corrugated surface. We present explicit results to first order in the corrugation amplitude. A variety of analytical results are derived in different limiting cases, including the van der Waals and Casimir-Polder regimes. We compute numerically the exact first-order dispersive potential for arbitrary separation distances and corrugation wavelengths, for a rubidium atom above a silicon or gold corrugated surface. We consider in detail the correction to the proximity force approximation, and present a very simple approximation algorithm for computing the potential.

  2. Video analysis of projectile motion using tablet computers as experimental tools

    NASA Astrophysics Data System (ADS)

    Klein, P.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-01-01

    Tablet computers were used as experimental tools to record and analyse the motion of a ball thrown vertically from a moving skateboard. Special applications plotted the measurement data component by component, allowing a simple determination of initial conditions and g in order to explore the underlying laws of motion. This experiment can easily be performed by students themselves, providing more autonomy in their problem-solving processes than traditional learning approaches. We believe that this autonomy and the authenticity of the experimental tool both foster their motivation.
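
    For uniformly sampled projectile data of this kind, g can be estimated directly from the second finite difference of the height samples, which is constant and equals −g·Δt² for quadratic motion. A sketch on noiseless simulated tracking data (throw parameters invented for illustration):

```python
g_true, v0, y0, dt = 9.81, 12.0, 1.5, 0.04   # simulated throw parameters
ts = [i * dt for i in range(30)]
ys = [y0 + v0 * t - 0.5 * g_true * t * t for t in ts]

# Second finite difference of uniformly sampled quadratic motion is
# constant: y[i+2] - 2*y[i+1] + y[i] = -g * dt**2.
second_diffs = [ys[i + 2] - 2 * ys[i + 1] + ys[i] for i in range(len(ys) - 2)]
g_est = -sum(second_diffs) / len(second_diffs) / dt ** 2
print(abs(g_est - g_true) < 1e-9)
```

    With real video-tracked positions the samples are noisy, so a least-squares quadratic fit of y(t) is preferable; the finite-difference estimate shows the underlying kinematics in its simplest form.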

  3. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  4. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
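
    The FORM step can be illustrated with the standard Hasofer-Lind/Rackwitz-Fiessler iteration on a linear limit state in standard normal space, where it converges in one step to the closed-form reliability index β = b/‖a‖. The coefficients below are illustrative, not the catchment model from the paper:

```python
import math

a, b = [3.0, 4.0], 10.0        # illustrative linear limit state g(u) = b - a.u

def g(u):
    # Failure occurs when g(u) <= 0.
    return b - sum(ai * ui for ai, ui in zip(a, u))

def grad_g(u):
    # Gradient is constant for a linear limit state.
    return [-ai for ai in a]

# HL-RF iteration for the most probable failure point u* in standard
# normal space: project onto the linearized limit-state surface.
u = [0.0, 0.0]
for _ in range(10):
    gr = grad_g(u)
    norm2 = sum(c * c for c in gr)
    scale = (sum(c * ui for c, ui in zip(gr, u)) - g(u)) / norm2
    u = [scale * c for c in gr]

beta = math.sqrt(sum(ui * ui for ui in u))   # reliability index
print(abs(beta - b / 5.0) < 1e-9)            # closed form: b / ||a||, ||a|| = 5
```

    For a nonlinear rainfall-runoff limit state the same iteration is run with numerical gradients, and the design point u* yields the representative design storm (the combination of rainfall intensity and other parameters most likely to cause the flood of interest).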

  5. Principal component of explained variance: An efficient and optimal data dimension reduction framework for association studies.

    PubMed

    Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie

    2018-05-01

    The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.

  6. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
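
    The arithmetic advantage of working modulo Fermat numbers can be seen in a toy number-theoretic transform over the Fermat prime F4 = 2^16 + 1, where every DFT operation becomes exact integer arithmetic mod p. The direct length-8 transform below is a sketch of the idea, not the paper's VLSI design:

```python
p = 65537                       # Fermat prime F4 = 2**16 + 1
n = 8
w = pow(3, (p - 1) // n, p)     # 3 is a primitive root modulo 65537

def ntt(a, root):
    # Direct O(n^2) transform; all arithmetic is exact integers mod p.
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

x = [1, 2, 3, 4, 0, 0, 0, 0]
X = ntt(x, w)                                # forward transform
w_inv = pow(w, p - 2, p)                     # modular inverse of the root
inv_n = pow(n, p - 2, p)                     # modular inverse of n
x_back = [(v * inv_n) % p for v in ntt(X, w_inv)]
print(x_back == x)
```

    Because 2 has small multiplicative order modulo Fermat numbers, the hardware implementations described in the paper can realise many of the root multiplications as bit shifts, which is what makes the architecture regular and VLSI-friendly.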

  7. Symmetric log-domain diffeomorphic Registration: a demons-based approach.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2008-01-01

    Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.

  8. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
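
    The query-distribution idea — split the input sequences into roughly equal shards, one per node, and run BLAST on each shard independently — can be sketched in a few lines. The tiny FASTA string and worker count below are illustrative, not DCBLAST's actual splitting code:

```python
fasta = """>seq1
ACGT
>seq2
GGCC
>seq3
TTAA
>seq4
CCGG
"""

# Parse the FASTA records, then deal them round-robin to the workers;
# each shard would be written to its own file and submitted as one job.
records = [">" + chunk for chunk in fasta.split(">") if chunk.strip()]
n_workers = 2
shards = [records[i::n_workers] for i in range(n_workers)]

print(len(records), [len(s) for s in shards])
```

    Because each query is independent in a BLAST search, the per-shard result files can simply be concatenated afterwards, which is what makes this embarrassingly parallel decomposition safe.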

  9. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  10. Statistical mechanics of homogeneous partly pinned fluid systems.

    PubMed

    Krakoviack, Vincent

    2010-12-01

The homogeneous partly pinned fluid systems are simple models of a fluid confined in a disordered porous matrix, obtained by arresting randomly chosen particles in a one-component bulk fluid or in one of the two components of a binary mixture. In this paper, their configurational properties are investigated. It is shown that a peculiar complementarity exists between the mobile and immobile phases, which originates from the fact that the solid is prepared in the presence of and in equilibrium with the adsorbed fluid. Simple identities follow, which connect different types of configurational averages, either relative to the fluid-matrix system or to the bulk fluid from which it is prepared. Crucial simplifications result for the computation of important structural quantities, both in computer simulations and in theoretical approaches. Finally, possible applications of the model in the field of dynamics in confinement or in strongly asymmetric mixtures are suggested.

  11. Aircraft stress sequence development: A complex engineering process made simple

    NASA Technical Reports Server (NTRS)

    Schrader, K. H.; Butts, D. G.; Sparks, W. A.

    1994-01-01

Development of stress sequences for critical aircraft structure requires flight-measured usage data, known aircraft loads, and established relationships between aircraft flight loads and structural stresses. The resulting cycle-by-cycle stress sequences are directly usable for crack growth analysis and coupon spectra tests. Often, an expert in loads and spectra development manipulates the usage data into a typical sequence of representative flight conditions for which loads and stresses are calculated. For a fighter/trainer type aircraft, this effort is repeated many times for each of the fatigue critical locations (FCL), resulting in the expenditure of numerous engineering hours. The Aircraft Stress Sequence Computer Program (ACSTRSEQ), developed by Southwest Research Institute under contract to San Antonio Air Logistics Center, presents a unique approach for packaging these complex technical computations in a simple, easy-to-use application. The program is written in Microsoft Visual Basic for the Microsoft Windows environment.

  12. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    NASA Astrophysics Data System (ADS)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of these polynomial systems. This approach allows the complete system to be reduced to a unique polynomial equation in one variable from which all solutions of the problem can be derived. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and to recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.

  13. On-Line Method and Apparatus for Coordinated Mobility and Manipulation of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1996-01-01

    A simple and computationally efficient approach is disclosed for on-line coordinated control of mobile robots consisting of a manipulator arm mounted on a mobile base. The effect of base mobility on the end-effector manipulability index is discussed. The base mobility and arm manipulation degrees-of-freedom are treated equally as the joints of a kinematically redundant composite robot. The redundancy introduced by the mobile base is exploited to satisfy a set of user-defined additional tasks during the end-effector motion. A simple on-line control scheme is proposed which allows the user to assign weighting factors to individual degrees-of-mobility and degrees-of-manipulation, as well as to each task specification. The computational efficiency of the control algorithm makes it particularly suitable for real-time implementations. Four case studies are discussed in detail to demonstrate the application of the coordinated control scheme to various mobile robots.

  14. An Information-theoretic Approach to Optimize JWST Observations and Retrievals of Transiting Exoplanet Atmospheres

    NASA Astrophysics Data System (ADS)

    Howe, Alex R.; Burrows, Adam; Deming, Drake

    2017-01-01

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.
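As an aside, the mutual-information metric underlying the effectiveness measure above can be illustrated for discrete variables. The following sketch (hypothetical example data, not the authors' retrieval pipeline) estimates I(X;Y) in bits from paired samples:

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits, estimated from paired samples
    via the empirical joint and marginal distributions."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

xs = [0, 0, 1, 1] * 25
print(mutual_information(xs, xs))          # identical variables: I = H(X) = 1 bit
print(mutual_information(xs, [0, 1] * 50)) # independent pairing: I = 0
```

Higher mutual information between the model parameters and the simulated observations means the observing program is more informative about the atmosphere, which is the sense in which the metric ranks JWST modes.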

  15. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  17. Macroscopic dielectric function within time-dependent density functional theory—Real time evolution versus the Casida approach

    NASA Astrophysics Data System (ADS)

    Sander, Tobias; Kresse, Georg

    2017-02-01

Linear optical properties can be calculated by solving the time-dependent density functional theory equations. Linearization of the equation of motion around the ground state orbitals results in the so-called Casida equation, which is formally very similar to the Bethe-Salpeter equation. Alternatively, one can determine the spectral functions by applying an infinitely short electric field in time and then following the evolution of the electron orbitals and of the dipole moments. The long-wavelength response function is then given by the Fourier transform of the evolution of the dipole moments in time. In this work, we compare the results and performance of these two approaches for the projector augmented wave method. To allow for large time steps while still relying on a simple difference scheme to solve the differential equation, we correct for the errors in the frequency domain using a simple analytic equation. In general, we find that both approaches yield virtually indistinguishable results. For standard density functionals, the time evolution approach is, with respect to computational performance, clearly superior to the solution of the Casida equation. However, for functionals including nonlocal exchange, the direct solution of the Casida equation is usually much more efficient, even though it scales less favorably with system size. We relate this to the large computational prefactors in evaluating the nonlocal exchange, which render the time evolution algorithm fairly inefficient.
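The time-evolution route described above, kick the system, record the dipole moment, then Fourier transform, can be mimicked on a toy signal. The sketch below (plain Python; the damped oscillation is an illustrative stand-in for a real TDDFT dipole trajectory) recovers the excitation frequency from the peak of the transform:

```python
import cmath
import math

# Toy dipole signal: after a delta kick, a single excitation at omega0
# produces a damped oscillation of the dipole moment in time.
omega0, gamma, dt, n = 2.0, 0.05, 0.05, 2048
dipole = [math.exp(-gamma * k * dt) * math.sin(omega0 * k * dt) for k in range(n)]

def spectrum(signal, dt, omegas):
    """Discrete Fourier transform of the time series at the given angular
    frequencies; |d(omega)| peaks at the excitation energies."""
    out = []
    for w in omegas:
        s = sum(x * cmath.exp(-1j * w * k * dt) for k, x in enumerate(signal))
        out.append(abs(s) * dt)
    return out

omegas = [0.5 + 0.01 * i for i in range(300)]  # scan 0.5 .. 3.5 rad/time
spec = spectrum(dipole, dt, omegas)
peak = omegas[spec.index(max(spec))]           # should sit near omega0
```

In a real calculation the damping is often added artificially (or the frequency-domain correction mentioned above is applied) so that a finite propagation time still yields smooth peaks.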

  18. Computational Phenotyping in Psychiatry: A Worked Example

    PubMed Central

    2016-01-01

Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology—structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry. PMID:27517087

  19. Computational Phenotyping in Psychiatry: A Worked Example.

    PubMed

    Schwartenbeck, Philipp; Friston, Karl

    2016-01-01

Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology-structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.
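The simulate-then-invert procedure described above can be illustrated with a far simpler model than the authors' active-inference scheme: a one-parameter softmax choice rule whose inverse temperature is recovered by grid-search maximum likelihood (all numbers below are made up for illustration):

```python
import math
import random

random.seed(0)

def softmax_choice_prob(q, beta):
    """Probability of choosing option 0 under a softmax rule
    with inverse temperature beta over the two values in q."""
    e0, e1 = math.exp(beta * q[0]), math.exp(beta * q[1])
    return e0 / (e0 + e1)

# Forward step: simulate a "subject" with a known inverse temperature.
true_beta, q = 2.0, (0.6, 0.4)
choices = [0 if random.random() < softmax_choice_prob(q, true_beta) else 1
           for _ in range(2000)]

# Inversion step: grid-search maximum likelihood for beta.
def log_lik(beta):
    p0 = softmax_choice_prob(q, beta)
    n0 = choices.count(0)
    return n0 * math.log(p0) + (len(choices) - n0) * math.log(1 - p0)

grid = [0.1 * i for i in range(1, 100)]
beta_hat = max(grid, key=log_lik)   # should land near true_beta
```

The same simulate/invert/recover loop, with a richer model and per-subject parameters, is what underlies the group-level and cross-validation analyses the abstract describes.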

  20. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    DOE PAGES

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...

    2014-02-24

The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. In conclusion, it may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
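The emulation idea, fit a cheap statistical model to a few training runs and then predict new scenarios instantly, can be caricatured with a logarithmic temperature-CO2 response fit by closed-form least squares. All data below are synthetic, and the real emulator conditions on the full past CO2 trajectory rather than a single concentration:

```python
import math

# Synthetic "training runs": global temperature anomaly responding roughly
# logarithmically to CO2, plus a small internal-variability perturbation.
co2_train = [330, 380, 450, 560, 700, 900]
temp_train = [1.05 * math.log(c / 280.0) + 0.02 * ((i % 3) - 1)
              for i, c in enumerate(co2_train)]

# Closed-form least squares for the statistical model T = a + b * log(CO2/280).
xs = [math.log(c / 280.0) for c in co2_train]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(temp_train) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, temp_train)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Emulate an "unseen" scenario effectively instantaneously, no GCM run needed.
t_560 = a + b * math.log(560 / 280.0)
```

Once `a` and `b` are fit, predictions for any scenario cost a few arithmetic operations, which is the sense in which the emulator is "effectively instantaneous" compared with a coupled model run.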

  21. Using ontology network structure in text mining.

    PubMed

    Berndt, Donald J; McCart, James A; Luther, Stephen L

    2010-11-13

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy.
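A minimal version of the PageRank step used here, power iteration over an ontology-like term graph, might look as follows (the toy graph and damping value are illustrative, not the authors' data):

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank over an adjacency dict {node: [out-neighbors]}.
    Scores sum to 1; dangling nodes spread their mass evenly."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = adj[u]
            if not out:                      # dangling node
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in out:
                    new[v] += d * rank[u] / len(out)
        rank = new
    return rank

# Tiny ontology-like graph: 'disease' is referenced by every other term.
graph = {
    "smoking": ["disease"],
    "nicotine": ["smoking", "disease"],
    "cough": ["disease"],
    "disease": ["smoking"],
}
ranks = pagerank(graph)
```

In the hybrid strategy, scores like these would be injected as term-importance weights into the statistical text mining step, so that ontologically central terms count for more than their raw frequency suggests.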

  1. CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability

    NASA Technical Reports Server (NTRS)

    Claus, Russell; Weitzer, Ilan

    2002-01-01

Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meet these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one united representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed applications.

  2. IoGET: Internet of Geophysical and Environmental Things

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar

    The objective of this project is to provide novel and fast reduced-order models for onboard computation at sensor nodes for real-time analysis. The approach will require that LANL perform high-fidelity numerical simulations, construct simple reduced-order models (ROMs) using machine learning and signal processing algorithms, and use real-time data analysis for ROMs and compressive sensing at sensor nodes.

  3. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

Alternate definitions of system failure lead to complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
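The simulation idea behind GRASP-style reliability analysis can be sketched with a single component alternating exponential up-times and repair times (a toy model, not GRASP itself); the estimated availability should approach the analytic value mu / (lambda + mu):

```python
import random

random.seed(42)

def simulate_availability(fail_rate, repair_rate, horizon, n_runs):
    """Monte Carlo estimate of steady-state availability for one component:
    alternate exponential up-times and repair times, track the uptime fraction."""
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < horizon:
            dt = random.expovariate(fail_rate)       # time to next failure
            up += min(dt, horizon - t)
            t += dt
            if t >= horizon:
                break
            t += random.expovariate(repair_rate)     # repair duration
        total_up += up
    return total_up / (horizon * n_runs)

# Analytic steady-state availability is 0.9 / (0.1 + 0.9) = 0.9 here.
est = simulate_availability(fail_rate=0.1, repair_rate=0.9,
                            horizon=1000.0, n_runs=200)
```

The appeal of the simulation route is exactly what the abstract states: once failure and repair laws are attached to each component, system-level definitions of failure that defeat analytic treatment are handled by simply simulating and counting.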

  4. Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.

    PubMed

    Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan

    2015-01-01

Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways and in the complexity of their songs, which are essential for mate attraction. Recent approaches have aimed at describing the behavioral preference functions of females in both taxa within a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models', each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of the Gabor filters, several types of preference functions observed in different cricket species could be modeled. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
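The LN structure described above can be sketched directly: a Gabor filter, a rectifying nonlinearity, and temporal integration into a single preference score. The envelopes and filter parameters below are toy choices for illustration; in the study the filters were learned by the genetic algorithm:

```python
import math

def gabor(t, sigma=0.3, freq=1.0):
    """Gabor function: Gaussian envelope times a cosine carrier."""
    return math.exp(-t * t / (2 * sigma * sigma)) * math.cos(2 * math.pi * freq * t)

DT = 0.01
KERNEL = [gabor((k - 100) * DT) for k in range(201)]  # 2 s Gabor filter tuned to 1 Hz

def preference_score(envelope):
    """Two of the three LN-model steps applied to a song envelope:
    (1) linear filtering with the Gabor kernel, then a threshold-linear
    (rectifying) nonlinearity, followed by temporal integration."""
    filtered = [sum(KERNEL[j] * envelope[i - j] for j in range(len(KERNEL)))
                for i in range(len(KERNEL), len(envelope))]
    rectified = [max(0.0, x) for x in filtered]
    return sum(rectified) * DT

t = [k * DT for k in range(1000)]
pulsed = [1.0 + math.sin(2 * math.pi * 1.0 * x) for x in t]  # 1 Hz "syllable" rate
flat = [1.0] * len(t)
```

A song whose modulation rate matches the filter's carrier frequency drives the rectified output harder than an unmodulated envelope, which is the basic mechanism by which Gabor-shaped filters produce rate-selective preference functions.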

  5. Estimated Benefits of Variable-Geometry Wing Camber Control for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Bolonkin, Alexander; Gilyard, Glenn B.

    1999-01-01

Analytical benefits of variable-camber capability on subsonic transport aircraft are explored. Using aerodynamic performance models, including drag as a function of deflection angle for the control surfaces of interest, optimal performance benefits of variable camber are calculated. Results demonstrate that if all wing trailing-edge surfaces are available for optimization, drag can be significantly reduced at most points within the flight envelope. The optimization approach developed and illustrated for flight uses variable camber for optimization of aerodynamic efficiency (maximizing the lift-to-drag ratio). Most transport aircraft have significant latent capability in this area. Wing camber control that can affect performance optimization for transport aircraft includes symmetric use of ailerons and flaps. In this paper, drag characteristics for aileron and flap deflections are computed based on analytical and wind-tunnel data. All calculations are based on predictions for the subject aircraft, and the optimal surface deflection for given conditions is obtained by simple interpolation. An algorithm is also presented for computation of the optimal surface deflection for given conditions. Benefits of variable camber for a transport configuration using a simple trailing-edge control surface system can exceed 10 percent, especially for nonstandard flight conditions. In the cruise regime, the benefit is 1-3 percent.

  6. Decision theory with resource-bounded agents.

    PubMed

    Halpern, Joseph Y; Pass, Rafael; Seeman, Lior

    2014-04-01

    There have been two major lines of research aimed at capturing resource-bounded players in game theory. The first, initiated by Rubinstein (), charges an agent for doing costly computation; the second, initiated by Neyman (), does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the "complexity" of the Turing machine chosen (e.g., its running time). This approach can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases. Copyright © 2014 Cognitive Science Society, Inc.

  7. Computational approaches for predicting biomedical research collaborations.

    PubMed

    Zhang, Qing; Yu, Hong

    2014-01-01

Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous works on collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profiles and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression, with an area under the ROC curve ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable, and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets.
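A bare-bones version of the final classification step, logistic regression over a couple of pairwise author features, might look like this (the features, labels, and class separations below are toy values, not the study's data):

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy author pairs: [research-interest similarity, shared co-authors],
# labeled 1 if the pair later collaborated. Collaborators score higher.
data = []
for _ in range(200):
    y = random.random() < 0.5
    sim = random.gauss(0.7 if y else 0.3, 0.1)
    shared = random.gauss(3.0 if y else 1.0, 0.8)
    data.append(([sim, shared], 1 if y else 0))

# Plain stochastic-gradient-descent logistic regression.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(500):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

acc = sum((sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1)
          for x, y in data) / len(data)
```

The point of the sketch is the feature design, not the learner: any off-the-shelf classifier works once semantic similarity and network features are computed per author pair.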

  8. Acoustic backscatter models of fish: Gradual or punctuated evolution

    NASA Astrophysics Data System (ADS)

    Horne, John K.

    2004-05-01

Sound-scattering characteristics of aquatic organisms are routinely investigated using theoretical and numerical models. Development of the inverse approach by van Holliday and colleagues in the 1970s catalyzed the development and validation of backscatter models for fish and zooplankton. As the understanding of biological scattering properties increased, so did the number and computational sophistication of backscatter models. The complexity of data used to represent modeled organisms has also evolved in parallel to model development. Simple geometric shapes representing body components or the whole organism have been replaced by anatomically accurate representations derived from imaging sensors such as computer-aided tomography (CAT) scans. In contrast, Medwin and Clay (1998) recommend that fish and zooplankton should be described by simple theories and models, without acoustically superfluous extensions. Since van Holliday's early work, how have data and computational complexity influenced the accuracy and precision of model predictions? How has the understanding of aquatic organism scattering properties increased? Significant steps in the history of model development will be identified and changes in model results will be characterized and compared. [Work supported by ONR and the Alaska Fisheries Science Center.]

  9. AORTIC COARCTATION: RECENT DEVELOPMENTS IN EXPERIMENTAL AND COMPUTATIONAL METHODS TO ASSESS TREATMENTS FOR THIS SIMPLE CONDITION

    PubMed Central

    LaDisa, John F.; Taylor, Charles A.; Feinstein, Jeffrey A.

    2010-01-01

Coarctation of the aorta (CoA) is often considered a relatively simple disease, but long-term outcomes suggest otherwise, as life expectancies are decades less than in the average population and substantial morbidity often exists. What follows is an expanded version of collective work conducted by the authors and numerous collaborators that was presented at the 1st International Conference on Computational Simulation in Congenital Heart Disease pertaining to recent advances for CoA. The work begins by focusing on what is known about blood flow, pressure and indices of wall shear stress (WSS) in patients with normal vascular anatomy from both clinical imaging and the use of computational fluid dynamics (CFD) techniques. Hemodynamic alterations observed in CFD studies from untreated CoA patients and those undergoing surgical or interventional treatment are subsequently discussed. The impact of surgical approach, stent design and valve morphology are also presented for these patient populations. Finally, recent work from a representative experimental animal model of CoA that may offer insight into proposed mechanisms of long-term morbidity in CoA is presented. PMID:21152106

  10. An efficient higher order family of root finders

    NASA Astrophysics Data System (ADS)

    Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.

    2008-06-01

A one-parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on the cubically convergent Hansen-Patrick family, is studied. We show that the convergence of the basic family of the fourth order can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions which provide both guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial coefficients, its degree, and the initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.
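While the Hansen-Patrick-based family itself is more elaborate, the idea of simultaneously approximating all simple zeros, including the Gauss-Seidel (single-step) acceleration in which each updated root is used immediately for the others, can be illustrated with the classical Weierstrass/Durand-Kerner iteration:

```python
def durand_kerner(coeffs, iters=100):
    """Simultaneously approximate all roots of a monic polynomial
    (coeffs = [c0, c1, ..., 1] for c0 + c1*z + ... + z^n) using the
    Weierstrass/Durand-Kerner iteration in single-step (Gauss-Seidel)
    mode: each corrected root is used immediately for the others."""
    n = len(coeffs) - 1

    def p(z):
        return sum(c * z ** k for k, c in enumerate(coeffs))

    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct starters
    for _ in range(iters):
        for i in range(n):
            denom = 1.0 + 0j
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] = roots[i] - p(roots[i]) / denom
    return roots

# Roots of z^3 - 1: the three cube roots of unity.
roots = durand_kerner([-1.0, 0.0, 0.0, 1.0])
```

As in the accelerated family discussed above, the single-step mode typically converges faster than the total-step variant precisely because corrections already computed in the current sweep are reused at once.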

  11. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  12. Physics-based real time ground motion parameter maps: the Central Mexico example

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruiz Esparza, M. G.; Quiroz Ramirez, A.; Carrillo Lucia, M. A.; Perez Yanez, C.

    2013-12-01

We present the use of near-real-time ground motion simulations in the generation of ground motion parameter maps for Central Mexico. Simple algorithmic approaches to predicting ground motion parameters of interest to civil protection and risk engineering are based on the use of observed instrumental values, reported macroseismic intensities and their correlations, and ground motion prediction equations (GMPEs). A remarkable example of this approach is the worldwide ShakeMap program of the United States Geological Survey (USGS). Nevertheless, simple approaches rely strongly on the availability of instrumental and macroseismic intensity reports, as well as on the accuracy of the GMPEs and of the site-effect amplification calculation. In regions where information is scarce, the GMPEs, a reference value in a mean sense, provide most of the ground motion information, together with site-effect amplification estimated using simple parametric approaches (e.g., the use of Vs30), which have proven to be elusive. Here we propose an approach that includes physics-based ground motion predictions (PBGMP) corrected by instrumental information using a Bayesian kriging approach (Kitanidis, 1983), and apply it to the central region of Mexico. The method assumes: 1) the availability of a large database of low- and high-frequency Green's functions developed for the region of interest, using fully three-dimensional and representative one-dimensional models; 2) enough real-time data to obtain the centroid moment tensor and a slip rate function; and 3) a computational infrastructure that can be used to compute the source parameters and generate broadband synthetics in near real time, which are then combined with recorded instrumental data.
By using a recently developed velocity model of Central Mexico and an efficient finite element octree-based implementation, we generate a database of source-receiver Green's functions, valid to 0.5 Hz, that covers a 160 km x 300 km x 700 km volume of Mexico, including a large portion of the Pacific Mexican subduction zone. A subset of the velocity and strong ground motion data available in real time is processed to obtain the source parameters needed to generate broadband ground motions on a dense grid (10 km x 10 km cells). These are later interpolated with instrumental values using a Bayesian kriging method. Peak ground velocity and acceleration, as well as SA (T = 0.1, 0.5, 1, and 2 s) maps, are generated for a small set of medium-to-large-magnitude Mexican earthquakes (Mw = 5 to 7.4). We evaluate each map by comparing it against stations not considered in the computation.

  13. Cloud Compute for Global Climate Station Summaries

    NASA Astrophysics Data System (ADS)

    Baldwin, R.; May, B.; Cogbill, P.

    2017-12-01

Global Climate Station Summaries are simple indicators of observational normals which include climatic data summarizations and frequency distributions. These are typically statistical analyses of station data over 5-, 10-, 20-, 30-year or longer time periods. The summaries are computed from the global surface hourly dataset. This dataset, totaling over 500 gigabytes, comprises 40 different types of weather observations from 20,000 stations worldwide. NCEI and the U.S. Navy developed these value-added products in the form of hourly summaries from many of these observations. Enabling this compute functionality in the cloud is the focus of the project. An overview of the approach and of the challenges associated with transitioning the application to the cloud will be presented.
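The kind of hourly summary computed here, per-station means and frequency distributions, reduces to simple aggregation, as in this sketch (synthetic observations; the station IDs and weather codes are illustrative):

```python
from collections import Counter, defaultdict

# Synthetic hourly surface observations:
# (station, hour-of-day, temperature C, weather code).
obs = [
    ("724940", 0, 12.1, "RA"), ("724940", 0, 11.4, "RA"),
    ("724940", 1, 10.8, "BR"), ("724940", 1, 11.0, "CLR"),
    ("703810", 0, -4.2, "SN"), ("703810", 1, -5.0, "SN"),
]

# Per-station, per-hour climatological mean temperature.
sums = defaultdict(lambda: [0.0, 0])
for stn, hour, temp, _ in obs:
    s = sums[(stn, hour)]
    s[0] += temp
    s[1] += 1
hourly_mean = {k: s[0] / s[1] for k, s in sums.items()}

# Frequency distribution of weather types per station.
wx_freq = defaultdict(Counter)
for stn, _, _, code in obs:
    wx_freq[stn][code] += 1
```

At the scale described in the abstract the challenge is not the arithmetic but distributing it over a 500+ gigabyte archive, which is precisely what moving the computation to the cloud is meant to address.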

  14. Applications of multiple-constraint matrix updates to the optimal control of large structures

    NASA Technical Reports Server (NTRS)

    Smith, S. W.; Walcott, B. L.

    1992-01-01

    Low-authority control or vibration suppression in large, flexible space structures can be formulated as a linear feedback control problem requiring computation of displacement and velocity feedback gain matrices. To ensure stability in the uncontrolled modes, these gain matrices must be symmetric and positive definite. In this paper, efficient computation of symmetric, positive-definite feedback gain matrices is accomplished through the use of multiple-constraint matrix update techniques originally developed for structural identification applications. Two systems were used to illustrate the application: a simple spring-mass system and a planar truss. From these demonstrations, use of this multiple-constraint technique is seen to provide a straightforward approach for computing the low-authority gains.
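The symmetry and positive-definiteness requirement on the gain matrices can be illustrated with a generic projection sketch (symmetrization followed by eigenvalue clipping). This is not the multiple-constraint matrix update technique used in the paper, just a minimal way to enforce the same property on a computed gain:

```python
import numpy as np

def nearest_spd(G, eps=1e-8):
    """Project a computed gain matrix onto the symmetric
    positive-definite cone: symmetrize, then clip eigenvalues."""
    S = 0.5 * (G + G.T)            # nearest symmetric matrix
    w, V = np.linalg.eigh(S)
    w = np.clip(w, eps, None)      # enforce positive definiteness
    return V @ np.diag(w) @ V.T
```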

  15. HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.

    2015-12-01

    Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulations are used to recover the features of the estimated fields using inverse techniques. We developed a 2D free-source Matlab package for performing hydraulic tomography analysis in steady-state and transient regimes. The package uses the finite element method to solve the groundwater flow equation for simple or complex geometries, accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate evaluation of the sensitivity matrices than the finite-difference method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated, demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.

  16. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
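The GLS machinery the Bayesian treatment builds on can be stated compactly. Below is a minimal sketch of the classical GLS estimator and its parameter covariance (the frequentist core only, not the quasi-analytic Bayesian computation described in the paper):

```python
import numpy as np

def gls(X, y, Sigma):
    """Generalized least squares: beta = (X' S^-1 X)^-1 X' S^-1 y,
    with parameter covariance (X' S^-1 X)^-1 for error covariance S."""
    Si = np.linalg.inv(Sigma)
    A = X.T @ Si @ X
    beta = np.linalg.solve(A, X.T @ Si @ y)
    return beta, np.linalg.inv(A)
```

With Sigma equal to the identity this reduces to ordinary least squares; the regional-skew application replaces Sigma with a model-error-plus-sampling-error covariance.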

  17. Single block three-dimensional volume grids about complex aerodynamic vehicles

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, K. James

    1993-01-01

    This paper presents an alternate approach for the generation of volumetric grids for supersonic and hypersonic flows about complex configurations. The method uses parametric two-dimensional block-face grid definition within the framework of GRIDGEN2D. The incorporation of face decomposition reduces complex surfaces to simple shapes, which are then combined to obtain the final face definition. The advantages of this method include the reduction of overall grid generation time through the use of vectorized computer code, the elimination of the need to generate matching block faces, and the implementation of simplified boundary conditions. A simple axisymmetric grid is used to illustrate the method. In addition, volume grids for two complex configurations, the Langley Lifting Body (HL-20) and the Space Shuttle Orbiter, are shown.

  18. A simple GPU-accelerated two-dimensional MUSCL-Hancock solver for ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Bard, Christopher M.; Dorelli, John C.

    2014-02-01

    We describe our experience using NVIDIA's CUDA (Compute Unified Device Architecture) C programming environment to implement a two-dimensional second-order MUSCL-Hancock ideal magnetohydrodynamics (MHD) solver on a GTX 480 Graphics Processing Unit (GPU). Taking a simple approach in which the MHD variables are stored exclusively in the global memory of the GTX 480 and accessed in a cache-friendly manner (without further optimizing memory access by, for example, staging data in the GPU's faster shared memory), we achieved a maximum speed-up of ≈126 for a 1024 × 1024 grid relative to the sequential C code running on a single Intel Nehalem (2.8 GHz) core. This speedup is consistent with simple estimates based on the known floating point performance, memory throughput and parallel processing capacity of the GTX 480.
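The reconstruction step of a MUSCL-Hancock scheme can be sketched independently of the GPU details. Below is a plain NumPy illustration of minmod-limited second-order interface states, not the authors' CUDA kernel:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero across extrema, else the
    smaller-magnitude one-sided difference."""
    return np.where(a * b <= 0.0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

def muscl_faces(u):
    """Second-order MUSCL interface states for interior cells."""
    du = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])  # limited slopes
    uL = u[1:-1] + 0.5 * du   # state extrapolated to the right face
    uR = u[1:-1] - 0.5 * du   # state extrapolated to the left face
    return uL, uR
```

On smooth (here linear) data the reconstruction is exactly second order; at a discontinuity the limiter drops the slope to zero, which is what keeps the scheme oscillation-free.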

  19. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using a least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined from the baseline jig-shape and basis functions. A total of sixteen basis functions are selected: twelve symmetric mode shapes of the cruise-weight configuration, a rigid pitch shape, rigid left and right stabilator rotation shapes, and a residual shape. After three optimization runs, the trim shape error distribution is improved, with the maximum trim shape error reduced from 0.9844 inches for the starting configuration to 0.00367 inches by the end of the third run.

  20. A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services

    NASA Astrophysics Data System (ADS)

    Cho, Kenjiro; Birman, Kenneth P.

    1994-05-01

    This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked from mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault tolerance, and is implemented within the existing group communication mechanism.

  1. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
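In their simplest form, ordinal comparison and goal softening reduce to ranking designs by one cheap noisy evaluation each and keeping an observed top subset rather than a single "best" design. A minimal sketch (the `ordinal_select` helper and its parameters are hypothetical, not from the thesis):

```python
import numpy as np

def ordinal_select(true_vals, noise_std, s, rng):
    """Ordinal optimization step: rank designs by one cheap noisy
    evaluation each and keep the observed top-s (goal softening).
    Smaller values are better."""
    noisy = true_vals + rng.normal(0.0, noise_std, size=len(true_vals))
    return np.argsort(noisy)[:s]
```

Order converges much faster than value under noise, so the selected subset overlaps the true top designs with high probability even when individual evaluations are crude.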

  2. A novel patient-specific model to compute coronary fractional flow reserve.

    PubMed

    Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo

    2014-09-01

    The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%.
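The clinical index itself is simple to state: FFR is the ratio of cycle-averaged pressure distal to the stenosis to cycle-averaged aortic pressure under hyperemia, with values below about 0.80 commonly treated as functionally significant. A minimal sketch of that definition (the simulation pipeline of the paper is not shown):

```python
import numpy as np

def ffr(p_distal, p_aortic):
    """Fractional flow reserve: cycle-averaged pressure distal to the
    stenosis divided by cycle-averaged aortic pressure (hyperemia).
    Inputs are pressure time series over one cardiac cycle."""
    return np.mean(p_distal) / np.mean(p_aortic)
```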

  3. Approximate method for predicting the permanent set in a beam in vacuo and in water subject to a shock wave

    NASA Technical Reports Server (NTRS)

    Stiehl, A. L.; Haberman, R. C.; Cowles, J. H.

    1988-01-01

    An approximate method to compute the maximum deformation and permanent set of a beam subjected to shock wave loading in vacuo and in water was investigated. The method equates the maximum kinetic energy of the beam (and water) to the elastic-plastic work done by a static uniform load applied to the beam. Results for the water case indicate that the plastic deformation is controlled by the kinetic energy of the water. The simplified approach can result in significant savings in computer time, or it can expediently be used as a check of results from a more rigorous approach. The accuracy of the method is demonstrated by various examples of beams with simple-support and clamped-support boundary conditions.

  4. ChemCalc: a building block for tomorrow's chemical infrastructure.

    PubMed

    Patiny, Luc; Borel, Alain

    2013-05-24

    Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to develop chemical Web services, through which servers could expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use for Web applications. The ChemCalc project http://www.chemcalc.org demonstrates this approach: we present three Web services related with mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).

  5. Techniques for computing the discrete Fourier transform using the quadratic residue Fermat number systems

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1986-01-01

    The complex integer multiplier and adder over the direct sum of two copies of finite field developed by Cozzens and Finkelstein (1985) is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplication over the rings of integers modulo Fermat numbers can be performed by means of two integer multiplications, whereas the complex integer multiplication requires three integer multiplications. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed to compute a systolic array of the DFT can be reduced substantially. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
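The saving the authors exploit rests on the classical identity that one complex product can be formed with three real multiplications instead of four. A sketch over ordinary integers (the modular Fermat-number arithmetic of the paper is omitted):

```python
def cmul3(a, b, c, d):
    """Compute (a+bi)(c+di) with three real multiplications
    using the Karatsuba-style identity:
      re = k1 - k3,  im = k1 + k2
    where k1 = c(a+b), k2 = a(d-c), k3 = b(c+d)."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2   # (real part, imaginary part)
```

The extra additions are cheap in hardware, so trading one multiplication for three additions is a net win in a systolic DFT array.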

  6. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  7. Predicting protein structures with a multiplayer online game.

    PubMed

    Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit

    2010-08-05

    People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.

  8. A new modelling approach for zooplankton behaviour

    NASA Astrophysics Data System (ADS)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models—random walk and correlated walk models—as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to live copepods.

  9. Fingerprint-Based Structure Retrieval Using Electron Density

    PubMed Central

    Yin, Shuangye; Dokholyan, Nikolay V.

    2010-01-01

    We present a computational approach that can quickly search a large protein structural database to identify structures that fit a given electron density, such as determined by cryo-electron microscopy. We use geometric invariants (fingerprints) constructed using 3D Zernike moments to describe the electron density, and reduce the problem of fitting of the structure to the electron density to simple fingerprint comparison. Using this approach, we are able to screen the entire Protein Data Bank and identify structures that fit two experimental electron densities determined by cryo-electron microscopy. PMID:21287628
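The core retrieval idea, reducing density fitting to fingerprint comparison, can be sketched as nearest-neighbor search over invariant feature vectors. The example below is illustrative only; the 3D Zernike moment computation itself is not shown, and `rank_by_fingerprint` is a hypothetical helper:

```python
import numpy as np

def rank_by_fingerprint(query_fp, db_fps):
    """Rank database structures by Euclidean distance between
    normalized rotation-invariant fingerprint vectors (for example,
    3D Zernike moment magnitudes). Returns indices, best match first."""
    q = query_fp / np.linalg.norm(query_fp)
    D = db_fps / np.linalg.norm(db_fps, axis=1, keepdims=True)
    dist = np.linalg.norm(D - q, axis=1)
    return np.argsort(dist)
```

Because each structure collapses to a short vector, the whole database can be screened with one vectorized distance computation instead of per-structure density fitting.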

  10. Fingerprint-based structure retrieval using electron density.

    PubMed

    Yin, Shuangye; Dokholyan, Nikolay V

    2011-03-01

    We present a computational approach that can quickly search a large protein structural database to identify structures that fit a given electron density, such as determined by cryo-electron microscopy. We use geometric invariants (fingerprints) constructed using 3D Zernike moments to describe the electron density, and reduce the problem of fitting of the structure to the electron density to simple fingerprint comparison. Using this approach, we are able to screen the entire Protein Data Bank and identify structures that fit two experimental electron densities determined by cryo-electron microscopy.

  11. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.

  12. Simple models for the simulation of submarine melt for a Greenland glacial system model

    NASA Astrophysics Data System (ADS)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

    Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and grounding line position of these outlet glaciers. As the ocean warms, submarine melt is expected to increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity of using models with extremely high resolution, of the order of a few hundred meters. That requirement applies not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundred meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on the use of a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models quantitatively, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data.
Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.

  13. 2D hybrid analysis: Approach for building three-dimensional atomic model by electron microscopy image matching.

    PubMed

    Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji

    2017-03-23

    In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze with the 3DEM approach. In the proposed approach, first, many atomic models with different conformations are built by computer simulation. Then, simulated EM images are generated from each atomic model. Finally, they are compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originated from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the conformational change of the protein is deduced.

  14. Phenomenological Approach to Training

    DTIC Science & Technology

    1977-08-01

    overt responses simpler discrete steps is also like digital computer performed. It will be suggested that a highly programs or flowcharts, which consist...simple proficiency performance. cue/reaction Instruction. Putting this another way, try to visualize a 2-dimensional flowchart it is important to... flowchart of discrete steps, but this does not and can easily apply situational context, which is explain how the orbit is maintained. The moon built

  15. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  16. Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.

    PubMed

    Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C

    2006-02-28

    We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.
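The "short bursts of stochastic simulations" underlying the equation-free analysis are typically Gillespie (SSA) runs. A minimal burst for a birth-death gene expression model is sketched below, as a simpler stand-in for the toggle switch used in the paper:

```python
import numpy as np

def ssa_burst(x0, k_prod, k_deg, t_end, rng):
    """One Gillespie (SSA) burst for a birth-death gene expression
    model: production at constant rate k_prod, degradation at rate
    k_deg * x. Returns the copy number at time t_end."""
    t, x = 0.0, x0
    while True:
        rates = (k_prod, k_deg * x)
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)   # time to next reaction
        if t > t_end:
            return x
        # choose which reaction fires, proportional to its rate
        x += 1 if rng.random() * total < rates[0] else -1
```

Equation-free analysis repeats many such appropriately initialized bursts and processes the endpoints to estimate coarse quantities (drift, diffusion) rather than running one long simulation.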

  17. A new approach for vibration control in large space structures

    NASA Technical Reports Server (NTRS)

    Kumar, K.; Cochran, J. E., Jr.

    1987-01-01

    An approach for augmenting vibration damping characteristics in space structures with large panels is presented. It is based on generation of bending moments rather than forces. The moments are generated using bimetallic strips, suitably mounted at selected stations on both sides of the large panels, under the influence of differential solar heating, giving rise to thermal gradients and stresses. The collocated angular velocity sensors are utilized in conjunction with mini-servos to regulate the control moments by flipping the bimetallic strips. A simple computation of the rate of dissipation of vibrational energy is undertaken to assess the effectiveness of the proposed approach.

  18. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE PAGES

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
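The template and operator-overloading idea can be mimicked in a few lines of Python: redefining `+` and `*` on a dual-number type makes an unmodified calculation also propagate derivatives, which is the kind of embedded quantity (forward-mode sensitivities) the approach automates in C++. This is a hedged sketch, not the Trilinos implementation:

```python
class Dual:
    """Forward-mode AD value: overloading + and * propagates a
    derivative (dot) alongside the value, analogous to swapping the
    scalar type of a templated C++ calculation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)  # product rule
    __rmul__ = __mul__
```

Seeding `dot = 1.0` on an input and evaluating the original expression unchanged yields both the value and its derivative, with no hand-written derivative code.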

  19. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations - A New Approach based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Himansu, A.; Loh, C.-Y.; Wang, X.-Y.; Yu, S.-T.J.

    2005-01-01

    This paper reports on a significant advance in the area of nonreflecting boundary conditions (NRBCs) for unsteady flow computations. As part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs, are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  20. Bernal's road to random packing and the structure of liquids

    NASA Astrophysics Data System (ADS)

    Finney, John L.

    2013-11-01

    Until the 1960s, liquids were generally regarded as either dense gases or disordered solids, and theoretical attempts at understanding their structures and properties were largely based on those concepts. Bernal, himself a crystallographer, was unhappy with either approach, preferring to regard simple liquids as 'homogeneous, coherent and essentially irregular assemblages of molecules containing no crystalline regions'. He set about realizing this conceptual model through a detailed examination of the structures and properties of random packings of spheres. In order to test the relevance of the model to real liquids, ways had to be found to realize and characterize random packings. This was at a time when computing was slow and in its infancy, so he and his collaborators set about building models in the laboratory, and examining aspects of their structures in order to characterize them in ways which would enable comparison with the properties of real liquids. Some of the imaginative - often time-consuming and frustrating - routes followed are described, as well as the comparisons made with the properties of simple liquids. With the increase in the power of computers in the 1960s, computational approaches became increasingly exploited in random packing studies. This enabled the use of packing concepts, and the tools developed to characterize them, in understanding systems as diverse as metallic glasses, crystal-liquid interfaces, protein structures, enzyme-substrate interactions and the distribution of galaxies, as well as their exploitation in, for example, oil extraction, understanding chromatographic separation columns, and packed beds in industrial processes.

  1. Computer Vision Photogrammetry for Underwater Archaeological Site Recording in a Low-Visibility Environment

    NASA Astrophysics Data System (ADS)

    Van Damme, T.

    2015-04-01

    Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods, both in recording efficiency and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the `traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.

    Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.

  3. Epidermoid cyst of the external auditory canal in children: diagnosis and management.

    PubMed

    Abdel-Aziz, Mosaad

    2011-07-01

    Epidermoid cyst of the external auditory canal (EAC) is rarely encountered in clinical practice, but when it occurs, it may cause obstruction of the meatus that necessitates surgical excision. The aims of this study were to present 9 pediatric patients with epidermoid cysts of the EAC and to evaluate the outcome of the surgical technique used for excision. Surgical removal of the cyst was carried out through a simple transmeatal approach: a medially based rectangular skin flap was elevated and the cyst was completely removed. No complications or recurrences have been reported. Epidermoid cyst should be listed in the differential diagnosis of EAC masses; it appears on computed tomography as a cystic mass in the outer cartilaginous part of the EAC that is usually limited to the soft tissue with no bone erosion. It can be removed easily through a simple transmeatal approach with a high success rate and no morbidity.

  4. In-Vivo Real-Time Control of Protein Expression from Endogenous and Synthetic Gene Networks

    PubMed Central

    Orabona, Emanuele; De Stefano, Luca; Ferry, Mike; Hasty, Jeff; di Bernardo, Mario; di Bernardo, Diego

    2014-01-01

    We describe an innovative experimental and computational approach to control the expression of a protein in a population of yeast cells. We designed a simple control algorithm to automatically regulate the administration of inducer molecules to the cells by comparing the actual protein expression level in the cell population with the desired expression level. We then built an automated platform based on a microfluidic device, a time-lapse microscopy apparatus, and a set of motorized syringes, all controlled by a computer. We tested the platform to force yeast cells to express a desired fixed, or time-varying, amount of a reporter protein over thousands of minutes. The computer automatically switched the type of sugar administered to the cells, its concentration and its duration, according to the control algorithm. Our approach can be used to control expression of any protein, fused to a fluorescent reporter, provided that an external molecule known to (indirectly) affect its promoter activity is available. PMID:24831205
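    The closed-loop idea above can be sketched in a few lines of code. This is a hypothetical miniature, not the authors' actual algorithm or platform: the first-order expression model and the relay (on/off) control law below are assumptions made for the sketch.

    ```python
    # Minimal sketch of set-point feedback control of protein expression.
    # The plant model (first-order induction/dilution dynamics) and the
    # relay controller are illustrative assumptions, not the paper's method.

    def step_expression(level, inducer_on, dt=1.0, k_prod=0.05, k_dil=0.01):
        """One Euler step of a toy protein-expression model."""
        production = k_prod if inducer_on else 0.0
        return level + dt * (production - k_dil * level)

    def relay_controller(measured, target):
        """Switch the inducer on whenever measured expression is below target."""
        return measured < target

    def simulate(target=2.0, steps=2000):
        level, history = 0.0, []
        for _ in range(steps):
            inducer_on = relay_controller(level, target)
            level = step_expression(level, inducer_on)
            history.append(level)
        return history

    trace = simulate()
    ```

    In the real platform the "actuator" is the computer-controlled switch between sugars; here it is reduced to a boolean, which is enough to show how comparing actual against desired expression drives the input.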

  5. A computational approach to animal breeding.

    PubMed

    Berger-Wolf, Tanya Y; Moore, Cristopher; Saia, Jared

    2007-02-07

    We propose a computational model of mating strategies for controlled animal breeding programs. A mating strategy in a controlled breeding program is a heuristic with some optimization criteria as a goal. Thus, it is appropriate to use the computational tools available for analysis of optimization heuristics. In this paper, we propose the first discrete model of the controlled animal breeding problem and analyse heuristics for two possible objectives: (1) breeding for maximum diversity and (2) breeding a target individual. These two goals are representative of conservation biology and agricultural livestock management, respectively. We evaluate several mating strategies and provide upper and lower bounds for the expected number of matings. While the population parameters may vary and can change the actual number of matings for a particular strategy, the order of magnitude of the number of expected matings and the relative competitiveness of the mating heuristics remain the same. Thus, our simple discrete model of the animal breeding problem provides a novel, viable and robust approach to designing and comparing breeding strategies in captive populations.

  6. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing offers such advantages as parallel information processing, power efficiency and high speed. In this paper, a photonic structure has been proposed for reservoir computing, which is investigated using a simple, yet non-trivial, noisy time series prediction task. This study includes the application of a suitable topology with self-feedback in a network of SOAs (semiconductor optical amplifiers) - which lends the system a strong memory - and leads to adjusting adequate parameters resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the accuracy increased by 4%, to 96%. Furthermore, an analytical approach was suggested to solve the rate equations, which led to a substantial decrease in simulation time - an important factor in the classification of large signals such as speech - and yielded better results than previous works.

  7. Using a combined computational-experimental approach to predict antibody-specific B cell epitopes.

    PubMed

    Sela-Culang, Inbal; Benhnia, Mohammed Rafii-El-Idrissi; Matho, Michael H; Kaever, Thomas; Maybeno, Matt; Schlossman, Andrew; Nimrod, Guy; Li, Sheng; Xiang, Yan; Zajonc, Dirk; Crotty, Shane; Ofran, Yanay; Peters, Bjoern

    2014-04-08

    Antibody epitope mapping is crucial for understanding B cell-mediated immunity and required for characterizing therapeutic antibodies. In contrast to T cell epitope mapping, no computational tools are in widespread use for prediction of B cell epitopes. Here, we show that, utilizing the sequence of an antibody, it is possible to identify discontinuous epitopes on its cognate antigen. The predictions are based on residue-pairing preferences and other interface characteristics. We combined these antibody-specific predictions with results of cross-blocking experiments that identify groups of antibodies with overlapping epitopes to improve the predictions. We validate the high performance of this approach by mapping the epitopes of a set of antibodies against the previously uncharacterized D8 antigen, using complementary techniques to reduce method-specific biases (X-ray crystallography, peptide ELISA, deuterium exchange, and site-directed mutagenesis). These results suggest that antibody-specific computational predictions and simple cross-blocking experiments allow for accurate prediction of residues in conformational B cell epitopes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A Simple GPU-Accelerated Two-Dimensional MUSCL-Hancock Solver for Ideal Magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Bard, Christopher; Dorelli, John C.

    2013-01-01

    We describe our experience using NVIDIA's CUDA (Compute Unified Device Architecture) C programming environment to implement a two-dimensional second-order MUSCL-Hancock ideal magnetohydrodynamics (MHD) solver on a GTX 480 Graphics Processing Unit (GPU). Taking a simple approach in which the MHD variables are stored exclusively in the global memory of the GTX 480 and accessed in a cache-friendly manner (without further optimizing memory access by, for example, staging data in the GPU's faster shared memory), we achieved a maximum speed-up of approximately 126 for a 1024 × 1024 grid relative to the sequential C code running on a single Intel Nehalem (2.8 GHz) core. This speedup is consistent with simple estimates based on the known floating point performance, memory throughput and parallel processing capacity of the GTX 480.

  9. Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM

    NASA Technical Reports Server (NTRS)

    Martin, Thomas J.; Dulikravich, George S.

    1993-01-01

    A time- and space-accurate, computationally efficient, fully three-dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses a boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three-dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.

  10. Further reduction of minimal first-met bad markings for the computationally efficient synthesis of a maximally permissive controller

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Chao, Daniel Yuh

    2015-08-01

    To date, research on the supervisor design for flexible manufacturing systems focuses on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computation burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of minimal set of FBMs to efficiently solve integer linear programming problems while maintaining maximal permissiveness using a vector-covering approach. This paper improves the previous work and achieves the simplest structure with the minimal number of monitors.

  11. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. 
The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.

  12. An efficient, large-scale, non-lattice-detection algorithm for exhaustive structural auditing of biomedical ontologies.

    PubMed

    Zhang, Guo-Qiang; Xing, Guangming; Cui, Licong

    2018-04-01

    One of the basic challenges in developing structural methods for systematic auditing of the quality of biomedical ontologies is the computational cost usually involved in exhaustive sub-graph analysis. We introduce ANT-LCA, a new algorithm for computing all non-trivial lowest common ancestors (LCAs) of each pair of concepts in the hierarchical order induced by an ontology. The computation of LCAs is a fundamental step in the non-lattice approach to ontology quality assurance. Distinct from existing approaches, ANT-LCA only computes LCAs for non-trivial pairs, those having at least one common ancestor. To skip all trivial pairs that may be of no practical interest, ANT-LCA employs a simple but innovative algorithmic strategy combining topological order and dynamic programming to keep track of non-trivial pairs. We provide correctness proofs and demonstrate a substantial reduction in computational time for the two largest biomedical ontologies: SNOMED CT and the Gene Ontology (GO). ANT-LCA achieved an average computation time of 30 and 3 sec per version for SNOMED CT and GO, respectively, about 2 orders of magnitude faster than the best known approaches. Our algorithm overcomes a fundamental computational barrier in sub-graph-based structural analysis of large ontological systems. It enables the implementation of a new breed of structural auditing methods that not only identify potentially problematic areas, but also automatically suggest changes to fix the issues. Such structural auditing methods can lead to more effective tools supporting ontology quality assurance work. Copyright © 2018 Elsevier Inc. All rights reserved.
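    The topological-order-plus-dynamic-programming idea can be illustrated on a toy hierarchy. This is a hypothetical miniature, not the ANT-LCA implementation: it propagates ancestor sets parents-first, skips trivial pairs (no common strict ancestor) exactly as described, and keeps only the lowest common ancestors of each remaining pair.

    ```python
    # Toy non-trivial-LCA computation over a DAG given as {node: [parents]}.
    from itertools import combinations

    def ancestors(dag):
        """Return {node: ancestor set including itself}, filled parents-first
        (a topological order) by dynamic programming."""
        anc = {}
        def visit(n):
            if n not in anc:
                s = {n}
                for p in dag.get(n, []):
                    visit(p)
                    s |= anc[p]
                anc[n] = s
        for n in dag:
            visit(n)
        return anc

    def nontrivial_lcas(dag):
        """For every pair with at least one common strict ancestor, keep the
        lowest ones: common ancestors that are not strict ancestors of any
        other common ancestor. Trivial pairs are skipped entirely."""
        anc = ancestors(dag)
        out = {}
        for a, b in combinations(dag, 2):
            common = (anc[a] & anc[b]) - {a, b}
            if not common:
                continue  # trivial pair: skipped, as in the ANT strategy
            out[(a, b)] = {c for c in common
                           if not any(c != d and c in anc[d] for d in common)}
        return out

    # Example: r is the root; x and y share the two parents a and b.
    dag = {'r': [], 'a': ['r'], 'b': ['r'], 'x': ['a', 'b'], 'y': ['a', 'b']}
    result = nontrivial_lcas(dag)
    ```

    The pair (x, y) has two lowest common ancestors, a and b, which is precisely the non-lattice situation the auditing method looks for.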

  13. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces the error when solving an ill-conditioned system. Computation of the determinant of a matrix has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. These algorithms are highly classroom-oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
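    The order-reduction idea for the determinant can be sketched as follows. The largest-magnitude pivot rule used here is just one arbitrary choice (any nonzero entry works, which is the flexibility the paper exploits), and exact fractions stand in for the error control discussed above; this is an illustration of the general condensation idea, not the paper's exact algorithm.

    ```python
    # Determinant by repeated elimination about a freely chosen pivot,
    # deleting the pivot row and column so the order drops by 1 each pass.
    from fractions import Fraction

    def det_flexible_pivot(matrix):
        a = [[Fraction(x) for x in row] for row in matrix]
        det = Fraction(1)
        while a:
            n = len(a)
            # choose any nonzero pivot; largest magnitude is one easy rule
            candidates = [(abs(a[i][j]), i, j)
                          for i in range(n) for j in range(n) if a[i][j] != 0]
            if not candidates:
                return Fraction(0)          # a zero matrix remains: det is 0
            _, pi, pj = max(candidates)
            p = a[pi][pj]
            det *= p * (-1) ** (pi + pj)    # sign from the pivot's position
            # eliminate the pivot column in the other rows, then drop the
            # pivot row and column (the order reduction step)
            rows = [r for i, r in enumerate(a) if i != pi]
            a = [[r[j] - r[pj] * a[pi][j] / p for j in range(n) if j != pj]
                 for r in rows]
        return det
    ```

    Each pass expands the determinant about the chosen pivot, so the product of pivots (with position signs) equals the determinant of the original matrix.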

  14. Dimensional analysis using toric ideals: primitive invariants.

    PubMed

    Atherton, Mark A; Bates, Ronald A; Wynn, Henry P

    2014-01-01

    Classical dimensional analysis in its original form starts by expressing the units for derived quantities, such as force, in terms of power products of basic units [Formula: see text] etc. This suggests the use of toric ideal theory from algebraic geometry. Within this the Graver basis provides a unique primitive basis in a well-defined sense, which typically has more terms than the standard Buckingham approach. Some textbook examples are revisited and the full set of primitive invariants found. First, a worked example based on convection is introduced to recall the Buckingham method, but using computer algebra to obtain an integer [Formula: see text] matrix from the initial integer [Formula: see text] matrix holding the exponents for the derived quantities. The [Formula: see text] matrix defines the dimensionless variables. But, rather than this integer linear algebra approach it is shown how, by staying with the power product representation, the full set of invariants (dimensionless groups) is obtained directly from the toric ideal defined by [Formula: see text]. One candidate for the set of invariants is a simple basis of the toric ideal. This, although larger than the rank of [Formula: see text], is typically not unique. However, the alternative Graver basis is unique and defines a maximal set of invariants, which are primitive in a simple sense. In addition to the running example four examples are taken from: a windmill, convection, electrodynamics and the hydrogen atom. The method reveals some named invariants. A selection of computer algebra packages is used to show the considerable ease with which both a simple basis and a Graver basis can be found.
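    For the Buckingham (simple-basis) route described above, each dimensionless group corresponds to a vector in the nullspace of the integer dimension matrix. A minimal sketch over the rationals, using the classic pendulum as a hypothetical worked example; the unique Graver basis of the toric ideal, the paper's main point, needs dedicated computer algebra and is not attempted here.

    ```python
    # Dimensionless groups as nullspace vectors of the dimension matrix.
    from fractions import Fraction

    def nullspace(rows):
        """Rational nullspace basis of a matrix (list of rows) via
        Gauss-Jordan elimination; one basis vector per free column."""
        m = [[Fraction(x) for x in r] for r in rows]
        ncols = len(m[0])
        pivots, r = {}, 0
        for c in range(ncols):
            piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
            if piv is None:
                continue
            m[r], m[piv] = m[piv], m[r]
            m[r] = [x / m[r][c] for x in m[r]]
            for i in range(len(m)):
                if i != r and m[i][c] != 0:
                    m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
            pivots[c] = r
            r += 1
        basis = []
        for f in (c for c in range(ncols) if c not in pivots):
            v = [Fraction(0)] * ncols
            v[f] = Fraction(1)
            for c, pr in pivots.items():
                v[c] = -m[pr][f]
            basis.append(v)
        return basis

    # Pendulum example (an assumed set-up, not from the paper):
    # columns = (t, l, g, m); rows = exponents of M, L, T in each quantity.
    D = [[0, 0, 0, 1],    # mass
         [0, 1, 1, 0],    # length
         [1, 0, -2, 0]]   # time
    pi_groups = nullspace(D)
    ```

    The single nullspace vector (2, -1, 1, 0) encodes the familiar invariant t²g/l; the Graver basis would enlarge such a set to all primitive invariants.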

  15. Universal behavior in ideal slip

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John; Smith, John R.

    1991-01-01

    The slip energies and stresses are computed for defect-free crystals of Ni, Cu, Ag, and Al using the many-atom approach. A simple analytical expression for the slip energies is obtained, leading to a universal form for slip, with the energy scaled by the surface energy and displacement scaled by the lattice constant. Maximum stresses are found to be somewhat larger than but comparable with experimentally determined maximum whisker strengths.

  16. Digitizing for Computer-Aided Finite Element Model Generation.

    DTIC Science & Technology

    1979-10-10

    This approach is embodied in a collection of programs developed over the last eight years at the University of Arizona, called the GIFTS system. This paper briefly describes the latest version of the system, GIFTS-5, and demonstrates its suitability in a design environment by simple examples. The programs constituting the GIFTS system were used as a tool for research in many areas, including mesh generation, finite element data base design, interactive

  17. Finding higher order Darboux polynomials for a family of rational first order ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Avellar, J.; Claudino, A. L. G. C.; Duarte, L. G. S.; da Mota, L. A. C. P.

    2015-10-01

    For the Darbouxian methods we are studying here, in order to solve first order rational ordinary differential equations (1ODEs), the most costly (computationally) step is the finding of the needed Darboux polynomials. This can be so grave that it can render the whole approach unpractical. Hereby we introduce a simple heuristics to speed up this process for a class of 1ODEs.

  18. The Emergence of Compositional Communication in a Synthetic Ethology Framework

    DTIC Science & Technology

    2005-08-12

    "Integrating Language and Cognition: A Cognitive Robotics Approach", invited contribution to IEEE Computational Intelligence Magazine. The first two papers address the main topic of investigation of the research proposal. In particular, we have introduced a simple structured meaning-signal mapping... Cavalli-Sforza (1982) to investigate analytically the evolution of structured communication codes. Let x ∈ [0,1] be the proportion of individuals in a

  19. Coupling of rainfall-induced landslide triggering model with predictions of debris flow runout distances

    NASA Astrophysics Data System (ADS)

    Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani

    2014-05-01

    Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and run-out paths of debris flows depend on the volume, composition and initiation zone of released material, knowledge of which is a prerequisite for accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris flow paths were computed for landslides predicted with the CHLT model over a range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the predictability and impact assessment of such abruptly released masses.

  20. Characterization of the Optical Properties of Turbid Media by Supervised Learning of Scattering Patterns.

    PubMed

    Hassaninia, Iman; Bostanabad, Ramin; Chen, Wei; Mohseni, Hooman

    2017-11-10

    Fabricated tissue phantoms are instrumental in optical in-vitro investigations concerning cancer diagnosis, therapeutic applications, and drug efficacy tests. We present a simple non-invasive computational technique that, when coupled with experiments, has the potential for characterization of a wide range of biological tissues. The fundamental idea of our approach is to find a supervised learner that links the scattering pattern of a turbid sample to its thickness and scattering parameters. Once found, this supervised learner is employed in an inverse optimization problem for estimating the scattering parameters of a sample given its thickness and scattering pattern. Multi-response Gaussian processes are used for the supervised learning task and a simple setup is introduced to obtain the scattering pattern of a tissue sample. To increase the predictive power of the supervised learner, the scattering patterns are filtered, enriched by a regressor, and finally characterized with two parameters, namely, transmitted power and scaled Gaussian width. We computationally illustrate that our approach achieves errors of roughly 5% in predicting the scattering properties of many biological tissues. Our method has the potential to facilitate the characterization of tissues and fabrication of phantoms used for diagnostic and therapeutic purposes over a wide range of optical spectrum.
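    The learn-a-forward-map-then-invert idea can be miniaturized as follows. Everything here is a stand-in labeled as such: a Beer-Lambert-like attenuation plays the role of the forward model linking a scattering parameter and thickness to transmitted power, and a grid search plays the role of the Gaussian-process-driven inverse optimization.

    ```python
    # Toy inverse-problem sketch: estimate a scattering parameter from an
    # observed transmitted-power feature at known sample thickness.
    import math

    def forward(mu_s, thickness):
        """Hypothetical attenuation model (exp(-mu_s * t)); the paper uses a
        trained multi-response Gaussian process instead."""
        return math.exp(-mu_s * thickness)

    def estimate_mu(observed_power, thickness, grid=None):
        """Invert the forward model by brute-force search over candidates."""
        grid = grid or [i * 0.01 for i in range(1, 1001)]
        return min(grid,
                   key=lambda mu: (forward(mu, thickness) - observed_power) ** 2)

    true_mu = 1.37
    obs = forward(true_mu, thickness=2.0)
    mu_hat = estimate_mu(obs, thickness=2.0)
    ```

    With a learned (noisy) forward surrogate in place of `forward`, the same minimization recovers the scattering parameters from a measured pattern, which is the structure of the approach described above.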

  1. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed ever since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  2. Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it

    2012-12-01

    A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations due to multi-streaming at small scales and low redshifts.

  3. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies etc. However, in some applications a non-orthogonal space subdivision can offer new ways of achieving actual speed up. In the case of a convex polygon in E2, a simple Point-in-Polygon test is of O(N) complexity and the optimal algorithm is of O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
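    For reference, the O(N) baseline that the preprocessing-based method improves on is the standard cross-product test; this sketch is that baseline, not the paper's O(1) algorithm, whose subdivision tables reduce the query to a constant number of such edge tests.

    ```python
    # O(N) point-in-convex-polygon test: the point is inside a convex
    # polygon with counter-clockwise vertex order iff it lies left of
    # (or on) every directed edge.
    def point_in_convex_polygon(pt, poly):
        x, y = pt
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            # z-component of the cross product (edge) x (edge-start -> pt)
            cross = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
            if cross < 0:        # strictly right of this edge: outside
                return False
        return True              # boundary points count as inside

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    ```

    The O(1) scheme precomputes, for each cell of a space subdivision built in preprocessing, which edge(s) can possibly decide the answer, so a query does constant work regardless of N.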

  4. AN INFORMATION-THEORETIC APPROACH TO OPTIMIZE JWST OBSERVATIONS AND RETRIEVALS OF TRANSITING EXOPLANET ATMOSPHERES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howe, Alex R.; Burrows, Adam; Deming, Drake, E-mail: arhowe@umich.edu, E-mail: burrows@astro.princeton.edu, E-mail: ddeming@astro.umd.edu

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations, and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.

  6. Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach

    NASA Astrophysics Data System (ADS)

    Bugatti, Alessandro; Flammini, Alessandra; Migliorati, Pierangelo

    2002-12-01

    We focus attention on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on zero-crossing rate and Bayesian classification. It is very simple from a computational point of view and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains speech superimposed on the music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically a multi-layer perceptron). In this case we obtain better performance, at the expense of a limited growth in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even on low-cost embedded systems.
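
    The zero-crossing-rate feature underlying the first method can be sketched in a few lines (a generic formulation; the paper's Bayesian classifier on top of it is not reproduced):

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs that change sign.
    Speech alternates high-ZCR unvoiced and low-ZCR voiced
    segments, while music tends to be more stationary."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)
```

    A classifier would compute this per frame and feed the statistics of ZCR over a segment into the Bayesian decision rule.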

  7. Simple trigonometry on computed tomography helps in planning renal access.

    PubMed

    Bilen, Cenk Yücel; Koçak, Burak; Kitirci, Gürcan; Danaci, Murat; Sarikaya, Saban

    2007-08-01

    To retrospectively assess the usefulness of the measurements on preoperative computed tomography (CT) of patients with urinary stone disease for planning the access site using vertical angulation of the C-arm. Of the patients who underwent percutaneous nephrolithotomy from November 2001 to October 2006, 41 patients with superior calix access had undergone preoperative CT. The depth of the target stone (y) and the vertical distance from that point to the first rib free slice (x) were measured on CT. The limit of the ratio of x over y was accepted as 0.58, with ratios below that indicating that infracostal access could be achieved by vertical angulation of the C-arm. We achieved an approach to the superior calix through an infracostal access in 28 patients. The preoperative trigonometric study on CT predicted 24 of them. The stone-free rate was 92.6%, and no chest-related complications developed. Simple trigonometry on CT of the patients with complex stones could help endourologists in planning renal access.
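
    The screening rule reduces to a one-line tangent check. A sketch (variable names hypothetical; 0.58 is the article's cut-off, and reading it as roughly a 30-degree angulation limit, since tan 30 is about 0.577, is our assumption):

```python
def infracostal_feasible(x, y, limit=0.58):
    """x: vertical distance from the stone to the first rib-free
    CT slice; y: depth of the target stone (same units).
    Infracostal access by vertical C-arm angulation is predicted
    feasible when x / y stays below the limit, i.e. when the
    required angulation atan(x / y) is small enough."""
    return x / y < limit
```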

  8. Shaping the light for the investigation of depth-extended scattering media

    NASA Astrophysics Data System (ADS)

    Osten, W.; Frenner, K.; Pedrini, G.; Singh, A. K.; Schindler, J.; Takeda, M.

    2018-02-01

    Scattering media are an ongoing challenge for all kinds of imaging technologies, including coherent and incoherent principles. Inspired by new approaches of computational imaging and supported by the availability of powerful computers, spatial light modulators, light sources and detectors, a variety of new methods ranging from holography to time-of-flight imaging, phase conjugation, phase recovery using iterative algorithms and correlation techniques have been introduced and applied to different types of objects. However, considering the obvious progress in this field, several problems are still a matter of investigation, and their solution could open new doors for the inspection and application of scattering media as well. In particular, these open questions include the possibility of extending the 2D approach to the inspection of depth-extended objects, the direct use of a scattering medium as a simple tool for imaging of complex objects, and the improvement of coherent inspection techniques for the dimensional characterization of incoherently radiating spots embedded in scattering media. In this paper we show our recent findings in coping with these challenges. First we describe how to explore depth-extended objects by means of a scattering medium. Afterwards, we extend this approach by implementing a new type of microscope that makes use of a simple scatter plate as a kind of flat and unconventional imaging lens. Finally, we introduce our shearing interferometer in combination with structured illumination for retrieving the axial position of fluorescent light-emitting spots embedded in scattering media.

  9. A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.

    Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.

  10. A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies

    DOE PAGES

    Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.; ...

    2017-10-01

    Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.

  11. An approach to multivariable control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions for computing the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.
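
    The feedback part has the familiar discrete PID form. A generic textbook sketch (the article's explicit gain expressions derived from the linearized robot model are not reproduced here):

```python
class PID:
    """Discrete PID loop: u = Kp*e + Ki * integral(e dt) + Kd * de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        # Accumulate the integral term and difference the error
        # for the derivative term.
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

    In the article's scheme this loop runs per joint, with gains chosen for pole placement rather than hand-tuned.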

  12. Multicriteria meta-heuristics for AGV dispatching control based on computational intelligence.

    PubMed

    Naso, David; Turchiano, Biagio

    2005-04-01

    In many manufacturing environments, automated guided vehicles are used to move the processed materials between various pickup and delivery points. The assignment of vehicles to unit loads is a complex problem that is often solved in real-time with simple dispatching rules. This paper proposes an automated guided vehicle dispatching approach based on computational intelligence. We adopt a fuzzy multicriteria decision strategy to simultaneously take into account multiple aspects in every dispatching decision. Since the typical short-term view of dispatching rules is one of the main limitations of such real-time assignment heuristics, we also incorporate in the multicriteria algorithm a specific heuristic rule that takes into account the empty-vehicle travel on a longer time-horizon. Moreover, we also adopt a genetic algorithm to tune the weights associated with each decision criterion in the global decision algorithm. The proposed approach is validated by means of a comparison with other dispatching rules, and with other recently proposed multicriteria dispatching strategies also based on computational intelligence. The analysis of the results obtained by the proposed dispatching approach in both nominal and perturbed operating conditions (congestion, faults) confirms its effectiveness.

  13. Low rank approach to computing first and higher order derivatives using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-07-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
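
    The rank-revealing step can be sketched with plain linear algebra, using Jacobian-vector products along random input directions as a stand-in for the AD runs (illustrative only; the OpenAD/Rapsodia tooling and the pseudo-variable construction are not reproduced):

```python
import numpy as np

def effective_rank(jvp, n_inputs, n_probes, tol=1e-10, seed=0):
    """Estimate the effective rank of a model's derivative map by
    collecting directional derivatives (Jacobian-vector products)
    along random input directions and counting the significant
    singular values of the resulting response matrix."""
    rng = np.random.default_rng(seed)
    probes = rng.standard_normal((n_inputs, n_probes))
    responses = np.column_stack([jvp(probes[:, i]) for i in range(n_probes)])
    s = np.linalg.svd(responses, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

    If the estimated rank is much smaller than the input dimension, the full derivative tensor can be computed with respect to only that many pseudo variables.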

  14. Extending fields in a level set method by solving a biharmonic equation

    NASA Astrophysics Data System (ADS)

    Moroney, Timothy J.; Lusmore, Dylan R.; McCue, Scott W.; McElwain, D. L. Sean

    2017-08-01

    We present an approach for computing extensions of velocities or other fields in level set methods by solving a biharmonic equation. The approach differs from other commonly used approaches to velocity extension because it deals with the interface fully implicitly through the level set function. No explicit properties of the interface, such as its location or the velocity on the interface, are required in computing the extension. These features lead to a particularly simple implementation using either a sparse direct solver or a matrix-free conjugate gradient solver. Furthermore, we propose a fast Poisson preconditioner that can be used to accelerate the convergence of the latter. We demonstrate the biharmonic extension on a number of test problems that serve to illustrate its effectiveness at producing smooth and accurate extensions near interfaces. A further feature of the method is the natural way in which it deals with symmetry and periodicity, ensuring through its construction that the extension field also respects these symmetries.

  15. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    NASA Astrophysics Data System (ADS)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte data cubes, the story of the giant SKA telescope is also that of collaborative efforts from radioastronomy, signal processing, optimization and computer sciences. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (of course partially) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and distribution of the computational load over the spectral bands. This work will allow us to implement and compare various 3D reconstruction approaches in a large-scale framework.

  16. An approach for drag correction based on the local heterogeneity for gas-solid flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Wang, Limin; Rogers, William

    2016-09-22

    The drag models typically used for gas-solids interaction are mainly developed from homogeneous systems of flow past a fixed particle assembly. It has been shown that the heterogeneous structures, i.e., clusters and bubbles in fluidized beds, need to be resolved to account for their effect in the numerical simulations. Since the heterogeneity is essentially captured through the local concentration gradient in the computational cells, this study proposes a simple approach to account for the non-uniformity of solids spatial distribution inside a computational cell and its effect on the interaction between gas and solid phases. Finally, to validate this approach, the predicted drag coefficient has been compared to the results from direct numerical simulations. In addition, the need to account for this type of heterogeneity is discussed for a periodic riser flow simulation with highly resolved numerical grids, and the impact of the proposed correction for drag is demonstrated.

  17. Experimental validation of spatial Fourier transform-based multiple sound zone generation with a linear loudspeaker array.

    PubMed

    Okamoto, Takuma; Sakaguchi, Atsushi

    2017-03-01

    Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness over conventional acoustic energy difference maximization has been shown in computer simulations. To establish the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least squares-based pressure matching, using a linear array of 64 loudspeakers implemented in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.
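
    The simple delay-and-sum baseline amounts to aligning propagation delays at a focal point (a generic formulation with a hypothetical array geometry, not the paper's 64-element setup):

```python
import math

def delay_and_sum_delays(elements, focus, c=343.0):
    """Per-element delays (seconds) that steer a loudspeaker array
    toward a focal point: each element is delayed so that all
    wavefronts arrive at the focus simultaneously.
    elements: list of (x, y) positions; focus: (x, y); c: speed of
    sound in m/s."""
    dists = [math.dist(e, focus) for e in elements]
    dmax = max(dists)
    # The farthest element gets zero delay; nearer ones wait.
    return [(dmax - d) / c for d in dists]
```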

  18. Laplace Transform Based Radiative Transfer Studies

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.

    2006-12-01

    Multiple scattering is the major uncertainty for data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam scatter only once with particles in the atmosphere before reaching the receiver and a simple linear relationship between physical property and lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurement, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiple scattering returns are clear signals, the lack of a fast-enough lidar multiple scattering computation tool forces us to treat them as unwanted "noise" and use simple multiple scattering correction schemes to remove them. Such treatments waste the multiple scattering signals and may cause orders-of-magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is done with Monte Carlo simulations. Monte Carlo simulations take minutes to hours; they are too slow for interactive satellite data analysis and can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem. The majority of the radiative transfer computation goes to matrix inversion processes, FFT and inverse Laplace transforms. 2. Hardware solution: perform the well-defined matrix inversion, FFT and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.

  19. 20 CFR 725.608 - Interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... simple annual interest, computed from the date on which the benefits were due. The interest shall be... payment of retroactive benefits, the beneficiary shall also be entitled to simple annual interest on such... entitled to simple annual interest computed from the date upon which the beneficiary's right to additional...
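
    The accrual rule in the excerpt is plain simple (non-compounding) annual interest. A minimal sketch (the day-count convention is our assumption; the regulation excerpt does not specify one):

```python
def simple_annual_interest(principal, annual_rate, days_late, year_days=365):
    """Simple annual interest on past-due benefits, computed from
    the date the benefits were due; no compounding."""
    return principal * annual_rate * (days_late / year_days)
```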

  20. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  1. Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jorissen, Filip; Wetter, Michael; Helsen, Lieve

    This paper presents an approach for speeding up Modelica models. Insight is provided into how Modelica models are solved and what determines the tool’s computational speed. Aspects such as algebraic loops, code efficiency and integrator choice are discussed. This is illustrated using simple building simulation examples and Dymola. The generality of the work is in some cases verified using OpenModelica. Using this approach, a medium sized office building including building envelope, heating, ventilation and air conditioning (HVAC) systems and control strategy can be simulated at a speed five hundred times faster than real time.

  2. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.

  3. Modelling molecular adsorption on charged or polarized surfaces: a critical flaw in common approaches.

    PubMed

    Bal, Kristof M; Neyts, Erik C

    2018-03-28

    A number of recent computational material design studies based on density functional theory (DFT) calculations have put forward a new class of materials with electrically switchable chemical characteristics that can be exploited in the development of tunable gas storage and electrocatalytic applications. We find systematic flaws in almost every computational study of gas adsorption on polarized or charged surfaces, stemming from an improper and unreproducible treatment of periodicity, leading to very large errors of up to 3 eV in some cases. Two simple corrective procedures that lead to consistent results are proposed, constituting a crucial course correction to the research in the field.

  4. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
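
    A minimal single-objective DE/rand/1/bin loop illustrates the base algorithm (the paper's Pareto multiobjective extension and neural-network surrogates are omitted; parameter values are common defaults, not the paper's):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, seed=1):
    """Minimize f over box bounds with DE/rand/1/bin.
    bounds: list of (lo, hi) per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutate: combine three distinct other members.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            # Greedy selection.
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```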

  5. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
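
    The "explaining away" effect the paper refers to can be reproduced with plain rejection sampling on a toy burglary/earthquake/alarm network (illustrative probabilities of our choosing; none of the spiking-neuron machinery is involved):

```python
import random

def explaining_away_demo(n_samples=100000, seed=0):
    """Rejection sampling in the 3-node net B -> A <- E.
    Conditioning on the alarm A makes a burglary B likely, but
    additionally observing an earthquake E 'explains away' B."""
    rng = random.Random(seed)

    def sample():
        b = rng.random() < 0.1
        e = rng.random() < 0.1
        p_alarm = 0.95 if (b or e) else 0.01
        a = rng.random() < p_alarm
        return b, e, a

    kept_a = [s for s in (sample() for _ in range(n_samples)) if s[2]]
    p_b_given_a = sum(s[0] for s in kept_a) / len(kept_a)
    kept_ae = [s for s in kept_a if s[1]]
    p_b_given_ae = sum(s[0] for s in kept_ae) / len(kept_ae)
    return p_b_given_a, p_b_given_ae
```

    With these numbers the exact posteriors are P(B|A) near 0.50 and P(B|A,E) = 0.10, so the burglary hypothesis drops sharply once the earthquake is observed.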

  6. State-Transition Structures in Physics and in Computation

    NASA Astrophysics Data System (ADS)

    Petri, C. A.

    1982-12-01

    In order to establish close connections between physical and computational processes, it is assumed that the concepts of “state” and of “transition” are acceptable both to physicists and to computer scientists, at least in an informal way. The aim of this paper is to propose formal definitions of state and transition elements on the basis of very low level physical concepts in such a way that (1) all physically possible computations can be described as embedded in physical processes; (2) the computational aspects of physical processes can be described on a well-defined level of abstraction; (3) the gulf between the continuous models of physics and the discrete models of computer science can be bridged by simple mathematical constructs which may be given a physical interpretation; (4) a combinatorial, nonstatistical definition of “information” can be given on low levels of abstraction which may serve as a basis to derive higher-level concepts of information, e.g., by a statistical or probabilistic approach. Conceivable practical consequences are discussed.

  7. Load Balancing Strategies for Multiphase Flows on Structured Grids

    NASA Astrophysics Data System (ADS)

    Olshefski, Kristopher; Owkes, Mark

    2017-11-01

    The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies are tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
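
    The imbalance problem can be illustrated with a simple cost-based assignment: give each processor a set of cells weighted by estimated work, heavier near the gas-liquid interface. A hypothetical sketch, not the NGA implementation:

```python
import heapq

def balance_cells(costs, n_procs):
    """Greedy longest-processing-time assignment of per-cell costs
    to processors; returns the max/mean load ratio (1.0 means a
    perfect balance, larger means idle waiting)."""
    loads = [0.0] * n_procs
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    for c in sorted(costs, reverse=True):
        load, p = heapq.heappop(heap)  # least-loaded processor
        loads[p] = load + c
        heapq.heappush(heap, (loads[p], p))
    mean = sum(loads) / n_procs
    return max(loads) / mean
```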

  8. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.

  9. A template-based approach for parallel hexahedral two-refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.

  10. Adversarial risk analysis with incomplete information: a level-k approach.

    PubMed

    Rothschild, Casey; McLay, Laura; Guikema, Seth

    2012-07-01

    This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.
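
    A toy level-k recursion for a 2x2 defend-attack game can be sketched as follows (payoff matrices and the simultaneous-move setup are hypothetical; the article's sequential model with partial revelation of the defender's countermeasures is not reproduced):

```python
def best_response(payoff, opp_mix):
    """Action index maximizing expected payoff against a mixed
    opponent strategy; payoff[i][j] = my action i vs their action j."""
    exp = [sum(p * q for p, q in zip(row, opp_mix)) for row in payoff]
    return max(range(len(exp)), key=exp.__getitem__)

def level_k_action(k, my_payoff, their_payoff, level0_mix):
    """Level-k reasoning: a level-0 player uses the fixed mixed
    strategy level0_mix; a level-k player best-responds to an
    imagined level-(k-1) opponent."""
    if k == 1:
        return best_response(my_payoff, level0_mix)
    j = level_k_action(k - 1, their_payoff, my_payoff, level0_mix)
    opp_mix = [1.0 if i == j else 0.0 for i in range(len(my_payoff[0]))]
    return best_response(my_payoff, opp_mix)
```

    With a naive (uniform) level-0, the attacker attacks at level 1, the defender then defends at level 2, and a level-3 attacker, anticipating the defense, refrains.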

  11. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, system performance degrades gracefully when failures are detected, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the fact that the total operations performed over all the nodes exceed those of a centralized controller, and its formulation for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.

  12. Probabilistic co-adaptive brain-computer interfacing

    NASA Astrophysics Data System (ADS)

    Bryan, Matthew J.; Martin, Stefan A.; Cheung, Willy; Rao, Rajesh P. N.

    2013-12-01

    Objective. Brain-computer interfaces (BCIs) are confronted with two fundamental challenges: (a) the uncertainty associated with decoding noisy brain signals, and (b) the need for co-adaptation between the brain and the interface so as to cooperatively achieve a common goal in a task. We seek to mitigate these challenges. Approach. We introduce a new approach to brain-computer interfacing based on partially observable Markov decision processes (POMDPs). POMDPs provide a principled approach to handling uncertainty and achieving co-adaptation in the following manner: (1) Bayesian inference is used to compute posterior probability distributions (‘beliefs’) over brain and environment state, and (2) actions are selected based on entire belief distributions in order to maximize total expected reward; by employing methods from reinforcement learning, the POMDP’s reward function can be updated over time to allow for co-adaptive behaviour. Main results. We illustrate our approach using a simple non-invasive BCI which optimizes the speed-accuracy trade-off for individual subjects based on the signal-to-noise characteristics of their brain signals. We additionally demonstrate that the POMDP BCI can automatically detect changes in the user’s control strategy and can co-adaptively switch control strategies on-the-fly to maximize expected reward. Significance. Our results suggest that the framework of POMDPs offers a promising approach for designing BCIs that can handle uncertainty in neural signals and co-adapt with the user on an ongoing basis. The fact that the POMDP BCI maintains a probability distribution over the user’s brain state allows a much more powerful form of decision making than traditional BCI approaches, which have typically been based on the output of classifiers or regression techniques. Furthermore, the co-adaptation of the system allows the BCI to make online improvements to its behaviour, adjusting itself automatically to the user’s changing circumstances.
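The Bayesian belief update at the heart of step (1) can be sketched for a generic discrete POMDP (a textbook filter, not the authors' BCI-specific model; the arrays in the example are hypothetical).

```python
import numpy as np

def belief_update(belief, T, O, action, obs):
    """One step of POMDP belief tracking: predict the next-state
    distribution through the transition model for the chosen action,
    then condition on the received observation via Bayes' rule.

    T[a][s, s'] = P(s' | s, a);  O[a][s', o] = P(o | s', a)."""
    predicted = T[action].T @ belief           # sum_s P(s'|s,a) b(s)
    posterior = O[action][:, obs] * predicted  # weight by P(o|s',a)
    return posterior / posterior.sum()         # renormalize
```

With a static hidden state and an 80%-reliable observation channel, a single observation shifts a uniform belief to (0.8, 0.2), which is the kind of posterior the POMDP controller then acts on.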

  13. Less can be more: How to make operations more flexible and robust with fewer resources

    NASA Astrophysics Data System (ADS)

    Haksöz, Çağrı; Katsikopoulos, Konstantinos; Gigerenzer, Gerd

    2018-06-01

    We review empirical evidence from practice and general theoretical conditions under which simple rules of thumb can help to make operations flexible and robust. An operation is flexible when it responds adaptively to adverse events such as natural disasters; an operation is robust when it is less affected by adverse events in the first place. We illustrate the relationship between flexibility and robustness in the context of supply chain risk. In addition to increasing flexibility and robustness, simple rules simultaneously reduce the need for resources such as time, money, information, and computation. We illustrate the simple-rules approach with an easy-to-use graphical aid for diagnosing and managing supply chain risk. More generally, we recommend a four-step process for determining the amount of resources that decision makers should invest so as to increase flexibility and robustness.

  14. Manipulators with flexible links: A simple model and experiments

    NASA Technical Reports Server (NTRS)

    Shimoyama, Isao; Oppenheim, Irving J.

    1989-01-01

    A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real-time computation as might be applied in model-based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.

  15. Surface Curvatures Computation from Equidistance Contours

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiromi T.; Kling, Olivier; Lee, Daniel T. L.

    1990-03-01

    Our research addresses the 3-D shape representation problem for a special class of range image, one where the natural mode of the acquired range data is in the form of equidistance contours, as exemplified by a moire interferometry range system. In this paper we present a novel surface curvature computation scheme that directly computes the surface curvatures (the principal curvatures, Gaussian curvature, and mean curvature) from the equidistance contours without any explicit computation or implicit estimation of partial derivatives. We show how the special nature of the equidistance contours, specifically the dense information of the surface curves in the 2-D contour plane, turns into an advantage for the computation of the surface curvatures. The approach is based on simple geometric constructions to obtain the normal sections and the normal curvatures. This method is general and can be extended to any dense range image data. We show in detail how this computation is formulated and give an analysis of the error bounds of the computation steps, showing that the method is stable. Computational results on real equidistance range contours are also shown.

  16. Modeling the Effects of Beam Size and Flaw Morphology on Ultrasonic Pulse/Echo Sizing of Delaminations in Carbon Composites

    NASA Technical Reports Server (NTRS)

    Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan

    2012-01-01

    The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example, the -6 dB contours of the measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined where one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3-D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6 dB rule for delamination sizing.

  17. Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning.

    PubMed

    Xu, Zhoubing; Burke, Ryan P; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A

    2015-08-01

    Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so-called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Conjugate gradient based projection - A new explicit methodology for frictional contact

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Li, Maocheng; Sha, Desong

    1993-01-01

    With special attention to applicability to parallel computation and vectorization, a new and effective explicit approach involving a conjugate gradient based projection methodology is proposed in this study for linear complementarity formulations of contact problems with Coulomb friction. The overall objective is to provide an explicit computational methodology for the complete contact problem with friction. In this regard, the primary idea for solving the linear complementarity formulations stems from an established search direction which is projected onto a feasible region determined by the non-negativity constraint; this direction is then applied within the Fletcher-Reeves conjugate gradient method, resulting in a powerful explicit methodology that possesses high accuracy, excellent convergence characteristics, and fast computational speed, and is relatively simple to implement for contact problems involving Coulomb friction.
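The flavor of the method can be sketched for the frictionless case: a linear complementarity problem (LCP) with a symmetric positive-definite matrix is equivalent to a bound-constrained quadratic minimization, which a projected, restarted Fletcher-Reeves iteration can solve. This is a simplified illustration, not the paper's full scheme with Coulomb friction.

```python
import numpy as np

def lcp_projected_cg(A, b, tol=1e-12, max_iter=1000):
    """Solve the LCP  w = A x + b >= 0,  x >= 0,  x . w = 0  (A SPD)
    by minimizing 0.5 x'Ax + b'x over the non-negative orthant with a
    projected, restarted Fletcher-Reeves conjugate-gradient iteration."""
    x = np.zeros(len(b))
    d, gg_old, restart = None, 1.0, True
    for _ in range(max_iter):
        g = A @ x + b
        # reduced gradient: ignore components pinned at an active bound
        r = np.where((x <= 0.0) & (g > 0.0), 0.0, g)
        gg = r @ r
        if gg < tol:
            break
        d = -r if (d is None or restart) else -r + (gg / gg_old) * d
        alpha = gg / (d @ A @ d)           # step from the standard CG formula
        x_trial = x + alpha * d
        restart = bool(np.any(x_trial < 0.0))  # projection clipped: restart
        x = np.maximum(0.0, x_trial)
        gg_old = gg
    return x
```

On a small SPD system the iterate satisfies the complementarity conditions: x and w = Ax + b are both non-negative and orthogonal.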

  19. Fast and Reliable Thermodynamic Approach for Determining the Protonation State of the Asp Dyad.

    PubMed

    Huang, Jinfeng; Sun, Bin; Yao, Yuan; Liu, Junjun

    2017-09-25

    The protonation state of the Asp dyad is significantly important in revealing enzymatic mechanisms and developing drugs. However, it is hard to determine by calculating free energy changes between possible protonation states, because the free energy changes due to protein conformational flexibility are usually much larger than those originating from different locations of protons. Sophisticated and computationally expensive methods such as free energy perturbation, thermodynamic integration (TI), and quantum mechanics/molecular mechanics are therefore usually used for this purpose. In the present study, we have developed a simple thermodynamic approach that effectively eliminates the free energy changes arising from protein conformational flexibility and estimates the free energy changes originating only from the locations of protons, providing a fast and reliable method for determining the protonation state of Asp dyads. Tests of this approach on a total of 15 Asp dyad systems, including BACE-1 and HIV-1 protease, show that its predictions are all consistent with experiments or with the computationally expensive TI calculations. It is clear that our thermodynamic approach can be used to rapidly and reliably determine the protonation state of the Asp dyad.

  20. Computational Flow Modeling of Human Upper Airway Breathing

    NASA Astrophysics Data System (ADS)

    Mylavarapu, Goutham

    Computational modeling of biological systems has gained a lot of interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Images (MRI) or Computed Tomography (CT) scans, with better and more accurate detail than traditional cadaver cast models. Computational studies using these individualized geometrical models have the advantages of non-invasiveness, ease of use, minimal patient interaction, and improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields including velocities, flow rates, airway wall pressure, shear stresses, and turbulence in an airway. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcome of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to patients. In the current work, three different computational approaches were used to investigate flow in airway geometries: Computational Fluid Dynamics (CFD), Flow-Structure Interaction (FSI), and particle flow simulations. The CFD approach assumes the airway wall is rigid and is relatively easy to simulate, compared to the more challenging FSI approach, where interactions of airway wall deformations with the flow are also accounted for. The CFD methodology using different turbulence models is validated against experimental measurements in an airway phantom. Two case studies using CFD are demonstrated: one quantifying a pre- and post-operative airway, and another performing virtual surgery to determine the best possible surgery in a constricted airway. The unsteady Large Eddy Simulation (LES) and steady Reynolds-Averaged Navier-Stokes (RANS) approaches in CFD modeling are discussed. The more challenging FSI approach is modeled first in a simple two-dimensional anatomical geometry, then extended to a simplified three-dimensional geometry, and finally to three-dimensionally accurate geometries. The concepts of virtual surgery and its differences from CFD are discussed. Finally, the influence of various drug delivery parameters on particle deposition efficiency in the airway anatomy is investigated through particle flow simulations in a nasal airway model.

  1. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. Because sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, a fast and accurate computational method for shape optimization is always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate the rapid shape changes required in the design stage of a consumer product or machinery, where many iterations of shape changes are needed. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of 'volumetric' and 'pixel') is essentially a volume element defined by the uniform lattice, and does not require separate connectivity information as a mesh element does. In the presented method, geometry is represented with voxels that easily adapt to shape changes, and is therefore more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures with simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is a key component of an accurate analysis. A sensitivity study showed that at least 6 elements per acoustic wavelength are required, and a complexity study showed a minimal reduction in computational time.

  2. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  3. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  4. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions

    PubMed Central

    Box, Simon

    2014-01-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach based on simple neural network classifiers is used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world, and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570

  5. Utilizing computerized entertainment education in the development of decision aids for lower literate and naïve computer users.

    PubMed

    Jibaja-Weiss, Maria L; Volk, Robert J

    2007-01-01

    Decision aids have been developed by using various delivery methods, including interactive computer programs. Such programs, however, still rely heavily on written information, health and digital literacy, and reading ease. We describe an approach to overcome these potential barriers for low-literate, underserved populations by making design considerations for poor readers and naïve computer users and by using concepts from entertainment education to engage the user and to contextualize the content for the user. The system design goals are to make the program both didactic and entertaining and the navigation and graphical user interface as simple as possible. One entertainment education strategy, the soap opera, is linked seamlessly to interactive learning modules to enhance the content of the soap opera episodes. The edutainment decision aid model (EDAM) guides developers through the design process. Although designing patient decision aids that are educational, entertaining, and targeted toward poor readers and those with limited computer skills is a complex task, it is a promising strategy for aiding this population. Entertainment education may be a highly effective approach to promoting informed decision making for patients with low health literacy.

  6. Quantum Monte Carlo for atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  7. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    PubMed

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach based on simple neural network classifiers is used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world, and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.

  8. Investigation of Climate Change Impact on Water Resources for an Alpine Basin in Northern Italy: Implications for Evapotranspiration Modeling Complexity

    PubMed Central

    Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco

    2014-01-01

    Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountain river basins, and the necessity of reducing computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce their error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin which was studied. PMID:25285917

  9. Investigation of climate change impact on water resources for an Alpine basin in northern Italy: implications for evapotranspiration modeling complexity.

    PubMed

    Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco

    2014-01-01

    Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountain river basins, and the necessity of reducing computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce their error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin which was studied.
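The record does not name the specific temperature-based evapotranspiration formula used; the Hargreaves-Samani equation is a representative member of that class and illustrates why only temperature (plus tabulated extraterrestrial radiation) is needed. The inputs below are illustrative values, not data from the study.

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration [mm/day].

    t_mean, t_max, t_min: daily air temperatures [degrees C]
    ra: extraterrestrial radiation, expressed as equivalent
        evaporation [mm/day] (a tabulated astronomical quantity)
    """
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)
```

The temperature range (t_max - t_min) acts as a proxy for solar radiation reaching the surface, which is what lets the simplified model skip the full energy balance.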

  10. Emerging Computer Media: On Image Interaction

    NASA Astrophysics Data System (ADS)

    Lippman, Andrew B.

    1982-01-01

    Emerging technologies such as inexpensive, powerful local computing, optical digital videodiscs, and the technologies of human-machine interaction are initiating a revolution in both image storage systems and image interaction systems. This paper will present a review of new approaches to computer media predicated upon three-dimensional position sensing, speech recognition, and high-density image storage. Examples will be shown, such as the Spatial Data Management System, wherein the free use of place results in intuitively clear retrieval systems and potentials for image association; the Movie-Map, wherein inherently static media generate dynamic views of data; and conferencing work-in-progress, wherein joint processing is stressed. Application to medical imaging will be suggested, but the primary emphasis is on the general direction of imaging and reference systems. We are passing the age of simple possibility of computer graphics and image processing and entering the age of ready usability.

  11. Performance of some numerical Laplace inversion methods on American put option formula

    NASA Astrophysics Data System (ADS)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

    Numerical inversion approaches for the Laplace transform are used to obtain a semianalytic solution. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to calculate American put options through the optimal exercise price in the Laplace space. A comparison of the methods on some simple functions is used to assess the accuracy and the parameters used in the calculation of American put options. The result obtained is the performance of each method in terms of accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
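As a hedged illustration of what numerical Laplace inversion involves, the sketch below implements the Gaver-Stehfest algorithm, a compact scheme that is related to but not among the three methods the record compares: f(t) is approximated as a weighted sum of transform evaluations on the real axis.

```python
from math import factorial, log

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    V = stehfest_weights(N)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))
```

The scheme is exact for F(s) = 1/s and accurate to several digits for smooth transforms such as F(s) = 1/(s + 1), whose inverse is exp(-t); its sensitivity to N in floating point is exactly the kind of parameter dependence the record's comparison examines.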

  12. Quantum computation on the edge of a symmetry-protected topological order.

    PubMed

    Miyake, Akimasa

    2010-07-23

    We elaborate the idea of quantum computation through measuring the correlation of a gapped ground state, while the bulk Hamiltonian is utilized to stabilize the resource. A simple computational primitive, by pulling out a single spin adiabatically from the bulk followed by its measurement, is shown to make any ground state of the one-dimensional isotropic Haldane phase useful ubiquitously as a quantum logical wire. The primitive is compatible with certain discrete symmetries that protect this topological order, and the antiferromagnetic Heisenberg spin-1 finite chain is practically available. Our approach manifests a holographic principle in that the logical information of a universal quantum computer can be written and processed perfectly on the edge state (i.e., boundary) of the system, supported by the persistent entanglement from the bulk even when the ground state and its evolution cannot be exactly analyzed.

  13. System analysis in rotorcraft design: The past decade

    NASA Technical Reports Server (NTRS)

    Galloway, Thomas L.

    1988-01-01

    Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on system analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so that rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness for drawing conclusions. In rotorcraft design this means combining design requirements, technology assessment, sensitivity analysis, and reviews of techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures range from simple graphical approaches to comprehensive analyses on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.

  14. An efficient method for facial component detection in thermal images

    NASA Astrophysics Data System (ADS)

    Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen

    2015-04-01

    A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
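
    The core of the pipeline above (temperature threshold, center of mass, integral projections) reduces to a few array operations. Below is a minimal numpy sketch on a synthetic "thermal" frame; the image, threshold, and the warm horizontal band standing in for the periorbital region are all invented for illustration.

    ```python
    import numpy as np

    def locate_warm_rows(img, thresh):
        """Segment by temperature threshold, then use a horizontal integral
        projection (row sums of the binary mask) to locate the warmest band."""
        mask = img > thresh
        rows, cols = np.nonzero(mask)
        center = rows.mean(), cols.mean()       # center of mass of the face mask
        projection = mask.sum(axis=1)           # mask pixels per image row
        return center, int(projection.argmax())

    # Synthetic frame: cool background with a warm band around row 11-13.
    img = np.full((32, 32), 20.0)
    img[11:14, 8:24] = 34.0
    (center_r, center_c), band_row = locate_warm_rows(img, thresh=30.0)
    ```

    A real implementation would add the morphological filtering step and restrict the projection to the search window around the center of mass, as the abstract describes.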

  15. Dynamic analysis of flexible rotor-bearing systems using a modal approach

    NASA Technical Reports Server (NTRS)

    Choy, K. C.; Gunter, E. J.; Barrett, L. E.

    1978-01-01

    The generalized dynamic equations of motion were obtained by the direct stiffness method for multimass flexible rotor-bearing systems. The direct solution of the equations of motion is illustrated on a simple 3-mass system. For complex rotor-bearing systems, direct solution of the equations becomes very difficult, but transforming the equations of motion into modal coordinates can greatly simplify the computation. The use of undamped and damped system mode shapes in the transformation is discussed. A set of undamped critical speed modes is used to transform the equations of motion into a set of coupled modal equations of motion. A rapid procedure for computing the stability, steady-state unbalance response, and transient response of the rotor-bearing system is presented, along with examples of the application of this modal approach. The system dynamics are further investigated with frequency spectrum analysis of the transient response.
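
    The undamped modal transformation at the heart of this approach can be sketched for a 3-mass system; the mass and stiffness values below are invented for illustration. Mass-normalized mode shapes turn M into the identity and K into a diagonal matrix of squared natural frequencies.

    ```python
    import numpy as np

    # Illustrative 3-mass model with chain stiffness (values are arbitrary).
    M = np.diag([2.0, 1.0, 1.5])
    k = 1.0e4
    K = k * np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  2.0]])

    # Undamped critical-speed modes: solve K*phi = w^2 * M*phi via the
    # symmetric form M^{-1/2} K M^{-1/2}.
    Minv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
    w2, Q = np.linalg.eigh(Minv_half @ K @ Minv_half)
    Phi = Minv_half @ Q                 # mass-normalized mode shapes

    # The transformation x = Phi q decouples the undamped system:
    M_modal = Phi.T @ M @ Phi           # -> identity
    K_modal = Phi.T @ K @ Phi           # -> diag(w^2)
    ```

    With damping and gyroscopic terms added, the modal equations remain coupled (as the abstract notes), but the reduced coordinate set still makes stability and response computations far cheaper.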

  16. On the lower bound of monitor solutions of maximally permissive supervisors for a subclass α-S3PR of flexible manufacturing systems

    NASA Astrophysics Data System (ADS)

    Chao, Daniel Yuh

    2015-01-01

    Recently, a novel and computationally efficient method, based on a vector covering approach, to design optimal control places, and an iteration approach that computes the reachability graph to obtain a maximally permissive liveness-enforcing supervisor for FMS (flexible manufacturing systems), have been reported. However, the relationship between the structure of the net and the minimal number of monitors required remains unclear. This paper develops a theory showing that the minimal number of monitors required cannot be less than the number of basic siphons in α-S3PR (systems of simple sequential processes with resources). This confirms that two of the three systems controlled by Chen et al. have a minimal monitor configuration, since they belong to α-S3PR and the number of monitors in each example equals that of basic siphons.

  17. A computational study of liposome logic: towards cellular computing from the bottom up

    PubMed Central

    Smaldon, James; Romero-Campero, Francisco J.; Fernández Trillo, Francisco; Gheorghe, Marian; Alexander, Cameron

    2010-01-01

    In this paper we propose a new bottom-up approach to cellular computing, in which computational chemical processes are encapsulated within liposomes. This “liposome logic” approach (also called vesicle computing) makes use of supra-molecular chemistry constructs, e.g. protocells, chells, etc. as minimal cellular platforms to which logical functionality can be added. Modeling and simulations feature prominently in “top-down” synthetic biology, particularly in the specification, design and implementation of logic circuits through bacterial genome reengineering. The second contribution in this paper is the demonstration of a novel set of tools for the specification, modelling and analysis of “bottom-up” liposome logic. In particular, simulation and modelling techniques are used to analyse some example liposome logic designs, ranging from relatively simple NOT and NAND gates to SR latches and D flip-flops, all the way to 3-bit ripple counters. The approach we propose consists of specifying, by means of P systems, gene regulatory network-like systems operating inside proto-membranes. This P systems specification can be automatically translated and executed through a multiscale pipeline composed of a dissipative particle dynamics (DPD) simulator and Gillespie’s stochastic simulation algorithm (SSA). Finally, model selection and analysis can be performed through a model checking phase. This is the first paper we are aware of that brings to bear formal specifications, DPD, SSA and model checking on the problem of modeling target computational functionality in protocells. Potential chemical routes for the laboratory implementation of these simulations are also discussed, thus suggesting, for the first time, a potentially realistic physicochemical implementation for membrane computing from the bottom up. PMID:21886681
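
    Gillespie's SSA, the stochastic layer of the pipeline above, is simple to sketch. The following is a minimal implementation for a hypothetical one-species birth-death model (production at rate k, degradation at rate d*n), not one of the paper's gate designs; its time-averaged copy number should approach k/d.

    ```python
    import math
    import random

    def gillespie_birth_death(k=10.0, d=0.1, t_end=2000.0, seed=1):
        """Gillespie SSA for: 0 -> P at rate k, P -> 0 at rate d*n.
        Returns the time-averaged copy number of P."""
        rng = random.Random(seed)
        t, n = 0.0, 0
        weighted = 0.0                      # integral of n dt
        while t < t_end:
            a_prod, a_deg = k, d * n
            a_total = a_prod + a_deg
            tau = -math.log(rng.random()) / a_total   # time to next reaction
            weighted += n * min(tau, t_end - t)
            t += tau
            if rng.random() * a_total < a_prod:
                n += 1                       # production event
            else:
                n -= 1                       # degradation event
        return weighted / t_end

    avg = gillespie_birth_death()            # steady-state mean near k/d = 100
    ```

    A gene-regulatory logic gate is simulated the same way, just with more species and propensity functions derived from the P systems specification.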

  18. Simulation Experiment on Landing Site Selection Using a Simple Geometric Approach

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Tong, X.; Xie, H.; Jin, Y.; Liu, S.; Wu, D.; Liu, X.; Guo, L.; Zhou, Q.

    2017-07-01

    Safe landing is an important part of any planetary exploration mission. Even fine-scale terrain hazards (such as rocks, small craters, and steep slopes that cannot be accurately detected from orbital reconnaissance) can pose a serious risk to a planetary lander or rover and its on-board scientific instruments. In this paper, a simple geometric approach to planetary landing hazard detection and safe landing site selection is proposed. To support a full implementation of this algorithm, two easy-to-compute metrics are presented for extracting terrain slope and roughness information. Unlike conventional methods, which must perform robust plane fitting and elevation interpolation to generate a DEM, in this work hazards are identified by processing the LiDAR point cloud directly. For safe landing site selection, a Generalized Voronoi Diagram is constructed; based on the idea of the maximum empty circle, the safest landing site can then be determined. In this algorithm, hazards are treated as general polygons, without special simplification (e.g., approximating hazards as discrete circles or ellipses), which better matches a real planetary exploration scenario. To validate the approach, a simulated planetary terrain model was constructed indoors using volcanic ash with rocks, and a commercial laser scanner mounted on a rail was used to scan the terrain surface from different hanging positions. The results demonstrate fair hazard detection capability and reasonable site selection compared with a conventional method, while consuming less computational time and memory. Hence, it is a feasible candidate approach for future precision landing site selection on a planetary surface.
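
    The maximum-empty-circle idea can be sketched with an ordinary point-site Voronoi diagram (a simplification of the paper's generalized, polygon-hazard diagram): the center of the largest hazard-free circle lies at a Voronoi vertex, so it suffices to scan the vertices for the largest clearance. The hazard layout below is invented for illustration.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi

    def safest_site(hazards, bounds):
        """Pick the landing site as the in-bounds Voronoi vertex with the
        largest clearance to the nearest hazard (maximum empty circle)."""
        vor = Voronoi(hazards)
        (xmin, ymin), (xmax, ymax) = bounds
        best, best_clearance = None, -1.0
        for v in vor.vertices:
            if not (xmin <= v[0] <= xmax and ymin <= v[1] <= ymax):
                continue
            clearance = np.min(np.linalg.norm(hazards - v, axis=1))
            if clearance > best_clearance:
                best, best_clearance = v, clearance
        return best, best_clearance

    # Toy hazard field: four corner rocks plus one at the center of the site.
    hazards = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.5]])
    site, clearance = safest_site(hazards, bounds=((0., 0.), (1., 1.)))
    ```

    Treating hazards as polygons, as the paper does, replaces the point sites with polygon boundaries but leaves the vertex-scanning logic essentially unchanged.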

  19. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability, and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty of choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached full maturity for production codes, especially in parallel computing environments.
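
    The complex-variable technique of the title avoids the step-size dilemma described above: for an analytic function, f(x + ih) = f(x) + ih f'(x) + O(h^2), so Im(f(x+ih))/h gives the derivative with no subtractive cancellation and h can be made vanishingly small. A minimal sketch on a hypothetical scalar cost function (not one of the paper's flow quantities):

    ```python
    import cmath
    import math

    def complex_step(f, x, h=1e-30):
        """Complex-step derivative: Im(f(x+ih))/h, immune to cancellation."""
        return f(complex(x, h)).imag / h

    # Toy cost function with a known analytic derivative.
    f = lambda z: cmath.exp(z) * cmath.sin(z)
    fprime = lambda x: math.exp(x) * (math.sin(x) + math.cos(x))

    x0 = 1.5
    d_cs = complex_step(f, x0)                       # machine-accurate
    d_fd = (f(x0 + 1e-8).real - f(x0 - 1e-8).real) / 2e-8   # step-size limited
    ```

    The central-difference estimate is roundoff-limited to roughly single precision, while the complex step matches the analytic derivative to machine precision; the price is that the flow solver must be run in complex arithmetic.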

  20. The comparison of various approach to evaluation erosion risks and design control erosion measures

    NASA Astrophysics Data System (ADS)

    Kapicka, Jiri

    2015-04-01

    At present, the Czech Republic has a single established methodology for computing and comparing erosion risks, which also includes a method for designing erosion control measures. The methodology is based on the Universal Soil Loss Equation (USLE) and its result, the long-term average annual soil loss (G), and it is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show, however, that many troubles and damages stem from local episodic erosion events. The extent and impact of these events depend on local precipitation, the current plant growth phase, and soil conditions. Such events can damage agricultural land, municipal property, and water-management structures even at locations that appear to be in good condition from the point of view of long-term average annual soil loss. An alternative way to compute and compare erosion risks is an episode-based approach. This paper presents a comparison of various approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events had been recorded. The study area is a simple piece of agricultural land without barriers that could strongly influence water flow and sediment transport. The computation of erosion risks for all methodologies was based on laboratory analysis of soil samples taken from the study area. Results of the USLE and MUSLE methodologies and of the mathematical model Erosion 3D were compared. Variances in the spatial distribution of the places with the highest soil erosion were compared and discussed. A further part presents the variance in designed erosion control measures when the designs are based on the different methodologies. The results show how the computed erosion risks vary with the methodology used. 
These variances can open a discussion about the different approaches to computing and evaluating erosion risks in areas of different importance.
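
    The USLE baseline underlying the Czech methodology is a simple product of factors, A = R * K * LS * C * P. A minimal sketch with illustrative factor values (not data from the study area):

    ```python
    def usle_soil_loss(R, K, LS, C, P):
        """Universal Soil Loss Equation: long-term average annual soil loss
        A = R * K * LS * C * P (units follow the factor system in use)."""
        return R * K * LS * C * P

    # Illustrative factors for a gentle arable slope (invented values).
    A = usle_soil_loss(R=40.0,   # rainfall erosivity
                       K=0.30,   # soil erodibility
                       LS=1.2,   # slope length/steepness
                       C=0.25,   # cover management
                       P=1.0)    # support practice (none)
    ```

    The episode-based approaches compared in the paper (MUSLE, Erosion 3D) replace the long-term R factor with event-specific runoff or process-based routing, which is why their spatial risk patterns can differ sharply from the USLE result.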

  1. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this method accounts for most of the computational effort. To reduce the cost of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral over the whole domain, only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. Results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
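
    The dimension-reduction idea can be illustrated in 2D with Green's theorem (the planar case of the divergence theorem): to integrate f(x, y) = x over a polygon, pick F = (x^2/2, 0) so that div F = x, and evaluate only the boundary integral of (x^2/2) dy. The example below (a unit square, where the exact value is 0.5) is a toy stand-in for the FCM cell-cut domains.

    ```python
    import numpy as np

    def contour_integral_x(vertices, n_gauss=4):
        """Evaluate the area integral of x over a polygon via the divergence
        theorem: with F = (x^2/2, 0), div F = x, so the area integral equals
        the counterclockwise contour integral of (x^2/2) dy."""
        xi, wi = np.polynomial.legendre.leggauss(n_gauss)   # nodes on [-1, 1]
        total = 0.0
        n = len(vertices)
        for i in range(n):
            (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
            # map Gauss nodes onto the edge; dy/dt is constant on a segment
            x = 0.5 * (x0 + x1) + 0.5 * (x1 - x0) * xi
            total += np.sum(wi * 0.5 * x**2) * 0.5 * (y1 - y0)
        return total

    square = [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]   # CCW unit square
    val = contour_integral_x(square)                     # exact value: 0.5
    ```

    For system-matrix entries the integrand is a product of shape functions rather than x, but the same contour reduction applies, which is the source of the speedup over spacetree quadrature.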

  2. Creating a Simple Single Computational Approach to Modeling Rarefied and Continuum Flow About Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Goldstein, David B.; Varghese, Philip L.

    1997-01-01

    We proposed to create a single computational code incorporating methods that can model both rarefied and continuum flow to enable the efficient simulation of flow about spacecraft and high-altitude hypersonic aerospace vehicles. The code was to use a single grid structure that permits a smooth transition between the continuum and rarefied portions of the flow. Developing an appropriate computational boundary between the two regions represented a major challenge. The primary approach chosen involves coupling a four-speed Lattice Boltzmann model for the continuum flow with the DSMC method in the rarefied regime. We also explored the possibility of using a standard finite-difference Navier-Stokes solver for the continuum flow. With the resulting code we will ultimately investigate three-dimensional plume impingement effects, a subject of critical importance to NASA and related to the work of Drs. Forrest Lumpkin, Steve Fitzgerald and Jay Le Beau at Johnson Space Center. Below is a brief background on the project and a summary of the results as of the end of the grant.

  3. Toward the design of alkynylimidazole fluorophores: computational and experimental characterization of spectroscopic features in solution and in poly(methyl methacrylate).

    PubMed

    Barone, Vincenzo; Bellina, Fabio; Biczysko, Malgorzata; Bloino, Julien; Fornaro, Teresa; Latouche, Camille; Lessi, Marco; Marianetti, Giulia; Minei, Pierpaolo; Panattoni, Alessandro; Pucci, Andrea

    2015-10-28

    The possibilities offered by organic fluorophores in the preparation of advanced plastic materials have been increased by designing novel alkynylimidazole dyes, featuring different push and pull groups. This new family of fluorescent dyes was synthesized by means of a one-pot sequential bromination-alkynylation of the heteroaromatic core, and their optical properties were investigated in tetrahydrofuran and in poly(methyl methacrylate). An efficient in silico pre-screening scheme was devised as a step-by-step procedure in which electronic spectra are simulated first within a simple vertical-energy approach and then with more sophisticated vibronic approaches. Such an approach was also extended to efficiently simulate one-photon absorption and emission spectra of the dyes in the polymer environment for their potential application in luminescent solar concentrators. Besides the specific applications of this novel material, the integration of computational and experimental techniques reported here provides an efficient protocol that can be applied to make a selection among similar dye candidates, which constitute the essential responsive part of those fluorescent plastic materials.

  4. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.

    PubMed

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open source MIT license at http://sourceforge.net/projects/seqpig/

  5. Explaining Moral Behavior.

    PubMed

    Osman, Magda; Wiegmann, Alex

    2017-03-01

    In this review we make a simple theoretical argument which is that for theory development, computational modeling, and general frameworks for understanding moral psychology researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models that exist in moral psychology that tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument we show that by using a simple value-based decision model we can capture a range of core moral behaviors. Crucially, the argument we propose is that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.

  6. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
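
    The raw-sequence-to-bits step can be sketched with a toy extractor: for i.i.d. continuous inter-click times, t1 < t2 and t1 > t2 are equiprobable, so comparing disjoint pairs of intervals yields one unbiased bit per pair. This is a simplification for illustration, not the look-up-table extractor used in the device; the simulated exponential click stream is likewise hypothetical.

    ```python
    import random

    def bits_from_intervals(intervals):
        """Extract one bit per disjoint pair of i.i.d. inter-click times by
        comparison; ties (vanishing probability) are discarded."""
        bits = []
        for t1, t2 in zip(intervals[::2], intervals[1::2]):
            if t1 != t2:
                bits.append(1 if t1 > t2 else 0)
        return bits

    # Simulated Poissonian detector clicks at ~1 MHz.
    rng = random.Random(42)
    clicks = [rng.expovariate(1.0e6) for _ in range(20000)]
    bits = bits_from_intervals(clicks)
    frac_ones = sum(bits) / len(bits)
    ```

    Like the device's extractor, this scheme yields on the order of one bit per detector click (here, per two clicks) with deterministic, table-free post-processing.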

  7. Young Children and Turtle Graphics Programming: Generating and Debugging Simple Turtle Programs.

    ERIC Educational Resources Information Center

    Cuneo, Diane O.

    Turtle graphics is a popular vehicle for introducing children to computer programming. Children combine simple graphic commands to get a display screen cursor (called a turtle) to draw designs on the screen. The purpose of this study was to examine young children's abilities to function in a simple computer programming environment. Four- and…
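
    The command-combining idea the study examines can be captured by a tiny interpreter that tracks cursor position and heading; the two-command vocabulary below (FORWARD, RIGHT) is a minimal sketch, not the full turtle language the children used.

    ```python
    import math

    def run_turtle(commands):
        """Minimal turtle interpreter: FORWARD moves the cursor along the
        current heading, RIGHT turns it clockwise. Starts at the origin
        heading east; returns the list of visited points."""
        x, y, heading = 0.0, 0.0, 0.0        # heading in degrees
        path = [(x, y)]
        for op, arg in commands:
            if op == 'FORWARD':
                x += arg * math.cos(math.radians(heading))
                y += arg * math.sin(math.radians(heading))
                path.append((x, y))
            elif op == 'RIGHT':
                heading -= arg
        return path

    # A square: four FORWARD/RIGHT pairs return the turtle to its start,
    # the kind of program-and-debug exercise the study describes.
    square = [('FORWARD', 50), ('RIGHT', 90)] * 4
    path = run_turtle(square)
    ```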

  8. A method for the modelling of porous and solid wind tunnel walls in computational fluid dynamics codes

    NASA Technical Reports Server (NTRS)

    Beutner, Thomas John

    1993-01-01

    Porous wall wind tunnels have been used for several decades and have proven effective in reducing wall interference effects in both low speed and transonic testing. They allow for testing through Mach 1, reduce blockage effects and reduce shock wave reflections in the test section. Their usefulness in developing computational fluid dynamics (CFD) codes has been limited, however, by the difficulties associated with modelling the effect of a porous wall in CFD codes. Previous approaches to modelling porous wall effects have depended either upon a simplified linear boundary condition, which has proven inadequate, or upon detailed measurements of the normal velocity near the wall, which require extensive wind tunnel time. The current work was initiated in an effort to find a simple, accurate method of modelling a porous wall boundary condition in CFD codes. The development of such a method would allow data from porous wall wind tunnels to be used more readily in validating CFD codes. This would be beneficial when transonic validations are desired, or when large models are used to achieve high Reynolds numbers in testing. A computational and experimental study was undertaken to investigate a new method of modelling solid and porous wall boundary conditions in CFD codes. The method utilized experimental measurements at the walls to develop a flow field solution based on the method of singularities. This flow field solution was then imposed as a pressure boundary condition in a CFD simulation of the internal flow field. The effectiveness of this method in describing the effect of porosity changes on the wall was investigated. Also, the effectiveness of this method when only sparse experimental measurements were available has been investigated. The current work demonstrated this approach for low speed flows and compared the results with experimental data obtained from a heavily instrumented variable porosity test section. 
The approach developed was simple, computationally inexpensive, and did not require extensive or intrusive measurements of the boundary conditions during the wind tunnel test. It may be applied to both solid and porous wall wind tunnel tests.
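
    The method-of-singularities step can be sketched as a small linear inverse problem: build the influence of candidate singularities on the instrumented wall points, then solve (least squares) for the strengths that reproduce the measured wall-normal velocities. The geometry, source placement, and "measurements" below are all invented for illustration.

    ```python
    import numpy as np

    def influence_matrix(wall_pts, source_pts):
        """Wall-normal velocity induced at each wall point by unit 2D point
        sources: a source q at (xs, ys) induces q * r_vec / (2*pi*|r|^2);
        only the y (wall-normal) component is kept for a horizontal wall."""
        A = np.empty((len(wall_pts), len(source_pts)))
        for j, (xs, ys) in enumerate(source_pts):
            dx = wall_pts[:, 0] - xs
            dy = wall_pts[:, 1] - ys
            A[:, j] = dy / (2.0 * np.pi * (dx**2 + dy**2))
        return A

    # Upper test-section wall at y = 1, instrumented at 40 stations.
    wall = np.column_stack([np.linspace(-2.0, 2.0, 40), np.full(40, 1.0)])
    sources = [(-1.0, 2.0), (0.0, 2.0), (1.0, 2.0)]  # outside the flow field
    A = influence_matrix(wall, sources)

    true_q = np.array([0.3, -0.5, 0.2])
    measured_vn = A @ true_q                 # synthetic 'wall measurements'
    q, *_ = np.linalg.lstsq(A, measured_vn, rcond=None)
    ```

    The recovered singularity strengths define a flow-field solution whose pressure along the wall can then be imposed as the CFD boundary condition, and the least-squares formulation is what lets the method tolerate sparse measurements.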

  9. Scaling behavior of ground-state energy cluster expansion for linear polyenes

    NASA Astrophysics Data System (ADS)

    Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.

    Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
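
    The simple-Hückel baseline and its slow (inverse-power) approach to the high-polymer limit are easy to reproduce: for an N-site chain with alpha = 0 and resonance integral beta, the pi-energy per site tends to 4*beta/pi as N grows. A minimal sketch (uniform beta, no bond alternation, unlike the paper's refined schemes):

    ```python
    import numpy as np

    def huckel_pi_energy_per_site(n_sites, beta=-1.0):
        """Simple Hückel total pi-energy per site for an n-site linear
        polyene (alpha = 0), doubly filling the lowest n/2 orbitals."""
        H = np.zeros((n_sites, n_sites))
        idx = np.arange(n_sites - 1)
        H[idx, idx + 1] = H[idx + 1, idx] = beta   # nearest-neighbor hopping
        levels = np.sort(np.linalg.eigvalsh(H))
        return 2.0 * levels[: n_sites // 2].sum() / n_sites

    e20 = huckel_pi_energy_per_site(20)
    e200 = huckel_pi_energy_per_site(200)
    limit = 4.0 * (-1.0) / np.pi               # high-polymer limit, 4*beta/pi
    ```

    The finite-size error here decays only like 1/N, which is the inverse-power convergence the abstract contrasts with the exponentially fast convergence of the more refined schemes.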

  10. Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields

    DOE PAGES

    Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...

    2016-06-23

    Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
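
    The contrast between the two functional families can be illustrated numerically. Below, the "Slater-like" form carries a one-term polynomial prefactor (an illustrative simplification of the paper's Slater-ISA form, not its actual expression); its ratio to a pure Born-Mayer exponential grows linearly in r, which is one reason a single exponential struggles over a broad range of separations.

    ```python
    import math

    def born_mayer(r, A, b):
        """Conventional two-parameter exponential repulsion A*exp(-b*r)."""
        return A * math.exp(-b * r)

    def slater_like(r, A, b):
        """Exponential with a polynomial prefactor, as arises from
        overlapping exponential atomic densities (one-term illustration)."""
        return A * (1.0 + b * r) * math.exp(-b * r)

    # Ratio of the two forms grows linearly with separation: 1 + b*r.
    ratios = [slater_like(r, 1.0, 2.0) / born_mayer(r, 1.0, 2.0)
              for r in (1.0, 2.0, 3.0)]
    ```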

  11. Numerical evaluation of a single ellipsoid motion in Newtonian and power-law fluids

    NASA Astrophysics Data System (ADS)

    Férec, Julien; Ausias, Gilles; Natale, Giovanniantonio

    2018-05-01

    A computational model is developed for simulating the motion of a single ellipsoid suspended in a Newtonian or a power-law fluid. Based on a finite element method (FEM), the approach consists in seeking solutions for the linear and angular particle velocities using a minimization algorithm, such that the net hydrodynamic force and torque acting on the ellipsoid are zero. For a Newtonian fluid subjected to a simple shear flow, Jeffery's predictions are recovered for any aspect ratio. The motion of a single ellipsoidal fiber is found to be only slightly disturbed by the shear-thinning character of the suspending fluid when compared with Jeffery's solutions; surprisingly, the perturbation can be completely neglected for a particle with a large aspect ratio. Furthermore, the particle centroid is found to translate with the same linear velocity as the undisturbed simple shear flow evaluated at the particle centroid. This is confirmed by recent works based on experimental investigations and modeling approaches (1-2).
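
    The Jeffery benchmark used above can be sketched by integrating Jeffery's equation for the in-plane orientation angle of a spheroid in simple shear; the half-period of tumbling has the closed form pi*(re + 1/re)/gamma, against which a simple explicit integration can be checked. The aspect ratio and shear rate below are illustrative.

    ```python
    import math

    def jeffery_half_period(aspect_ratio, shear_rate=1.0, dt=1e-4):
        """Integrate Jeffery's equation for the in-plane angle,
        dphi/dt = gam/(re^2+1) * (re^2*cos(phi)^2 + sin(phi)^2),
        and return the time needed to tumble by pi."""
        re, gam = aspect_ratio, shear_rate
        phi, t = 0.0, 0.0
        while phi < math.pi:
            dphidt = gam / (re**2 + 1.0) * (re**2 * math.cos(phi)**2
                                            + math.sin(phi)**2)
            phi += dphidt * dt             # explicit Euler step
            t += dt
        return t

    re = 5.0
    t_half = jeffery_half_period(re)
    analytic = math.pi * (re + 1.0 / re)   # half of Jeffery's period, gam = 1
    ```

    The long dwell near alignment with the flow (where dphi/dt is smallest) is visible in the integration, and it is this orbit that the paper's FEM solutions recover for the Newtonian case.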

  12. Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions

    PubMed Central

    Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi

    2015-01-01

    In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Thus, extending the existing research and improving upon the previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). First, we propose new definitions for simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, based on these definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions. This study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region, and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss several examples to demonstrate the validity of the new method; comparisons with existing fuzzy models show that the proposed method is more expressive and can compute relations that the existing models cannot. PMID:25775452

  13. Observability during planetary approach navigation

    NASA Technical Reports Server (NTRS)

    Bishop, Robert H.; Burkhart, P. Daniel; Thurman, Sam W.

    1993-01-01

    The objective of the research is to develop an analytic technique to predict the relative navigation capability of different Earth-based radio navigation measurements. In particular, the problem is to determine the relative ability of geocentric range and Doppler measurements to detect the effects of the target planet gravitational attraction on the spacecraft during the planetary approach and near-encounter mission phases. A complete solution to the two-dimensional problem has been developed. Relatively simple analytic formulas are obtained for range and Doppler measurements which describe the observability content of the measurement data along the approach trajectories. An observability measure is defined which is based on the observability matrix for nonlinear systems. The results show good agreement between the analytic observability analysis and the computational batch processing method.
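
    The linear-system analogue of the observability question above is instructive: stacking C, CA, CA^2, ... and checking rank shows which states a given measurement type can reconstruct. The double-integrator example below is a toy stand-in (not the paper's two-dimensional approach dynamics), but it mirrors the range-versus-Doppler comparison: a position-like measurement observes everything, a rate-only measurement does not.

    ```python
    import numpy as np

    def observability_matrix(A, C):
        """Stack [C; CA; ...; CA^(n-1)]; (A, C) is observable iff this
        matrix has full column rank n."""
        n = A.shape[0]
        blocks = [C]
        for _ in range(n - 1):
            blocks.append(blocks[-1] @ A)
        return np.vstack(blocks)

    # Double integrator: state = (position, velocity).
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    C_range = np.array([[1.0, 0.0]])   # 'range-like' observation
    C_rate  = np.array([[0.0, 1.0]])   # 'Doppler-like' (rate) observation
    rank_range = np.linalg.matrix_rank(observability_matrix(A, C_range))
    rank_rate  = np.linalg.matrix_rank(observability_matrix(A, C_rate))
    ```

    The paper's observability measure generalizes this rank test to the nonlinear approach-navigation dynamics, quantifying how strongly the target planet's gravity signature appears in each data type.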

  14. On the heteroclinic connection problem for multi-well gradient systems

    NASA Astrophysics Data System (ADS)

    Zuniga, Andres; Sternberg, Peter

    2016-10-01

    We revisit the existence problem of heteroclinic connections in R^N associated with Hamiltonian systems involving potentials W : R^N → R having several global minima. Under very mild assumptions on W we present a simple variational approach to first find geodesics minimizing the length of curves joining any two of the potential wells, where length is computed with respect to a degenerate metric having conformal factor √W. Then we show that when such a minimizing geodesic avoids passing through other wells of the potential at intermediate times, it gives rise to a heteroclinic connection between the two wells. This work improves upon the approach of [22] and represents a more geometric alternative to the approaches of e.g. [5,10,14,17] for finding such connections.

  15. Application of Bayesian Approach in Cancer Clinical Trial

    PubMed Central

    Bhattacharjee, Atanu

    2014-01-01

    The application of the Bayesian approach in clinical trials has become more useful than classical methods, and it is beneficial from the design phase through to analysis. Bayesian methods allow direct statements about the drug treatment effect, and complex computational problems become simple to handle with Bayesian techniques. The technique is only feasible in the presence of prior information about the data, and inference is established through posterior estimates. However, the method has some limitations. The objective of this work was to explore the several merits and demerits of the Bayesian approach in cancer research. This review will help clinical researchers in oncology to understand the limitations and power of Bayesian techniques. PMID:29147387
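
    The "complex computations made simple" point is clearest in conjugate cases: for a response rate, a Beta prior updated with binomial data gives a Beta posterior in closed form. The single-arm trial numbers below are hypothetical, not from the paper.

    ```python
    def beta_binomial_posterior(alpha, beta, successes, failures):
        """Conjugate Bayesian update for a response rate: a Beta(alpha, beta)
        prior plus binomial data yields a Beta(alpha+s, beta+f) posterior."""
        a_post = alpha + successes
        b_post = beta + failures
        mean = a_post / (a_post + b_post)
        return a_post, b_post, mean

    # Weakly informative Beta(1, 1) prior; 12 responders in 40 patients.
    a, b, post_mean = beta_binomial_posterior(1.0, 1.0, 12, 28)
    ```

    The posterior mean (about 0.31 here) is the kind of direct probabilistic statement about the treatment effect that the classical framework does not provide; the dependence on the prior is also the source of the limitations the review discusses.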

  16. Direction of Coupling from Phases of Interacting Oscillators: A Permutation Information Approach

    NASA Astrophysics Data System (ADS)

    Bahraminasab, A.; Ghasemi, F.; Stefanovska, A.; McClintock, P. V. E.; Kantz, H.

    2008-02-01

    We introduce a directionality index for a time series based on a comparison of neighboring values. It can distinguish unidirectional from bidirectional coupling, as well as reveal and quantify asymmetry in bidirectional coupling. It is tested on a numerical model of coupled van der Pol oscillators, and applied to cardiorespiratory data from healthy subjects. There is no need for preprocessing and fine-tuning the parameters, which makes the method very simple, computationally fast and robust.

  17. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    DTIC Science & Technology

    2002-03-07

    like a low energy search landscape . 2.1.1 Symbolic/Formalized Problem Domain Description. Every computer representable problem can also be embodied...method [60]. 3.4 Energy Minimization Methods The energy landscape algorithms are based on the idea that a protein’s final resting conformation is...in our GA used to search the PSP problem energy landscape ). 3.5.1 Simple GA. The main routine in a sGA, after encoding the problem, builds a

  18. Three dimensional PNS solutions of hypersonic internal flows with equilibrium chemistry

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun

    1989-01-01

    An implicit procedure for solving parabolized Navier-Stokes equations under the assumption of a general equation of state for a gas in chemical equilibrium is given. A general and consistent approach for the evaluation of Jacobian matrices in the implicit operator avoids the use of unnecessary auxiliary quantities and approximations, and leads to a simple expression. Applications to two- and three-dimensional flow problems show efficiency in computer time and economy in storage.

  19. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.

    PubMed

    Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, such implementations require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement the arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
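    The probabilistic computing idea behind this line of work can be illustrated in a few lines: in unipolar stochastic computing a value in [0, 1] is encoded as the bit-probability of a random stream, and a single AND gate multiplies two such values. A minimal software sketch (not the authors' FPGA design; the stream length and seed are illustrative):

```python
import numpy as np

def to_stochastic(p, n, rng):
    # Encode a probability p in [0, 1] as a random bitstream of length n.
    return (rng.random(n) < p).astype(np.uint8)

def stochastic_multiply(p, q, n=100_000, seed=0):
    # In unipolar stochastic computing, ANDing two independent bitstreams
    # yields a stream whose bit-probability is the product p * q, so a
    # multiplier costs a single AND gate instead of a hardware multiplier.
    rng = np.random.default_rng(seed)
    a = to_stochastic(p, n, rng)
    b = to_stochastic(q, n, rng)
    return np.mean(a & b)  # Monte Carlo estimate of p * q
```

The accuracy scales only as 1/sqrt(n), which is the usual trade-off of stochastic arithmetic: very cheap gates, but long bitstreams for high precision.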

  20. Valuation of exotic options in the framework of Levy processes

    NASA Astrophysics Data System (ADS)

    Milev, Mariyan; Georgieva, Svetla; Markovska, Veneta

    2013-12-01

    In this paper we explore a straightforward procedure to price derivatives using the Monte Carlo approach when the underlying process is a jump-diffusion. We compare the Black-Scholes model with one of its extensions, the Merton model. The latter is better at capturing market phenomena and is comparable to stochastic volatility models in terms of pricing accuracy. We present simulations of asset paths and pricing of barrier options for both geometric Brownian motion and exponential Levy processes, the latter being the concrete case of the Merton model. A desired level of accuracy is obtained with simple computer operations in MATLAB in efficient computational time.
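    A hedged sketch of the kind of simulation described above: Merton jump-diffusion asset paths and a down-and-out barrier call priced by Monte Carlo. Function names and parameters are illustrative, and the sketch is in Python with NumPy rather than the authors' MATLAB; with the jump intensity set to zero it reduces to geometric Brownian motion.

```python
import numpy as np

def merton_paths(s0, r, sigma, lam, mu_j, sig_j, T, n_steps, n_paths, seed=0):
    # Merton jump-diffusion: lognormal diffusion plus compound-Poisson
    # jumps with lognormal sizes; drift is compensated so that the
    # discounted price is a martingale under the pricing measure.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0   # mean relative jump size
    drift = (r - lam * kappa - 0.5 * sigma**2) * dt
    log_s = np.full(n_paths, np.log(s0))
    paths = np.empty((n_steps + 1, n_paths))
    paths[0] = s0
    for k in range(1, n_steps + 1):
        z = rng.standard_normal(n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
        log_s += drift + sigma * np.sqrt(dt) * z + jumps
        paths[k] = np.exp(log_s)
    return paths

def down_and_out_call(paths, strike, barrier, r, T):
    # Knock out any path that ever touches the barrier, discount the rest.
    alive = paths.min(axis=0) > barrier
    payoff = np.maximum(paths[-1] - strike, 0.0) * alive
    return np.exp(-r * T) * payoff.mean()
```

With lam = 0 and a barrier far below the spot, the price should agree with the Black-Scholes vanilla call up to Monte Carlo error, which is a useful sanity check before turning the jumps on.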

  1. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    PubMed Central

    Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876

  2. Method to compute the stress-energy tensor for a quantum field outside a black hole that forms from collapse

    NASA Astrophysics Data System (ADS)

    Anderson, Paul; Evans, Charles

    2017-01-01

    A method to compute the stress-energy tensor for a quantized massless minimally coupled scalar field outside the event horizon of a 4-D black hole that forms from the collapse of a spherically symmetric null shell is given. The method is illustrated in the corresponding 2-D case which is mathematically similar but is simple enough that the calculations can be done analytically. The approach to the Unruh state at late times is discussed. National Science Foundation Grant No. PHY-1505875 to Wake Forest University and National Science Foundation Grant No. PHY-1506182 to the University of North Carolina, Chapel Hill

  3. Investigation of methods for finding boundaries in images and their use in lightweight hardware implementations of saliency map computation

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.

    2015-05-01

    This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the use of such techniques in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed method of saliency map detection is based on both image-space and frequency-space analysis. Several examples of test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
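    The abstract does not give the method's details; one classic example of saliency detection that combines image-space and frequency-space analysis is the spectral-residual method of Hou and Zhang, sketched below with NumPy only. The `avg_size` parameter is illustrative, and the usual post-smoothing of the map is omitted for brevity; this is not necessarily the authors' exact method.

```python
import numpy as np

def spectral_residual_saliency(img, avg_size=3):
    # Spectral residual saliency: the part of the log-amplitude spectrum
    # that survives local averaging marks "unexpected" (salient) content.
    f = np.fft.fft2(img.astype(float))
    log_amp = np.log1p(np.abs(f))    # log1p avoids log(0) at empty bins
    phase = np.angle(f)
    # Box-filter the log-amplitude spectrum (simple local average).
    pad = avg_size // 2
    padded = np.pad(log_amp, pad, mode='edge')
    smooth = np.zeros_like(log_amp)
    for dy in range(avg_size):
        for dx in range(avg_size):
            smooth += padded[dy:dy + log_amp.shape[0], dx:dx + log_amp.shape[1]]
    smooth /= avg_size ** 2
    residual = log_amp - smooth
    # Recombine the residual amplitude with the original phase.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()
```

On a flat image with a single distinctive point, the saliency map peaks exactly at that point, which makes for a quick correctness check.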

  4. The indexed time table approach for planning and acting

    NASA Technical Reports Server (NTRS)

    Ghallab, Malik; Alaoui, Amine Mounir

    1989-01-01

    A representation is discussed of symbolic temporal relations, called IxTeT, that is both powerful enough at the reasoning level for tasks such as plan generation, refinement and modification, and efficient enough for dealing with real time constraints in action monitoring and reactive planning. Such representation for dealing with time is needed in a teleoperated space robot. After a brief survey of known approaches, the proposed representation shows its computational efficiency for managing a large data base of temporal relations. Reactive planning with IxTeT is described and exemplified through the problem of mission planning and modification for a simple surveying satellite.

  5. Shear velocity criterion for incipient motion of sediment

    USGS Publications Warehouse

    Simoes, Francisco J.

    2014-01-01

    The prediction of incipient motion has had great importance to the theory of sediment transport. The most commonly used methods are based on the concept of critical shear stress and employ an approach similar, or identical, to the Shields diagram. An alternative method that uses the movability number, defined as the ratio of the shear velocity to the particle’s settling velocity, was employed in this study. A large amount of experimental data were used to develop an empirical incipient motion criterion based on the movability number. It is shown that this approach can provide a simple and accurate method of computing the threshold condition for sediment motion.
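    A small sketch of the quantity involved: the movability number is the ratio of the shear velocity to the particle's settling velocity. The settling-velocity formula used below is the Ferguson-Church approximation, chosen here purely for illustration; the study's empirical threshold curve itself is not reproduced, and the fluid/sediment constants are typical values for quartz sand in water.

```python
import numpy as np

def settling_velocity(d, nu=1.0e-6, g=9.81, rho_s=2650.0, rho=1000.0):
    # Ferguson-Church (2004) settling velocity for a grain of diameter d [m],
    # valid across the Stokes (fine) and turbulent-drag (coarse) regimes.
    R = (rho_s - rho) / rho            # submerged specific gravity
    C1, C2 = 18.0, 1.0                 # constants for natural grains
    return R * g * d**2 / (C1 * nu + np.sqrt(0.75 * C2 * R * g * d**3))

def movability_number(u_star, d, **kw):
    # Movability number: shear velocity over settling velocity. Incipient
    # motion is predicted when it exceeds an empirical threshold curve.
    return u_star / settling_velocity(d, **kw)
```

For a 0.1 mm grain the formula gives a settling velocity near 0.008 m/s and for a 1 mm grain roughly 0.13 m/s, in line with standard tables, so the ratio is straightforward to evaluate once the shear velocity is known.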

  6. Canonical quantization of general relativity in discrete space-times.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2003-01-17

    It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang.

  7. Aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to preliminary configuration design level of effort. Potential theory was examined in detail to meet this objective. Numerical pilot codes were developed for relatively simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one.

  8. A Model-Driven Approach for Telecommunications Network Services Definition

    NASA Astrophysics Data System (ADS)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    Present day Telecommunications market imposes a short concept-to-market time for service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific for service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstractions Layers (NALs). In the future, we will investigate approaches to ensure the support for collaborative work and for checking properties on models.

  9. Constraint Programming to Solve Maximal Density Still Life

    NASA Astrophysics Data System (ADS)

    Chu, Geoffrey; Petrie, Karen Elizabeth; Yorke-Smith, Neil

    The Maximum Density Still Life problem fills a finite Game of Life board with a stable pattern of cells that has as many live cells as possible. Although simple to state, this problem is computationally challenging for any but the smallest sizes of board. Especially difficult is to prove that the maximum number of live cells has been found. Various approaches have been employed. The most successful are approaches based on Constraint Programming (CP). We describe the Maximum Density Still Life problem, introduce the concept of constraint programming, give an overview on how the problem can be modelled and solved with CP, and report on best-known results for the problem.
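    The stability condition is easy to state in code: a still life is a board that the Game of Life update rule maps to itself (with cells outside the board permanently dead, as in the bounded problem). A minimal checker:

```python
import numpy as np

def life_step(board):
    # One Game of Life generation on a finite board with dead borders.
    padded = np.pad(board, 1)
    nbrs = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))[1:-1, 1:-1]
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((nbrs == 3) | (board.astype(bool) & (nbrs == 2))).astype(int)

def is_still_life(board):
    # A still life is a fixed point of the Life update rule.
    return np.array_equal(life_step(board), board)
```

A CP solver for the problem searches over boards maximizing the number of live cells subject to this fixed-point constraint; the checker above is only the feasibility test, not the search.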

  10. A computational approach to predicting ligand selectivity for the size-based separation of trivalent lanthanides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanov, Alexander S.; Bryantsev, Vyacheslav S.

    An accurate description of solvation effects for trivalent lanthanide ions is a main stumbling block to the qualitative prediction of selectivity trends along the lanthanide series. In this work, we propose a simple model to describe the differential effect of solvation in the competitive binding of a ligand by lanthanide ions by including weakly co-ordinated counterions in the complexes of more than a +1 charge. The success of the approach to quantitatively reproduce selectivities obtained from aqueous phase complexation studies demonstrates its potential for the design and screening of new ligands for efficient size-based separation.

  11. A computational approach to predicting ligand selectivity for the size-based separation of trivalent lanthanides

    DOE PAGES

    Ivanov, Alexander S.; Bryantsev, Vyacheslav S.

    2016-06-20

    An accurate description of solvation effects for trivalent lanthanide ions is a main stumbling block to the qualitative prediction of selectivity trends along the lanthanide series. In this work, we propose a simple model to describe the differential effect of solvation in the competitive binding of a ligand by lanthanide ions by including weakly co-ordinated counterions in the complexes of more than a +1 charge. The success of the approach to quantitatively reproduce selectivities obtained from aqueous phase complexation studies demonstrates its potential for the design and screening of new ligands for efficient size-based separation.

  12. Simple and exact approach to the electronic polarization effect on the solvation free energy: formulation for quantum-mechanical/molecular-mechanical system and its applications to aqueous solutions.

    PubMed

    Takahashi, Hideaki; Omi, Atsushi; Morita, Akihiro; Matubayasi, Nobuyuki

    2012-06-07

    We present a simple and exact numerical approach to compute the free energy contribution δμ in solvation due to the electron density polarization and fluctuation of a quantum-mechanical solute in the quantum-mechanical/molecular-mechanical (QM/MM) simulation combined with the theory of the energy representation (QM/MM-ER). Since the electron density fluctuation is responsible for the many-body QM-MM interactions, the standard version of the energy representation method cannot be applied directly. Instead of decomposing the QM-MM polarization energy into the pairwise additive and non-additive contributions, we take sum of the polarization energies in the QM-MM interaction and adopt it as a new energy coordinate for the method of energy representation. Then, it is demonstrated that the free energy δμ can be exactly formulated in terms of the energy distribution functions for the solution and reference systems with respect to this energy coordinate. The benchmark tests were performed to examine the numerical efficiency of the method with respect to the changes in the individual properties of the solvent and the solute. Explicitly, we computed the solvation free energy of a QM water molecule in ambient and supercritical water, and also the free-energy change associated with the isomerization reaction of glycine from neutral to zwitterionic structure in aqueous solution. In all the systems examined, it was demonstrated that the computed free energy δμ agrees with the experimental value, irrespective of the choice of the reference electron density of the QM solute. The present method was also applied to a prototype reaction of adenosine 5'-triphosphate hydrolysis where the effect of the electron density fluctuation is substantial due to the excess charge. It was demonstrated that the experimental free energy of the reaction has been accurately reproduced with the present approach.

  13. Gray-box reservoir routing to compute flow propagation in operational forecasting and decision support systems

    NASA Astrophysics Data System (ADS)

    Russano, Euan; Schwanenberg, Dirk; Alvarado Montero, Rodolfo

    2017-04-01

    Operational forecasting and decision support systems for flood mitigation and the daily management of water resources require computationally efficient flow routing models. If backwater effects do not play an important role, a hydrological routing approach is often a pragmatic choice. It offers reasonable accuracy at low computational cost in comparison to a more detailed hydraulic model. This work presents a nonlinear reservoir routing scheme as well as its implementation for the flow propagation between the hydro reservoir Três Marias and the downstream inundation-affected city of Pirapora in Brazil. We refer to the model as a gray-box approach because the parameter k of each reservoir in the cascade is identified by a data-driven approach instead of being estimated from physical characteristics. The model reproduces the discharge at the Pirapora gauge using 15 reservoirs in the cascade. The results are compared with those of the full-hydrodynamic model SOBEK. The gray-box model shows relatively good performance for the validation period, with an RMSE of 139.48 against 136.67 for the full-hydrodynamic model. Simulating a period of several years took approximately 64 s with the full-hydrodynamic model, while the gray-box model required only about 0.50 s. This significant speedup at only a small cost in accuracy points to the potential of the simple approach in time-critical, operational applications. Keywords: flow routing, reservoir routing, gray-box model
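    A hedged sketch of a reservoir-cascade routing scheme of the general kind described: each conceptual reservoir obeys dS/dt = I - Q with outflow Q = k·S^m, and the outflow of one reservoir feeds the next. The exponent, time step, and k below are illustrative; the authors' exact scheme and the fitted parameter for the Três Marias-Pirapora reach are not reproduced.

```python
import numpy as np

def route_cascade(inflow, k, n_reservoirs=15, dt=3600.0, m=1.0):
    # Cascade of conceptual reservoirs, integrated with explicit Euler.
    # m = 1 gives the classic linear reservoir; k is the data-calibrated
    # recession constant (the "gray-box" part of the model).
    storage = np.zeros(n_reservoirs)
    outflow = np.empty(len(inflow), dtype=float)
    for t, q_in in enumerate(inflow):
        q = q_in
        for i in range(n_reservoirs):
            storage[i] += dt * (q - k * storage[i] ** m)  # dS = (I - Q) dt
            q = k * storage[i] ** m                        # feeds next stage
        outflow[t] = q
    return outflow
```

The scheme conserves mass at steady state (a constant inflow is eventually passed through unchanged) and attenuates and delays a flood pulse, which is the qualitative behaviour routing models are expected to show.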

  14. An Impulse Based Substructuring approach for impact analysis and load case simulations

    NASA Astrophysics Data System (ADS)

    Rixen, Daniel J.; van der Valk, Paul L. C.

    2013-12-01

    In the present paper we outline the basic theory of assembling substructures for which the dynamics are described as Impulse Response Functions. The assembly procedure computes the time response of a system by evaluating per substructure the convolution product between the Impulse Response Functions and the applied forces, including the interface forces that are computed to satisfy the interface compatibility. We call this approach the Impulse Based Substructuring method since it transposes to the time domain the Frequency Based Substructuring approach. In the Impulse Based Substructuring technique the Impulse Response Functions of the substructures can be gathered either from experimental tests using a hammer impact or from time-integration of numerical submodels. In this paper the implementation of the method is outlined for the case when the impulse responses of the substructures are computed numerically. A simple bar example is shown in order to illustrate the concept. The Impulse Based Substructuring allows fast evaluation of impact response of a structure when the impulse response of its components is known. It can thus be used to efficiently optimize designs of consumer products by including impact behavior at the early stage of the design, but also for performing substructured simulations of complex structures such as offshore wind turbines.

  15. Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.

    PubMed

    Janiga, Gábor

    2014-04-01

    This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA using computational fluid dynamics (CFD). It was in particular shown that a proper modeling of the transitional flow features is particularly challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion. The resulting flow cannot be well predicted with simple modeling approaches. Due to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional and to weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by Particle Image Velocimetry (PIV) and an excellent agreement can be observed considering the temporally averaged flow velocities. Different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Tchebichef moment transform on image dithering for mobile applications

    NASA Astrophysics Data System (ADS)

    Ernawan, Ferda; Abu, Nur Azman; Rahmalan, Hidayah

    2012-04-01

    Currently, mobile image applications spend a great deal of computation to display images. A true-color raw image contains millions of colors and consumes high computational power in most mobile image applications. At the same time, mobile devices are equipped with only limited processing power and minimal storage space. Image dithering is a popular technique for reducing the number of bits per pixel at the expense of lower-quality image display. This paper proposes a novel approach to image dithering using the 2x2 Tchebichef moment transform (TMT). TMT provides a simple mathematical framework based on matrices, with coefficients consisting of real rational numbers. Image dithering based on TMT has the potential to provide better efficiency and simplicity. Preliminary experiments show promising results in terms of reconstruction error and image visual texture.

  17. COSP - A computer model of cyclic oxidation

    NASA Technical Reports Server (NTRS)

    Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.

    1991-01-01

    A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
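    A minimal uniform-spalling model in the spirit of COSP, with illustrative constants rather than fitted alloy data: the retained scale grows parabolically during each heating segment, a fixed fraction spalls on cooling, and the specimen weight change is the oxygen retained in the scale minus the metal lost in spalled oxide. This reproduces the characteristic early weight gain followed by steady weight loss.

```python
import math

def cyclic_oxidation(n_cycles, kp=1e-2, q=0.02, dt=1.0, f_metal=0.53, f_ox=0.47):
    # Uniform-spalling cyclic oxidation sketch (constants are illustrative).
    # Heating: retained scale mass w_r grows parabolically, w_r^2 += kp*dt.
    # Cooling: fraction q of the scale spalls; its metal content is lost.
    w_r = 0.0           # retained scale mass per unit area
    metal_lost = 0.0    # cumulative metal carried away in spalled scale
    history = []
    for _ in range(n_cycles):
        w_r = math.sqrt(w_r * w_r + kp * dt)   # parabolic growth segment
        spall = q * w_r
        w_r -= spall
        metal_lost += f_metal * spall
        # Specimen weight change = retained oxygen - metal lost to spall.
        history.append(f_ox * w_r - metal_lost)
    return history
```

The balance between growth and spalling drives the retained scale toward a steady thickness, after which the weight-change curve declines linearly, mirroring the kinetics the model is built to predict.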

  18. A Bayesian Attractor Model for Perceptual Decision Making

    PubMed Central

    Bitzer, Sebastian; Bruineberg, Jelle; Kiebel, Stefan J.

    2015-01-01

    Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel approach of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks. PMID:26267143

  19. Estimating the Diffusion Coefficients of Sugars Using Diffusion Experiments in Agar-Gel and Computer Simulations.

    PubMed

    Miyamoto, Shuichi; Atsuyama, Kenji; Ekino, Keisuke; Shin, Takashi

    2018-01-01

    The isolation of useful microbes is one of the traditional approaches to lead generation in drug discovery. As an effective technique for microbe isolation, we recently developed a multidimensional diffusion-based gradient culture system for microbes. To enhance the utility of the system, it is helpful to know beforehand the diffusion coefficients of nutrients, such as sugars, in the culture medium. We have therefore built a simple and convenient experimental system that uses agar-gel to observe diffusion. Next, we performed computer simulations, based on random-walk concepts, of the experimental diffusion system and derived correlation formulas that relate observable diffusion data to diffusion coefficients. Finally, we applied these correlation formulas to our experimentally determined diffusion data to estimate the diffusion coefficients of sugars. Our values for these coefficients agree reasonably well with values published in the literature. The effectiveness of our simple technique, which has elucidated the diffusion coefficients of some molecules that are rarely reported (e.g., galactose, trehalose, and glycerol), is demonstrated by the strong correspondence between the literature values and those obtained in our experiments.
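    The random-walk idea behind such simulations can be illustrated directly: for an unbiased 1-D walk with step length s every interval dt, the Einstein relation <x²> = 2Dt recovers the diffusion coefficient D = s²/(2·dt). The walker counts and step size below are illustrative, not the paper's simulation parameters.

```python
import numpy as np

def estimate_diffusion_coefficient(n_walkers=5_000, n_steps=1_000,
                                   step=1.0e-5, dt=1.0, seed=0):
    # Unbiased 1-D random walk: each walker hops +/-step every dt.
    # The Einstein relation in 1-D gives <x^2> = 2 D t, so D = <x^2>/(2 t).
    rng = np.random.default_rng(seed)
    hops = rng.choice((-step, step), size=(n_steps, n_walkers))
    x = hops.sum(axis=0)              # final displacement of each walker
    t = n_steps * dt
    return np.mean(x * x) / (2.0 * t)
```

For this walk the exact value is D = step²/(2·dt) = 5e-11 m²/s, so the estimator's few-percent sampling error is easy to see by comparing against the closed form.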

  20. A simple algorithm to improve the performance of the WENO scheme on non-uniform grids

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong

    2018-02-01

    This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS (WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1995) scheme designed on uniform grids in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. In the meantime, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for the non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
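    For reference, the uniform-grid fifth-order WENO-JS building block that the paper reformulates can be sketched compactly: three third-order candidate reconstructions are blended with nonlinear weights driven by smoothness indicators. The non-uniform-grid interpolation step of the proposed scheme is not reproduced here.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    # Fifth-order WENO-JS reconstruction of the left-biased interface value
    # v_{i+1/2} from uniform-grid cell averages v = [v_{i-2}, ..., v_{i+2}].
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate third-order reconstructions on the three substencils.
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Smoothness indicators (Jiang & Shu).
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights: ideal weights (1/10, 6/10, 3/10) biased away
    # from substencils that contain a discontinuity.
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

On smooth data the weights approach the ideal values and the stencil reproduces polynomials exactly, e.g. constant and linear cell-average data are reconstructed without error.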

  1. Raney Distributions and Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

    Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
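    The moment sequences involved are easy to compute: the Raney numbers R_{p,r}(n) = (r/(np+r))·C(np+r, n) generalize the Fuss-Catalan numbers (the r = 1 case), which in turn reduce to the ordinary Catalan numbers for p = 2. A one-line sketch:

```python
from math import comb

def raney(p, r, n):
    # Raney number R_{p,r}(n) = r/(n*p + r) * C(n*p + r, n).
    # r = 1 gives the Fuss-Catalan numbers; (p, r) = (2, 1) the Catalan
    # numbers. The quotient is always an exact integer.
    return r * comb(n * p + r, n) // (n * p + r)
```

These integers are exactly the moments of the corresponding Raney (and, for r = 1, Fuss-Catalan) distributions discussed in the abstract.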

  2. Optical Eigenvector.

    DTIC Science & Technology

    1984-10-01

    At the beginning of this contract, both we and the rest of the optical community imagined that simple analog optical computers could produce satisfactory solutions to eigenproblems. Early in this contract we improved optical computing...

  3. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    PubMed

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  4. Using evolutionary computations to understand the design and evolution of gene and cell regulatory networks.

    PubMed

    Spirov, Alexander; Holloway, David

    2013-07-15

    This paper surveys modeling approaches for studying the evolution of gene regulatory networks (GRNs). Modeling of the design or 'wiring' of GRNs has become increasingly common in developmental and medical biology, as a means of quantifying gene-gene interactions, the response to perturbations, and the overall dynamic motifs of networks. Drawing from developments in GRN 'design' modeling, a number of groups are now using simulations to study how GRNs evolve, both for comparative genomics and to uncover general principles of evolutionary processes. Such work can generally be termed evolution in silico. Complementary to these biologically-focused approaches, a now well-established field of computer science is Evolutionary Computations (ECs), in which highly efficient optimization techniques are inspired from evolutionary principles. In surveying biological simulation approaches, we discuss the considerations that must be taken with respect to: (a) the precision and completeness of the data (e.g. are the simulations for very close matches to anatomical data, or are they for more general exploration of evolutionary principles); (b) the level of detail to model (we proceed from 'coarse-grained' evolution of simple gene-gene interactions to 'fine-grained' evolution at the DNA sequence level); (c) to what degree is it important to include the genome's cellular context; and (d) the efficiency of computation. With respect to the latter, we argue that developments in computer science EC offer the means to perform more complete simulation searches, and will lead to more comprehensive biological predictions. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. On learning navigation behaviors for small mobile robots with reservoir computing architectures.

    PubMed

    Antonelo, Eric Aislan; Schrauwen, Benjamin

    2015-04-01

    This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.
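The RC training scheme described here (fixed random recurrent reservoir, trained linear readout) can be sketched in a few lines. The toy task of recalling the previous input stands in for the sensory-motor sequences; reservoir size, input scaling, and spectral radius are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random reservoir: only the linear readout W_out is trained.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W_in[:, 0] * ut + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy stand-in for a sensory-motor sequence: recall the previous input.
u = rng.uniform(-1.0, 1.0, 300)
target = np.roll(u, 1)
X = run_reservoir(u)[50:]          # discard washout transient
y = target[50:]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)   # train readout only
mse = np.mean((X @ W_out - y) ** 2)
```

Because only the readout is solved by least squares, adding further behaviors amounts to training further readout vectors over the same reservoir states.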

  6. Automated surface inspection for steel products using computer vision approach.

    PubMed

    Xi, Jiaqi; Shentu, Lifeng; Hu, Jikang; Li, Mian

    2017-01-10

    Surface inspection is a critical step in ensuring product quality in the steel-making industry. In order to relieve inspectors of laborious work and improve the consistency of inspection, much effort has been dedicated to automated inspection using computer vision approaches over the past decades. However, due to non-uniform illumination conditions and the similarity between surface textures and defects, present methods are usually applicable only to very specific cases. In this paper, a new framework for surface inspection is proposed to overcome these limitations. By investigating the image formation process, a quantitative model characterizing the impact of illumination on image quality is developed, based on which the non-uniform brightness in the image can be effectively removed. A simple classifier is then designed to identify defects among the surface textures. The significance of this approach lies in its robustness to illumination changes and its wide applicability to different inspection scenarios. The proposed approach has been successfully applied to the real-time surface inspection of round billets in real manufacturing. Implemented on a conventional industrial PC, the algorithm runs at 12.5 frames per second with a successful detection rate of over 90% for turned and skinned billets.
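The two stages of the framework (illumination flattening followed by a simple classifier) can be mimicked on a synthetic image. The box-filter background estimate and threshold rule below are stand-ins for the paper's actual quantitative illumination model; the image, kernel size, and threshold are illustrative.

```python
import numpy as np

def box_blur(img, k):
    """Local-mean background estimate via an integral image (k x k box)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge').astype(float)
    s = p.cumsum(0).cumsum(1)
    s = np.pad(s, ((1, 0), (1, 0)))            # zero row/col for the recurrence
    h, w = img.shape
    return (s[k:k+h, k:k+w] - s[:h, k:k+w] - s[k:k+h, :w] + s[:h, :w]) / (k * k)

def detect_defects(img, k=15, thresh=0.2):
    background = box_blur(img, k)     # slowly varying illumination estimate
    residual = img - background       # flatten the brightness field
    return np.abs(residual) > thresh  # simple threshold classifier

# Synthetic billet image: smooth illumination gradient plus one dark defect.
img = 0.5 + 0.3 * np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img[30:34, 40:44] -= 0.5
mask = detect_defects(img)
```

Because the linear brightness ramp is absorbed into the background estimate, only the defect pixels survive the threshold.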

  7. Probabilistic representation of gene regulatory networks.

    PubMed

    Mao, Linyong; Resat, Haluk

    2004-09-22

    Recent experiments have established unambiguously that biological systems can have significant cell-to-cell variations in gene expression levels even in isogenic populations. Computational approaches to studying gene expression in cellular systems should capture such biological variations for a more realistic representation. In this paper, we present a new fully probabilistic approach to the modeling of gene regulatory networks that allows for fluctuations in the gene expression levels. The new algorithm uses a very simple representation for the genes, and accounts for the repression or induction of the genes and for the biological variations among isogenic populations simultaneously. Because of its simplicity, the introduced algorithm is a very promising approach to modeling large-scale gene regulatory networks. We have tested the new algorithm on a recently bioengineered synthetic gene network library. The good agreement between the computed and the experimental results for this library of networks, and additional tests, demonstrate that the new algorithm is robust and very successful in explaining the experimental data. The simulation software is available upon request. Supplementary material will be made available on the OUP server.
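A minimal sketch of the idea (not the authors' algorithm): represent each gene by a mean expression level governed by a repression term, and model cell-to-cell variation across an isogenic population with multiplicative log-normal noise. The noise model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def express(basal, repressor, K, n_cells=10000, noise=0.3):
    """Sample expression across an isogenic population: the mean follows a
    simple repression term, and cell-to-cell variability is multiplicative
    log-normal noise (an assumed noise model, for illustration)."""
    mean = basal / (1.0 + repressor / K)
    return mean * rng.lognormal(0.0, noise, n_cells)

unrepressed = express(100.0, repressor=0.0, K=1.0)
repressed = express(100.0, repressor=9.0, K=1.0)
```

The sampled populations show both the deterministic repression effect (a roughly tenfold drop in mean) and the spread among genetically identical cells.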

  8. Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.

  9. Assessment of the magnetic field exposure due to the battery current of digital mobile phones.

    PubMed

    Jokela, Kari; Puranen, Lauri; Sihvonen, Ari-Pekka

    2004-01-01

    Hand-held digital mobile phones generate pulsed magnetic fields associated with the battery current. The peak value and the waveform of the battery current were measured for seven different models of digital mobile phones, and the results were applied to compute approximately the magnetic flux density and induced currents in the phone-user's head. A simple circular loop model was used for the magnetic field source, and a homogeneous sphere consisting of average brain tissue equivalent material simulated the head. The broadband magnetic flux density and the maximal induced current density were compared with the guidelines of ICNIRP using two different approaches. In the first approach the relative exposure was determined separately at each frequency and the exposure ratios were summed to obtain the total exposure (multiple-frequency rule). In the second approach the waveform was weighted in the time domain with a simple low-pass RC filter and the peak value was divided by a peak limit, both derived from the guidelines (weighted peak approach). With the maximum transmitting power (2 W) the measured peak current varied from 1 to 2.7 A. The ICNIRP exposure ratio based on the current density varied from 0.04 to 0.14 for the weighted peak approach and from 0.08 to 0.27 for the multiple-frequency rule. The latter values are considerably greater than the corresponding exposure ratios of 0.005 (min) to 0.013 (max) obtained by applying the evaluation based on frequency components presented in the new IEEE standard. Hence, the exposure does not seem to exceed the guidelines. The computed peak magnetic flux density substantially exceeded the derived peak reference level of ICNIRP, but it should be noted that in a near-field exposure the external field strengths are not valid indicators of exposure. Currently, no biological data exist to give a reason for concern about the health effects of magnetic field pulses from mobile phones.
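The weighted peak approach can be sketched as follows: the waveform is weighted in the time domain with a first-order RC low-pass filter and the filtered peak is divided by a single peak limit. The cutoff frequency, pulse timing, and limit used below are illustrative placeholders, not the ICNIRP-derived values.

```python
import numpy as np

def weighted_peak_ratio(b, dt, f_cut, peak_limit):
    """Weight the waveform with a first-order RC low-pass filter (cutoff
    f_cut), then compare the weighted peak against a single peak limit."""
    rc = 1.0 / (2.0 * np.pi * f_cut)
    alpha = dt / (rc + dt)
    y = np.zeros_like(b)
    for n in range(1, len(b)):
        y[n] = y[n - 1] + alpha * (b[n] - y[n - 1])   # discrete RC filtering
    return np.max(np.abs(y)) / peak_limit

# Illustrative GSM-like burst train: 577 us pulses every 4.6 ms.
dt = 1e-5
t = np.arange(0.0, 0.05, dt)
b = np.where(t % 4.6e-3 < 5.77e-4, 1.0, 0.0)   # normalized flux density
ratio = weighted_peak_ratio(b, dt, f_cut=800.0, peak_limit=10.0)
```

Because the filter attenuates the fast pulse edges, the weighted peak is well below the raw peak, which is the intended effect of the time-domain weighting.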

  10. Demo of three ways to use a computer to assist in lab

    NASA Technical Reports Server (NTRS)

    Neville, J. P.

    1990-01-01

    The objective is to help the slow learner and students with a language problem, or to challenge the advanced student. Technology has advanced to the point where images generated on a computer can easily be recorded on a VCR and used as a video tutorial. This transfer can be as simple as pointing a video camera at the screen and recording the image. For more clarity and professional results, a board may be inserted into a computer which will convert the signals directly to the TV standard. Using a computer program that generates movies, one can animate various principles which would normally be impossible to show or would require time-lapse photography. For example, you might show the change in shape of grains as a piece of metal is cold worked and then show the recrystallization and grain growth as heat is applied. More imaginative titles and graphics are also possible using this technique. Remedial help may also be offered via computer to those who find a specific concept difficult. A printout of specific data, details of the theory or equipment set-up can be offered. Programs are now available that will help as well as test the student in specific areas, so that a Keller-type approach can be used with each student to ensure each knows the subject before going on to the next topic. A computer can serve as an information source and contain the microstructures, physical data and availability of each material tested in the lab. With this source present, unknowns can be evaluated and various tests simulated to create a simple or complex case study lab assignment.

  11. Analog synthetic biology.

    PubMed

    Sarpeshkar, R

    2014-03-28

    We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog-digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA-protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations.

  12. Analog synthetic biology

    PubMed Central

    Sarpeshkar, R.

    2014-01-01

    We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog–digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA–protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations. PMID:24567476

  13. A simple method for EEG guided transcranial electrical stimulation without models

    NASA Astrophysics Data System (ADS)

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a ‘gold standard’ numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two ad hoc techniques (4) dipole sink-to-sink; and (5) sink to concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. 
Significance. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
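A hypothetical sketch of the simplest such model-free recipe, a voltage-to-current mapping: stimulation currents are taken proportional to the demeaned EEG topography, so they sum to zero (charge conservation) while the total injected current matches a chosen budget. The voltages and current budget below are illustrative.

```python
import numpy as np

def voltage_to_current(v, total_ma=2.0):
    """Map an EEG topography directly to stimulation currents: demean so the
    currents obey charge conservation (sum to zero), then scale so the
    injected (positive) current equals total_ma."""
    i = v - v.mean()
    return i * (total_ma / i[i > 0].sum())

v = np.array([4.0, 1.0, -1.0, -2.0, -2.0])   # hypothetical topography (uV)
i = voltage_to_current(v)
```

The mapping is linear in the measured EEG, which is what makes the recipe applicable to static or time-varying stimulation alike.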

  14. Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Fisher, W.; Yoksas, T.

    2014-12-01

    Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high student expectations. These changes are upending traditional approaches to accessing and using data and software. It is clear that Unidata's products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable, allowing their use in researchers' own cloud-based computing environments.
In this talk, we present a vision for Unidata's future in cloud-enabled data services and discuss our initial efforts to deploy a subset of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of the TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Viewer visualization tool.

  15. Learning Activity Predictors from Sensor Data: Algorithms, Evaluation, and Applications.

    PubMed

    Minor, Bryan; Doppa, Janardhan Rao; Cook, Diane J

    2017-12-01

    Recent progress in Internet of Things (IoT) platforms has allowed us to collect large amounts of sensing data. However, there are significant challenges in converting this large-scale sensing data into decisions for real-world applications. Motivated by applications such as health monitoring, intervention, and home automation, we consider a novel problem called Activity Prediction, where the goal is to predict future activity occurrence times from sensor data. In this paper, we make three main contributions. First, we formulate and solve the activity prediction problem in the framework of imitation learning and reduce it to a simple regression learning problem. This approach allows us to leverage powerful regression learners that can reason about the relational structure of the problem with negligible computational overhead. Second, we present several metrics to evaluate activity predictors in the context of real-world applications. Third, we evaluate our approach using real sensor data collected from 24 smart home testbeds. We also embed the learned predictor into a mobile-device-based activity prompter and evaluate the app for 9 participants living in smart homes. Our results indicate that our activity predictor performs better than the baseline methods, and offers a simple approach for predicting activities from sensor data.
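The imitation-learning reduction amounts to fitting a regression from sensor-derived features to the supervisor's (ground-truth) activity timings. A minimal sketch on synthetic data, with a plain least-squares learner standing in for the more powerful relational regressors used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: sensor features with a linear relation (plus noise)
# to the time until the next activity occurrence.
X = rng.uniform(0.0, 1.0, (500, 4))
true_w = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_w + 5.0 + rng.normal(0.0, 0.1, 500)

# Imitation-learning reduction: regress from features to the supervisor's
# activity timings.
Xb = np.hstack([X, np.ones((500, 1))])        # add a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict_next_activity(features):
    """Predict time until the next activity from a feature vector."""
    return np.append(features, 1.0) @ w
```

Any off-the-shelf regressor can be dropped in place of the least-squares fit, which is the point of the reduction.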

  16. An autonomous molecular computer for logical control of gene expression.

    PubMed

    Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud

    2004-05-27

    Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems. Recently, simple molecular-scale autonomous programmable computers were demonstrated allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for 'logical' control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug.

  17. Multiple neural network approaches to clinical expert systems

    NASA Astrophysics Data System (ADS)

    Stubbs, Derek F.

    1990-08-01

    We briefly review the concept of computer-aided medical diagnosis and more extensively review the existing literature on neural network applications in the field. Neural networks can function as simple expert systems for diagnosis or prognosis. Using a public database we develop a neural network for the diagnosis of a major presenting symptom while discussing the development process and possible approaches. MEDICAL EXPERT SYSTEMS AND COMPUTER-AIDED DIAGNOSIS: Biomedicine is an incredibly diverse and multidisciplinary field and it is not surprising that neural networks, with their many applications, are finding more and more applications in the highly non-linear field of biomedicine. I want to concentrate on neural networks as medical expert systems for clinical diagnosis or prognosis. Expert systems started out as a set of computerized "if-then" rules. Everything was reduced to boolean logic and the promised land of computer experts was said to be in sight. It never came. Why? First, the computer code explodes as the number of "ifs" increases. All the "ifs" have to interact. Second, experts are not very good at reducing expertise to language. It turns out that experts recognize patterns and have non-verbal left-brain intuition decision processes. Third, learning by example rather than learning by rule is the way natural brains work, and making computers work by rule-learning is hideously labor intensive. Neural networks can learn from example. They learn the results

  18. A simple approach to estimate daily loads of total, refractory, and labile organic carbon from their seasonal loads in a watershed.

    PubMed

    Ouyang, Ying; Grace, Johnny M; Zipperer, Wayne C; Hatten, Jeff; Dewey, Janet

    2018-05-22

    Loads of naturally occurring total organic carbon (TOC), refractory organic carbon (ROC), and labile organic carbon (LOC) in streams control the availability of nutrients and the solubility and toxicity of contaminants, and affect biological activities through absorption of light and complexation of metals with production of carcinogenic compounds. Although computer models have become increasingly popular in the understanding and management of TOC, ROC, and LOC loads in streams, the usefulness of these models hinges on the availability of daily data for model calibration and validation. Unfortunately, these daily data are usually insufficient and/or unavailable for most watersheds due to a variety of reasons, such as budget and time constraints. A simple approach was developed here to calculate daily loads of TOC, ROC, and LOC in streams based on their seasonal loads. We concluded that the predictions from our approach adequately match field measurements based on statistical comparisons between model calculations and field measurements. Our approach demonstrates that an increase in stream discharge results in increased stream TOC, ROC, and LOC concentrations and loads, although high peak discharge did not necessarily result in high peaks of TOC, ROC, and LOC concentrations and loads. The approach developed herein is a useful tool to convert seasonal loads of TOC, ROC, and LOC into daily loads in the absence of measured daily load data.
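The paper's exact formulation is not reproduced here, but the simplest flow-weighted disaggregation of this kind splits a seasonal load across days in proportion to daily discharge, which preserves the seasonal total by construction. The discharge series and load value below are hypothetical.

```python
import numpy as np

def seasonal_to_daily(seasonal_load, daily_discharge):
    """Flow-weighted disaggregation: split a seasonal TOC/ROC/LOC load
    across days in proportion to each day's discharge."""
    q = np.asarray(daily_discharge, dtype=float)
    return seasonal_load * q / q.sum()

discharge = np.array([10.0, 12.0, 30.0, 8.0, 10.0])   # hypothetical m^3/s
daily = seasonal_to_daily(700.0, discharge)           # seasonal load, e.g. kg
```

Note that this simple weighting reproduces the observed pattern that higher-discharge days carry higher loads, while the daily values always sum back to the seasonal total.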

  19. SNMP-SI: A Network Management Tool Based on Slow Intelligence System Approach

    NASA Astrophysics Data System (ADS)

    Colace, Francesco; de Santo, Massimo; Ferrandino, Salvatore

    The last decade has witnessed an intense spread of computer networks that has been further accelerated by the introduction of wireless networks. Simultaneously, this growth has significantly increased the problems of network management. Especially in small companies, where there is no provision of personnel assigned to these tasks, the management of such networks is often complex and malfunctions can have significant impacts on their businesses. A possible solution is the adoption of the Simple Network Management Protocol. Simple Network Management Protocol (SNMP) is a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to manage network performance, find and solve network problems, and plan for network growth. SNMP has a big disadvantage: its simple design means that the information it deals with is neither detailed nor well organized enough to deal with expanding modern networking requirements. Over the past years much effort has been devoted to addressing the shortcomings of the Simple Network Management Protocol, and new frameworks have been developed; a promising approach involves the use of ontologies. This is the starting point of this paper, where a novel approach to network management based on the use of Slow Intelligence System methodologies and ontology-based techniques is proposed. Slow Intelligence Systems are general-purpose systems characterized by the ability to improve their performance over time through a process involving enumeration, propagation, adaptation, elimination and concentration. Therefore, the proposed approach aims to develop a system able to acquire, according to the SNMP standard, information from the various hosts in the managed networks and apply solutions in order to solve problems. To check the feasibility of this model, first experimental results in a real scenario are shown.

  20. Topology-changing shape optimization with the genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lamberson, Steven E., Jr.

    The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem, while improving the direct physical relevance of the results. This modification involves changing the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem proves the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which is something traditional methods cannot do. The proposed approach has a significantly higher computational burden, on the order of 100 times that of SIMP, although it is able to offset this with parallel computing.
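The 'masking bits' idea can be illustrated with a toy GA, a hypothetical stand-in far simpler than the ten-bar truss problem: each bit switches a structural member on or off, and a penalty term mimics loss of feasibility when too few members remain. Member weights, population size, and mutation rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
weights = np.array([5.0, 3.0, 8.0, 2.0, 7.0, 4.0])   # member weights

def cost(mask):
    """Toy objective: total weight of the members the masking bits keep,
    with a penalty when fewer than two members remain (a stand-in for
    losing structural feasibility)."""
    active = weights[mask.astype(bool)]
    penalty = 100.0 if active.size < 2 else 0.0
    return active.sum() + penalty

def ga(pop_size=20, gens=40, p_mut=0.1):
    """Elitist GA over masking-bit chromosomes (topology on/off switches)."""
    pop = rng.integers(0, 2, (pop_size, len(weights)))
    for _ in range(gens):
        fit = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fit)[:pop_size // 2]]   # truncation selection
        children = parents.copy()
        flips = rng.random(children.shape) < p_mut       # mutate masking bits
        children = np.where(flips, 1 - children, children)
        pop = np.vstack([parents, children])
    fit = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

best, best_cost = ga()
```

In a real problem each unmasked member would also carry continuous sizing variables, which is what makes the full formulation mixed-discrete.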

  1. A Direct and Non-Singular UKF Approach Using Euler Angle Kinematics for Integrated Navigation Systems

    PubMed Central

    Ran, Changyan; Cheng, Xianghong

    2016-01-01

    This paper presents a direct and non-singular approach based on an unscented Kalman filter (UKF) for the integration of strapdown inertial navigation systems (SINSs) with the aid of velocity. The state vector includes velocity and Euler angles, and the system model contains Euler angle kinematics equations. The measured velocity in the body frame is used as the filter measurement. The quaternion nonlinear equality constraint is eliminated, and the cross-noise problem is overcome. The filter model is simple and easy to apply without linearization. Data fusion is performed by an UKF, which directly estimates and outputs the navigation information. There is no need to process navigation computation and error correction separately because the navigation computation is completed synchronously during the filter time updating. In addition, the singularities are avoided with the help of the dual-Euler method. The performance of the proposed approach is verified by road test data from a land vehicle equipped with an odometer-aided SINS, and a singularity turntable test is conducted using three-axis turntable test data. The results show that the proposed approach can achieve higher navigation accuracy than the commonly-used indirect approach, and the singularities can be efficiently removed as a result of the dual-Euler method. PMID:27598169

  2. On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs

    NASA Technical Reports Server (NTRS)

    Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
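The first, Markov jump-linear test reduces to a spectral-radius condition on a lifted second-moment operator. A sketch of that standard check (a Costa-Fragoso-type test, with illustrative scalar modes standing in for the recoverable computer control model):

```python
import numpy as np

def ms_stable(A_list, P):
    """Mean-square stability of x_{k+1} = A_{theta_k} x_k, where theta_k is a
    Markov chain with transition matrix P (P[i, j] = Prob(i -> j)).
    The system is MS-stable iff the spectral radius of the lifted
    second-moment operator M is below one."""
    N = len(A_list)
    n = A_list[0].shape[0]
    M = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        Ai = np.kron(A_list[i], A_list[i])   # evolves vec(E[x x^T]) in mode i
        for j in range(N):
            # block (j, i): moment evolves under mode i, then jumps to mode j
            M[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = P[i, j] * Ai
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

# Two scalar modes: one contracting (0.5), one expanding (1.2).
A = [np.array([[0.5]]), np.array([[1.2]])]
P_mostly_stable = np.array([[0.9, 0.1], [0.9, 0.1]])     # favors mode 0
P_mostly_unstable = np.array([[0.1, 0.9], [0.1, 0.9]])   # favors mode 1
```

The same system can be MS-stable or unstable depending only on the mode-occupancy statistics, which is exactly what the two tests in the paper quantify.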

  3. Perspectives on jet noise

    NASA Technical Reports Server (NTRS)

    Ribner, H. S.

    1981-01-01

    Jet noise is a byproduct of turbulence. Until recently turbulence was assumed to be known statistically, and jet noise was computed therefrom. As a result of new findings though on the behavior of vortices and instability waves, a more integrated view of the problem has been accepted lately. After presenting a simple view of jet noise, the paper attempts to resolve the apparent differences between Lighthill's and Lilley's interpretations of mean-flow shear, and examines a number of ad hoc approaches to jet noise suppression.

  4. The application of finite volume methods for modelling three-dimensional incompressible flow on an unstructured mesh

    NASA Astrophysics Data System (ADS)

    Lonsdale, R. D.; Webster, R.

    This paper demonstrates the application of a simple finite volume approach to a finite element mesh, combining the economy of the former with the geometrical flexibility of the latter. The procedure is used to model a three-dimensional flow on a mesh of linear eight-node bricks (hexahedra). Simulations are performed for a wide range of flow problems, some in excess of 94,000 nodes. The resulting computer code, ASTEC, which incorporates these procedures, is described.

  5. Graph-theoretic strengths of contextuality

    NASA Astrophysics Data System (ADS)

    de Silva, Nadish

    2017-03-01

    Cabello-Severini-Winter and Abramsky-Hardy (building on the framework of Abramsky-Brandenburger) both provide classes of Bell and contextuality inequalities for very general experimental scenarios using vastly different mathematical techniques. We review both approaches, carefully detail the links between them, and give simple, graph-theoretic methods for finding inequality-free proofs of nonlocality and contextuality and for finding states exhibiting strong nonlocality and/or contextuality. Finally, we apply these methods to concrete examples in stabilizer quantum mechanics relevant to understanding contextuality as a resource in quantum computation.

  6. Development of Computer-Based Experiment Set on Simple Harmonic Motion of Mass on Springs

    ERIC Educational Resources Information Center

    Musik, Panjit

    2017-01-01

    The development of a computer-based experiment set has become necessary for teaching physics in schools so that students can learn from real experiences. The purpose of this study is to create and develop a computer-based experiment set on the simple harmonic motion of a mass on springs for teaching and learning physics. The average period of…
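The relation such an experiment set measures can be sketched in a few lines. A minimal illustration using the textbook formula T = 2π√(m/k) for a mass on a spring (the function name and example values are assumptions, not from the study):

```python
import math

def shm_period(mass_kg, spring_constant):
    """Period of simple harmonic motion of a mass on a spring:
    T = 2 * pi * sqrt(m / k)."""
    return 2.0 * math.pi * math.sqrt(mass_kg / spring_constant)

# example: a 0.5 kg mass on a 20 N/m spring
period = shm_period(0.5, 20.0)
```

Quadrupling the mass doubles the period, the kind of dependence a computer-based experiment set lets students verify against measured data.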

  7. An efficient hybrid technique in RCS predictions of complex targets at high frequencies

    NASA Astrophysics Data System (ADS)

    Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe

    2017-09-01

    Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but they are not applicable to the computation of the RCS of all surfaces of a complex object, owing to caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, exploiting the advantages and avoiding the disadvantages of each. The new combination yields a very efficient and accurate method for analyzing the RCS of complex structures at high frequencies. The proposed method has been validated by comparing RCS results obtained for some simple cases using the proposed approach with results from the rigorous Method of Moments (MoM). Some complex cases have been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for computing the RCS of very large and complex targets at high frequencies.

  8. QM/MM free energy simulations: recent progress and challenges

    PubMed Central

    Lu, Xiya; Fang, Dong; Ito, Shingo; Okamoto, Yuko; Ovchinnikov, Victor

    2016-01-01

    Due to the higher computational cost relative to pure molecular mechanical (MM) simulations, hybrid quantum mechanical/molecular mechanical (QM/MM) free energy simulations particularly require a careful consideration of balancing computational cost and accuracy. Here we review several recent developments in free energy methods most relevant to QM/MM simulations and discuss several topics motivated by these developments using simple but informative examples that involve processes in water. For chemical reactions, we highlight the value of invoking enhanced sampling techniques (e.g., replica exchange) in umbrella sampling calculations and the value of including collective environmental variables (e.g., hydration level) in metadynamics simulations; we also illustrate the sensitivity of string calculations, especially the free energy along the path, to various parameters in the computation. Alchemical free energy simulations with a specific thermodynamic cycle are used to probe the effect of including the first solvation shell in the QM region when computing solvation free energies. For cases where high-level QM/MM potential functions are needed, we analyze two different approaches: the QM/MM-MFEP method of Yang and co-workers and perturbative correction to low-level QM/MM free energy results. For the examples analyzed here, both approaches seem productive, although care needs to be exercised when analyzing the perturbative corrections. PMID:27563170

  9. Too Good to be True? Ideomotor Theory from a Computational Perspective

    PubMed Central

    Herbort, Oliver; Butz, Martin V.

    2012-01-01

    In recent years, Ideomotor Theory has regained widespread attention and sparked the development of a number of theories on goal-directed behavior and learning. However, there are two issues with previous studies’ use of Ideomotor Theory. Although Ideomotor Theory is seen as very general, it is often studied in settings that are considerably more simplistic than most natural situations. Moreover, Ideomotor Theory’s claim that effect anticipations directly trigger actions and that action-effect learning is based on the formation of direct action-effect associations is hard to address empirically. We address these points from a computational perspective. A simple computational model of Ideomotor Theory was tested in tasks with different degrees of complexity. The model evaluation showed that Ideomotor Theory is a computationally feasible approach for understanding efficient action-effect learning for goal-directed behavior if the following preconditions are met: (1) The range of potential actions and effects has to be restricted. (2) Effects have to follow actions within a short time window. (3) Actions have to be simple and may not require sequencing. The first two preconditions also limit human performance and thus support Ideomotor Theory. The last precondition can be circumvented by extending the model with more complex, indirect action generation processes. In conclusion, we suggest that Ideomotor Theory offers a comprehensive framework to understand action-effect learning. However, we also suggest that additional processes may mediate the conversion of effect anticipations into actions in many situations. PMID:23162524
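The abstract's core claim, that effect anticipations trigger actions learned through direct action-effect associations, can be caricatured in a few lines of code. A minimal sketch (the agent, environment, and all names are illustrative inventions, not the authors' model):

```python
import random

class IdeomotorAgent:
    """Minimal action-effect learner: each action is associated with the
    effect it was observed to produce; to reach a goal, the agent triggers
    the action whose anticipated effect matches that goal."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.effect_of = {}  # action -> last observed effect

    def observe(self, action, effect):
        self.effect_of[action] = effect

    def act_for_goal(self, goal):
        for action in self.actions:
            if self.effect_of.get(action) == goal:
                return action
        return random.choice(self.actions)  # no association yet: explore

# toy deterministic environment mapping actions to effects
env = {"press_left": "light_on", "press_right": "tone"}
agent = IdeomotorAgent(env)
for action, effect in env.items():
    agent.observe(action, effect)
```

The model evaluation's preconditions appear immediately in such a sketch: the lookup only works with a small, fixed action set and one-step, immediately observed effects.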

  10. Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways

    NASA Astrophysics Data System (ADS)

    Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia

    2018-06-01

    The aim of the work is to improve the methodology for the dynamic computation of simple beam spans under the impact of high-speed trains. Mathematical simulation using numerical and analytical methods of structural mechanics is employed in the research. The article analyses the parameters of the effect of high-speed trains on simple beam bridge spans and suggests a technique for determining the dynamic coefficient for the live load. The reliability of the proposed methodology is confirmed by numerical simulation of high-speed train passage over spans at different speeds. The proposed algorithm for dynamic computation is based on a connection between the maximum acceleration of the span in the resonant mode of vibration and the main factors of the stress-strain state. The methodology allows determining both maximum and minimum values of the main internal forces in the structure, which makes endurance checks possible. It is noted that the dynamic additions for the components of the stress-strain state (bending moments, transverse forces, and vertical deflections) differ; this necessitates a differentiated approach to evaluating dynamic coefficients when performing design verification for limit states of groups I and II. The practical importance: the methodology for determining dynamic coefficients allows performing the dynamic calculation and determining the main internal forces in simple beam spans without numerical simulation and direct dynamic analysis, which significantly reduces design labour costs.

  11. Tunable, Flexible, and Efficient Optimization of Control Pulses for Practical Qubits

    NASA Astrophysics Data System (ADS)

    Machnes, Shai; Assémat, Elie; Tannor, David; Wilhelm, Frank K.

    2018-04-01

    Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific constraints. Superconducting qubits present the additional requirement that pulses must have simple parameterizations, so they can be further calibrated in the experiment, to compensate for uncertainties in system parameters. Other quantum technologies, such as sensing, require extremely high fidelities. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control technique named gradient optimization of analytic controls (GOAT), which satisfies all the above requirements, unlike previous approaches. To demonstrate GOAT's capabilities, with emphasis on flexibility and ease of subsequent calibration, we optimize fast coherence-limited pulses for two leading superconducting qubit architectures—flux-tunable transmons and fixed-frequency transmons with tunable couplers.

  12. An approach for modeling sediment budgets in supply-limited rivers

    USGS Publications Warehouse

    Wright, Scott A.; Topping, David J.; Rubin, David M.; Melis, Theodore S.

    2010-01-01

    Reliable predictions of sediment transport and river morphology in response to variations in natural and human-induced drivers are necessary for river engineering and management. Because engineering and management applications may span a wide range of space and time scales, a broad spectrum of modeling approaches has been developed, ranging from suspended-sediment "rating curves" to complex three-dimensional morphodynamic models. Suspended sediment rating curves are an attractive approach for evaluating changes in multi-year sediment budgets resulting from changes in flow regimes because they are simple to implement, computationally efficient, and the empirical parameters can be estimated from quantities that are commonly measured in the field (i.e., suspended sediment concentration and water discharge). However, the standard rating curve approach assumes a unique suspended sediment concentration for a given water discharge. This assumption is not valid in rivers where sediment supply varies enough to cause changes in particle size or changes in areal coverage of sediment on the bed; both of these changes cause variations in suspended sediment concentration for a given water discharge. More complex numerical models of hydraulics and morphodynamics have been developed to address such physical changes of the bed. This additional complexity comes at a cost in terms of computations as well as the type and amount of data required for model setup, calibration, and testing. Moreover, application of the resulting sediment-transport models may require observations of bed-sediment boundary conditions that require extensive (and expensive) observations or, alternatively, require the use of an additional model (subject to its own errors) merely to predict the bed-sediment boundary conditions for use by the transport model. In this paper we present a hybrid approach that combines aspects of the rating curve method and the more complex morphodynamic models. 
Our primary objective was to develop an approach complex enough to capture the processes related to sediment supply limitation but simple enough to allow for rapid calculations of multi-year sediment budgets. The approach relies on empirical relations between suspended sediment concentration and discharge but on a particle size specific basis and also tracks and incorporates the particle size distribution of the bed sediment. We have applied this approach to the Colorado River below Glen Canyon Dam (GCD), a reach that is particularly suited to such an approach because it is substantially sediment supply limited such that transport rates are strongly dependent on both water discharge and sediment supply. The results confirm the ability of the approach to simulate the effects of supply limitation, including periods of accumulation and bed fining as well as erosion and bed coarsening, using a very simple formulation. Although more empirical in nature than standard one-dimensional morphodynamic models, this alternative approach is attractive because its simplicity allows for rapid evaluation of multi-year sediment budgets under a range of flow regimes and sediment supply conditions, and also because it requires substantially less data for model setup and use.
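The rating-curve baseline the hybrid approach builds on is a power law C = a·Q^b fitted in log-log space. A minimal sketch of that standard step with made-up numbers (this is the simple method the paper extends, not the authors' particle-size-specific hybrid model):

```python
import math

def fit_rating_curve(discharge, concentration):
    """Fit C = a * Q**b by ordinary least squares in log-log space."""
    x = [math.log(q) for q in discharge]
    y = [math.log(c) for c in concentration]
    n = len(x)
    x_mean, y_mean = sum(x) / n, sum(y) / n
    b = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
         / sum((xi - x_mean) ** 2 for xi in x))
    a = math.exp(y_mean - b * x_mean)
    return a, b

# synthetic data drawn from a known power law (a = 0.5, b = 1.8)
discharge = [50.0, 120.0, 300.0, 800.0, 1500.0]
concentration = [0.5 * q ** 1.8 for q in discharge]
a, b = fit_rating_curve(discharge, concentration)
```

The paper's point is precisely that in a supply-limited river a single (a, b) pair fails: concentration at a given discharge drifts as the bed fines or coarsens, so the empirical relation must be tracked per particle-size class.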

  13. Analyzing C2 Structures and Self-Synchronization with Simple Computational Models

    DTIC Science & Technology

    2011-06-01

    16th ICCRTS, “Collective C2 in Multinational Civil-Military Operations”: Analyzing C2 Structures and Self-Synchronization with Simple Computational Models. The Kuramoto Model, though with some serious limitations, provides a representation of information flow and self-synchronization in an…
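The Kuramoto model mentioned in the record couples N phase oscillators via dθi/dt = ωi + (K/N) Σj sin(θj − θi); the order parameter r ∈ [0, 1] measures how synchronized the population is. A minimal sketch with illustrative parameters (not the report's actual experiments):

```python
import math
import random

def kuramoto_step(theta, omega, coupling, dt):
    """One Euler step of d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    return [t + dt * (w + (coupling / n) * sum(math.sin(tj - t) for tj in theta))
            for t, w in zip(theta, omega)]

def order_parameter(theta):
    """r = |mean of exp(i*theta)|; r -> 1 means full phase synchronization."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

random.seed(0)
n = 20
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
omega = [random.gauss(0.0, 0.1) for _ in range(n)]  # narrow frequency spread
r_initial = order_parameter(theta)
for _ in range(2000):  # integrate to t = 20 with dt = 0.01
    theta = kuramoto_step(theta, omega, coupling=2.0, dt=0.01)
r_final = order_parameter(theta)
```

With coupling well above the spread of natural frequencies the population locks and r approaches 1, which is the sense in which the model stands in for self-synchronizing C2 nodes.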

  14. Determination of nuclear quadrupolar parameters using singularities in field-swept NMR patterns.

    PubMed

    Ichijo, Naoki; Takeda, Kazuyuki; Yamada, Kazuhiko; Takegoshi, K

    2016-10-07

    We propose a simple data-analysis scheme to determine the coupling constant and the asymmetry parameter of nuclear quadrupolar interactions in field-swept nuclear magnetic resonance (NMR) of static powder samples. This approach correlates the quadrupolar parameters with the positions of the singularities, which can readily be found as sharp peaks in the field-swept pattern. Moreover, the parameters can be determined without quantitative acquisition or elaborate calculation of the overall profile of the pattern. Since both experimental and computational efforts are significantly reduced, the approach presented in this work will enhance the power of field-swept NMR for as-yet-unexplored quadrupolar nuclei. We demonstrate this approach on 33S in α-S8 and 35Cl in chloranil. The accuracy of the obtained quadrupolar parameters is also discussed.

  15. Adaptive correlation filter-based video stabilization without accumulative global motion estimation

    NASA Astrophysics Data System (ADS)

    Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil

    2014-12-01

    We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this model, the proposed method does not need accumulative global motion estimation and can recover the original position even after a failure in interframe motion estimation. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallel schemes, we readily implemented it on a commercial field-programmable gate array and on a graphics processing unit board using the compute unified device architecture. Experimental results show that the proposed approach is both fast and robust.

  16. The multi-disciplinary design study: A life cycle cost algorithm

    NASA Technical Reports Server (NTRS)

    Harding, R. R.; Pichi, F. J.

    1988-01-01

    The approach and results of a Life Cycle Cost (LCC) analysis of the Space Station Solar Dynamic Power Subsystem (SDPS) including gimbal pointing and power output performance are documented. The Multi-Discipline Design Tool (MDDT) computer program developed during the 1986 study has been modified to include the design, performance, and cost algorithms for the SDPS as described. As with the Space Station structural and control subsystems, the LCC of the SDPS can be computed within the MDDT program as a function of the engineering design variables. Two simple examples of MDDT's capability to evaluate cost sensitivity and design based on LCC are included. MDDT was designed to accept NASA's IMAT computer program data as input so that IMAT's detailed structural and controls design capability can be assessed with expected system LCC as computed by MDDT. No changes to IMAT were required. Detailed knowledge of IMAT is not required to perform the LCC analyses as the interface with IMAT is noninteractive.

  17. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High-quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, so ultra-high-speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high-speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, synchronization between such high-frequency illumination and the bucket detector needs nanosecond trigger precision, so developing the synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme, building a general mathematical model and introducing a high-precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
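The reconstruction underlying computational ghost imaging is a correlation between bucket-detector readings and the known illumination patterns, G(x) = ⟨B·P(x)⟩ − ⟨B⟩⟨P(x)⟩. A toy 1-D sketch of that baseline (scene, pattern count, and names are illustrative; the paper's contribution is the self-synchronization scheme, not this step):

```python
import random

def ghost_reconstruct(patterns, buckets):
    """Correlation reconstruction: G(x) = <B * P(x)> - <B> * <P(x)>."""
    m = len(patterns)
    npix = len(patterns[0])
    mean_b = sum(buckets) / m
    mean_p = [sum(p[i] for p in patterns) / m for i in range(npix)]
    return [sum(b * p[i] for p, b in zip(patterns, buckets)) / m - mean_b * mean_p[i]
            for i in range(npix)]

# toy 1-D scene imaged with random binary patterns
random.seed(1)
scene = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]
patterns = [[random.randint(0, 1) for _ in scene] for _ in range(5000)]
buckets = [sum(s * p for s, p in zip(scene, pat)) for pat in patterns]
image = ghost_reconstruct(patterns, buckets)
```

Each bucket value is a single scalar, which is why the pattern rate (and hence illumination-detector synchronization) directly sets the acquisition speed.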

  18. A Simple Method for Automated Equilibration Detection in Molecular Simulations.

    PubMed

    Chodera, John D

    2016-04-12

    Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure and demonstrate its utility on typical molecular simulation data.
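The selection rule described above can be sketched directly: for each candidate equilibration time t0, estimate the statistical inefficiency g of the remaining data and keep the t0 that maximizes Neff = (T − t0)/g. A simplified illustration, not the author's reference implementation (the inefficiency estimator here is a crude truncated-autocorrelation sum, and all data are synthetic):

```python
import math
import random

def statistical_inefficiency(x):
    """Crude estimate g = 1 + 2 * (sum of positive-lag autocorrelations),
    truncated at the first non-positive value."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    var = sum(v * v for v in d) / n
    if var == 0.0:
        return 1.0
    g = 1.0
    for lag in range(1, n):
        c = sum(d[i] * d[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if c <= 0.0:
            break
        g += 2.0 * c
    return g

def detect_equilibration(series, step=1):
    """Pick t0 maximizing N_eff = (T - t0) / g(series[t0:])."""
    n = len(series)
    best_t0, best_neff = 0, 0.0
    for t0 in range(0, n - 2, step):
        neff = (n - t0) / statistical_inefficiency(series[t0:])
        if neff > best_neff:
            best_t0, best_neff = t0, neff
    return best_t0, best_neff

# synthetic observable: decaying initial transient plus white noise
random.seed(3)
series = [5.0 * math.exp(-t / 10.0) + random.gauss(0.0, 0.1) for t in range(200)]
t0, neff = detect_equilibration(series, step=5)
```

Discarding the transient shrinks the usable span but sharply reduces the correlation penalty g, so the product (T − t0)/g peaks at a sensible equilibration point without any distributional assumptions.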

  19. A simple method for automated equilibration detection in molecular simulations

    PubMed Central

    Chodera, John D.

    2016-01-01

    Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest, in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure, and demonstrate its utility on typical molecular simulation data. PMID:26771390

  20. LLSURE: local linear SURE-based edge-preserving image filtering.

    PubMed

    Qiu, Tianshuang; Wang, Aiqi; Yu, Nannan; Song, Aimin

    2013-01-01

    In this paper, we propose a novel approach for performing high-quality edge-preserving image filtering. Based on a local linear model and using the principle of Stein's unbiased risk estimate as an estimator for the mean squared error from the noisy image only, we derive a simple explicit image filter which can filter out noise while preserving edges and fine-scale details. Moreover, this filter has a fast and exact linear-time algorithm whose computational complexity is independent of the filtering kernel size; thus, it can be applied to real time image processing tasks. The experimental results demonstrate the effectiveness of the new filter for various computer vision applications, including noise reduction, detail smoothing and enhancement, high dynamic range compression, and flash/no-flash denoising.

  1. The binding domain of the HMGB1 inhibitor carbenoxolone: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Mollica, Luca; Curioni, Alessandro; Andreoni, Wanda; Bianchi, Marco E.; Musco, Giovanna

    2008-05-01

    We present a combined computational and experimental study of the interaction of the Box A of the HMGB1 protein and carbenoxolone, an inhibitor of its pro-inflammatory activity. The computational approach consists of classical molecular dynamics (MD) simulations based on the GROMOS force field with quantum-refined (QRFF) atomic charges for the ligand. Experimental data consist of fluorescence intensities, chemical shift displacements, saturation transfer differences and intermolecular Nuclear Overhauser Enhancement signals. Good agreement is found between observations and the conformation of the ligand-protein complex resulting from QRFF-MD. In contrast, simple docking procedures and MD based on the unrefined force field provide models inconsistent with experiment. The ligand-protein binding is dominated by non-directional interactions.

  2. Detection of interference phase by digital computation of quadrature signals in homodyne laser interferometry.

    PubMed

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-10-19

    We have proposed an approach to interference phase extraction in homodyne laser interferometry. The method employs a series of computational steps to reconstruct the signals for quadrature detection from an interference signal of a non-polarising interferometer sampled by a simple photodetector. The complexity trade-off is the use of a laser beam with frequency modulation capability. The method is analytically derived, and its validity and performance are experimentally verified. It has proven to be a feasible alternative to traditional homodyne detection, since it performs with comparable accuracy, especially where the complexity of the optical setup is a principal issue and modulation of the laser beam is not a heavy burden (e.g., in multi-axis sensors or laser-diode-based systems).
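Once quadrature signals I = cos φ and Q = sin φ are in hand, the interference phase follows from atan2 plus unwrapping of 2π jumps. A minimal sketch of that final step on synthetic signals (the paper's contribution, reconstructing I and Q from a single photodetector via frequency modulation, is not shown here):

```python
import math

def quadrature_phase(i_signal, q_signal):
    """Recover unwrapped interference phase from a quadrature pair
    (I, Q) = (cos(phi), sin(phi)): atan2 per sample, then add/subtract
    2*pi whenever the raw phase jumps by more than pi between samples."""
    unwrapped = []
    offset = 0.0
    prev = 0.0
    for i_v, q_v in zip(i_signal, q_signal):
        p = math.atan2(q_v, i_v)
        if unwrapped:
            d = p - prev
            if d > math.pi:
                offset -= 2.0 * math.pi
            elif d < -math.pi:
                offset += 2.0 * math.pi
        unwrapped.append(p + offset)
        prev = p
    return unwrapped

# simulate a smooth displacement ramp of three fringes (phi: 0 -> 6*pi)
true_phi = [6.0 * math.pi * t / 500.0 for t in range(501)]
i_sig = [math.cos(p) for p in true_phi]
q_sig = [math.sin(p) for p in true_phi]
estimate = quadrature_phase(i_sig, q_sig)
```

In displacement terms, each 2π of unwrapped phase corresponds to one interference fringe, i.e. half a wavelength of optical path change.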

  3. Sound production due to large-scale coherent structures. [and identification of noise mechanisms in turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The sound due to the large-scale (wavelike) structure in an infinite free turbulent shear flow is examined. Specifically, a computational study of a plane shear layer is presented, which accounts, by way of triple decomposition of the flow field variables, for three distinct component scales of motion (mean, wave, turbulent), and from which the sound - due to the large-scale wavelike structure - in the acoustic field can be isolated by a simple phase average. The computational approach has allowed for the identification of a specific noise production mechanism, viz the wave-induced stress, and has indicated the effect of coherent structure amplitude and growth and decay characteristics on noise levels produced in the acoustic far field.

  4. Excited state dynamics of thiophene and bithiophene: new insights into theoretically challenging systems.

    PubMed

    Prlj, Antonio; Curchod, Basile F E; Corminboeuf, Clémence

    2015-06-14

    The computational elucidation and proper description of the ultrafast deactivation mechanisms of simple organic electronic units, such as thiophene and its oligomers, is as challenging as it is contentious. A comprehensive excited state dynamics analysis of these systems utilizing reliable electronic structure approaches is currently lacking, with earlier pictures of the photochemistry of these systems being conceived based upon high-level static computations or lower level dynamic trajectories. Here a detailed surface hopping molecular dynamics of thiophene and bithiophene using the algebraic diagrammatic construction to second order (ADC(2)) method is presented. Our findings illustrate that ring puckering plays an important role in thiophene photochemistry and that the photostability increases when going upon dimerization into bithiophene.

  5. [Computers in biomedical research: I. Analysis of bioelectrical signals].

    PubMed

    Vivaldi, E A; Maldonado, P

    2001-08-01

    A personal computer equipped with an analog-to-digital conversion card is able to input, store, and display signals of biomedical interest. These signals can additionally be submitted to ad hoc software for analysis and diagnosis. Data acquisition is based on sampling a signal at a given rate and amplitude resolution. The automation of signal processing involves syntactic aspects (data transduction, conditioning, and reduction) and semantic aspects (feature extraction to describe and characterize the signal, and diagnostic classification). The analytical approach that underlies computer programming allows for the successful resolution of apparently complex tasks. Two basic principles involved are the definition of simple fundamental functions that are then iterated, and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects elements relevant to the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
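The acquisition step described, sampling at a given rate and amplitude resolution, can be sketched as an ideal quantizer. An illustrative model (the signal, rate, and bit depth below are made-up values, not from the article):

```python
import math

def sample_and_quantize(signal_fn, duration_s, rate_hz, bits, full_scale):
    """Sample signal_fn at rate_hz, then quantize each sample to a signed
    integer code of the given bit depth (ideal mid-tread quantizer)."""
    levels = 2 ** (bits - 1)       # codes span [-levels, levels - 1]
    step = full_scale / levels     # volts per code
    n = int(duration_s * rate_hz)
    codes = []
    for k in range(n):
        v = signal_fn(k / rate_hz)
        code = int(round(v / step))
        codes.append(max(-levels, min(levels - 1, code)))
    return codes, step

# a 50 Hz sine (amplitude 0.8 V) sampled at 1 kHz by a 12-bit, 1 V full-scale converter
codes, step = sample_and_quantize(
    lambda t: 0.8 * math.sin(2.0 * math.pi * 50.0 * t),
    duration_s=0.1, rate_hz=1000, bits=12, full_scale=1.0)
```

The rate bounds the highest frequency that can be represented (Nyquist), while the bit depth bounds the amplitude error to half a quantization step.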

  6. Reconstruction of Sky Illumination Domes from Ground-Based Panoramas

    NASA Astrophysics Data System (ADS)

    Coubard, F.; Lelégard, L.; Brédif, M.; Paparoditis, N.; Briottet, X.

    2012-07-01

    The knowledge of the sky illumination is important for radiometric corrections and for computer graphics applications such as relighting or augmented reality. We propose an approach to compute environment maps, representing the sky radiance, from a set of ground-based images acquired by a panoramic acquisition system, for instance a mobile-mapping system. These images can be affected by important radiometric artifacts, such as bloom or overexposure. A Perez radiance model is estimated with the blue sky pixels of the images, and used to compute additive corrections in order to reduce these radiometric artifacts. The sky pixels are then aggregated in an environment map, which still suffers from discontinuities on stitching edges. The influence of the quality of estimated sky radiance on the simulated light signal is measured quantitatively on a simple synthetic urban scene; in our case, the maximal error for the total sensor radiance is about 10%.
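The Perez all-weather model referred to above gives the relative luminance of a sky element as the product of a gradation term in the zenith angle θ and an indicatrix term in the angle γ to the sun. A minimal sketch (the coefficient values below are illustrative placeholders, not fitted Perez coefficients):

```python
import math

def perez_relative_luminance(theta, gamma, a, b, c, d, e):
    """Perez all-weather relative sky luminance
    f(theta, gamma) = (1 + a*exp(b / cos(theta)))
                      * (1 + c*exp(d * gamma) + e*cos(gamma)**2),
    where theta is the zenith angle of the sky element and gamma its
    angular distance to the sun (both in radians)."""
    gradation = 1.0 + a * math.exp(b / math.cos(theta))
    indicatrix = 1.0 + c * math.exp(d * gamma) + e * math.cos(gamma) ** 2
    return gradation * indicatrix

# illustrative clear-sky-like coefficients (placeholders, not fitted values)
coeffs = dict(a=-1.0, b=-0.32, c=10.0, d=-3.0, e=0.45)
near_horizon = perez_relative_luminance(math.radians(85), math.radians(60), **coeffs)
circumsolar = perez_relative_luminance(math.radians(40), math.radians(5), **coeffs)
```

Fitting the five coefficients to the blue-sky pixels, as the paper does, pins down the whole dome radiance from a partial ground-based view.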

  7. Modeling molecule-plasmon interactions using quantized radiation fields within time-dependent electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nascimento, Daniel R.; DePrince, A. Eugene, E-mail: deprince@chem.fsu.edu

    2015-12-07

    We present a combined cavity quantum electrodynamics/ab initio electronic structure approach for simulating plasmon-molecule interactions in the time domain. The simple Jaynes-Cummings-type model Hamiltonian typically utilized in such simulations is replaced with one in which the molecular component of the coupled system is treated in a fully ab initio way, resulting in a computationally efficient description of general plasmon-molecule interactions. Mutual polarization effects are easily incorporated within a standard ground-state Hartree-Fock computation, and time-dependent simulations carry the same formal computational scaling as real-time time-dependent Hartree-Fock theory. As a proof of principle, we apply this generalized method to the emergence of a Fano-like resonance in coupled molecule-plasmon systems; this feature is quite sensitive to the nanoparticle-molecule separation and the orientation of the molecule relative to the polarization of the external electric field.

  8. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  9. High order parallel numerical schemes for solving incompressible flows

    NASA Technical Reports Server (NTRS)

    Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.

    1992-01-01

    The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.

  10. Computational Relativistic Astrophysics Using the Flow Field-Dependent Variation Theory

    NASA Technical Reports Server (NTRS)

    Richardson, G. A.; Chung, T. J.

    2002-01-01

    We present our method for solving general relativistic nonideal hydrodynamics. Relativistic effects become pronounced in such cases as jet formation from black hole magnetized accretion disks which may lead to the study of gamma-ray bursts. Nonideal flows are present where radiation, magnetic forces, viscosities, and turbulence play an important role. Our concern in this paper is to reexamine existing numerical simulation tools as to the accuracy and efficiency of computations and introduce a new approach known as the flow field-dependent variation (FDV) method. The main feature of the FDV method consists of accommodating discontinuities of shock waves and high gradients of flow variables such as occur in turbulence and unstable motions. In this paper, the physics involved in the solution of relativistic hydrodynamics and solution strategies of the FDV theory are elaborated. The general relativistic astrophysical flow and shock solver (GRAFSS) is introduced, and some simple example problems for computational relativistic astrophysics (CRA) are demonstrated.

  11. Perspective: Ring-polymer instanton theory

    NASA Astrophysics Data System (ADS)

    Richardson, Jeremy O.

    2018-05-01

    Since the earliest explorations of quantum mechanics, it has been a topic of great interest that quantum tunneling allows particles to penetrate classically insurmountable barriers. Instanton theory provides a simple description of these processes in terms of dominant tunneling pathways. Using a ring-polymer discretization, an efficient computational method is obtained for applying this theory to compute reaction rates and tunneling splittings in molecular systems. Unlike other quantum-dynamics approaches, the method scales well with the number of degrees of freedom, and for many polyatomic systems, the method may provide the most accurate predictions which can be practically computed. Instanton theory thus has the capability to produce useful data for many fields of low-temperature chemistry including spectroscopy, atmospheric and astrochemistry, as well as surface science. There is however still room for improvement in the efficiency of the numerical algorithms, and new theories are under development for describing tunneling in nonadiabatic transitions.

  12. Secure Genomic Computation through Site-Wise Encryption

    PubMed Central

    Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu

    2015-01-01

    Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients’ genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds. PMID:26306278
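    The searchable property described above can be illustrated with a toy sketch in which each genomic site is encrypted deterministically with a keyed hash, so equality search still works on ciphertexts. The key, record format, and function names are hypothetical; the paper's actual site-wise scheme is more involved.

```python
import hmac
import hashlib

KEY = b"shared-secret"  # hypothetical key held by the data owner

def encrypt_site(position, allele, key=KEY):
    """Deterministically encrypt one genomic site so that equal
    sites map to equal ciphertexts (enabling equality search)."""
    msg = f"{position}:{allele}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def encrypt_genome(sites, key=KEY):
    """Encrypt a whole genome, site by site."""
    return {encrypt_site(pos, allele, key) for pos, allele in sites}

def search_markers(encrypted_genome, markers, key=KEY):
    """Check which disease markers occur in the encrypted genome."""
    return [m for m in markers
            if encrypt_site(*m, key=key) in encrypted_genome]

patient = encrypt_genome([(1001, "A"), (2002, "G"), (3003, "T")])
hits = search_markers(patient, [(2002, "G"), (9999, "C")])
```

    The cloud only ever sees hash values, yet marker lookups reduce to set membership tests.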

  13. Targeted post-mortem computed tomography cardiac angiography: proof of concept.

    PubMed

    Saunders, Sarah L; Morgan, Bruno; Raj, Vimal; Robinson, Claire E; Rutty, Guy N

    2011-07-01

    With the increasing use and availability of multi-detector computed tomography and magnetic resonance imaging in autopsy practice, there has been an international push towards the development of the so-called near virtual autopsy. Currently, however, a significant obstacle to near virtual autopsies one day replacing the conventional invasive autopsy is the failure of post-mortem imaging to yield detailed information concerning the coronary arteries. To date, a cost-effective, practical solution allowing high-throughput imaging has not been presented within the forensic literature. We present a proof-of-concept paper describing a simple, quick, cost-effective, manual, targeted in situ post-mortem cardiac angiography method using a minimally invasive approach, to be used with multi-detector computed tomography for high-throughput cadaveric imaging in permanent or temporary mortuaries.

  14. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
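    The sample average approximation (SAA) idea — solve the problem on a finite sample of scenarios, then estimate a statistical bound on an independent sample — can be sketched on a toy one-dimensional release-time decision. The cost model, decision grid, and sample sizes below are illustrative assumptions, not the paper's formulation.

```python
import random

def cost(release, delay):
    """Toy scenario cost: holding cost for a late release plus a
    lateness penalty if the random delay overruns the slack."""
    return 0.5 * release + 2.0 * max(0.0, delay - release)

def saa_solve(sample):
    """Minimise the sample-average cost over a grid of decisions."""
    grid = [i * 0.1 for i in range(101)]
    return min(grid, key=lambda t: sum(cost(t, d) for d in sample) / len(sample))

rng = random.Random(42)
train = [rng.uniform(0, 10) for _ in range(500)]   # sampled scenarios
t_star = saa_solve(train)

# Statistical upper bound: evaluate the candidate on fresh scenarios.
fresh = [rng.uniform(0, 10) for _ in range(500)]
upper = sum(cost(t_star, d) for d in fresh) / len(fresh)
```

    For this cost the true optimum is the 75th percentile of the delay distribution (7.5 here), which the sampled problem approaches as the sample size grows.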

  15. First principles calculations of thermal conductivity with out of equilibrium molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    The prediction of the thermal properties of solids and liquids is central to numerous problems in condensed matter physics and materials science, including the study of thermal management of opto-electronic and energy conversion devices. We present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at non-equilibrium conditions. Our formulation is based on a generalization of the approach-to-equilibrium technique, using sinusoidal temperature gradients, and it only requires calculations of first principles trajectories and atomic forces. We discuss results and computational requirements for a representative, simple oxide, MgO, and compare with experiments and data obtained with classical potentials. This work was supported by MICCoM as part of the Computational Materials Science Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division under Grant DOE/BES 5J-30.

  16. Secure Genomic Computation through Site-Wise Encryption.

    PubMed

    Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu

    2015-01-01

    Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients' genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds.

  17. A Simple Approach to the Technical Aspects of Radiosurgery Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, S.C.; Bassano, D.A.; King, G.A.

    2015-01-15

    An approach to radiosurgery treatment that can be readily adopted in most radiotherapy centers with linear accelerators is presented. In our institution, a Leksell-type of neurosurgical frame, a computed tomography scanner, locally fabricated cones, and 6 MV X-ray beams are used to perform radiosurgery treatments. Collimated arcs with dose distributions that conform to the shape of the lesion in the transverse and the sagittal planes are used. It is argued that the uncertainties in the localization of the isocenter within a lesion and the specifications of the size of the target volume do not justify high precision mechanical devices for most radiosurgery treatments.

  18. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
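    The group-wise idea can be sketched on a toy Ising-style cost: exhaustively optimise small blocks of variables while holding the rest fixed, then repeat with larger groups. The cost function and the schedule of group sizes below are illustrative assumptions, not the authors' algorithm.

```python
from itertools import product

def energy(spins, couplings):
    """Ising-style cost: sum of J_ij * s_i * s_j over coupled pairs."""
    return sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def optimise_block(spins, couplings, block):
    """Exhaustively optimise one group of variables, others fixed."""
    best = spins[:]
    for assignment in product([-1, 1], repeat=len(block)):
        cand = spins[:]
        for idx, s in zip(block, assignment):
            cand[idx] = s
        if energy(cand, couplings) < energy(best, couplings):
            best = cand
    return best

def hierarchical_search(n, couplings, levels=(1, 2, 4)):
    """Optimise ever larger groups of variables, hierarchically."""
    spins = [1] * n
    for size in levels:
        for start in range(0, n, size):
            block = list(range(start, min(start + size, n)))
            spins = optimise_block(spins, couplings, block)
    return spins, energy(spins, couplings)
```

    Larger groups can escape local optima that single-variable moves cannot, at exponentially growing per-block cost — hence the hierarchy.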

  19. Encryption and decryption algorithm using algebraic matrix approach

    NASA Astrophysics Data System (ADS)

    Thiagarajan, K.; Balasubramanian, P.; Nagaraj, J.; Padmashree, J.

    2018-04-01

    Cryptographic algorithms provide security of data against attacks during encryption and decryption. However, they are computationally intensive processes which consume large amounts of CPU time and space during encryption and decryption. The goal of this paper is to study the encryption and decryption algorithm and to find the space complexity of the encrypted and decrypted data produced by the algorithm. In this paper, we encrypt and decrypt the message using a key derived from a cyclic square matrix, an approach applicable to any number of words, including long words with many characters. We also discuss the time complexity of the algorithm. The proposed algorithm is simple, yet the process is difficult to break.
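    As a self-contained illustration of matrix-based encryption and decryption, here is a classic Hill-cipher-style sketch with a fixed invertible 2×2 key matrix modulo 26. The paper's cyclic-square-matrix construction differs; the key matrix, alphabet, and padding rule below are assumptions for illustration only.

```python
# Toy Hill-cipher-style encryption with a 2x2 key matrix mod 26.
KEY = [[3, 3], [2, 5]]
KEY_INV = [[15, 17], [20, 9]]   # inverse of KEY modulo 26

def _apply(matrix, text):
    """Multiply successive letter pairs (A=0..Z=25) by the matrix."""
    nums = [ord(c) - ord('A') for c in text]
    if len(nums) % 2:
        nums.append(ord('X') - ord('A'))     # pad odd-length input
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append((matrix[0][0] * x + matrix[0][1] * y) % 26)
        out.append((matrix[1][0] * x + matrix[1][1] * y) % 26)
    return ''.join(chr(n + ord('A')) for n in out)

def encrypt(plaintext):
    return _apply(KEY, plaintext)

def decrypt(ciphertext):
    return _apply(KEY_INV, ciphertext)
```

    Decryption works because KEY_INV · KEY ≡ I (mod 26); space usage is linear in the message length plus the constant-size key.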

  20. Research on Optimization of Encoding Algorithm of PDF417 Barcodes

    NASA Astrophysics Data System (ADS)

    Sun, Ming; Fu, Longsheng; Han, Shuqing

    The purpose of this research is to develop software to optimize the data compression of a PDF417 barcode using VC++ 6.0. Taking into account the different compression modes and the particularities of Chinese, relevant approaches that optimize the data-compression encoding algorithm, such as spillage handling and Chinese character encoding, are proposed, and a simple approach to computing complex polynomials is introduced. After the whole data compression is finished, the number of codewords is reduced and the encoding algorithm is thus optimized. The developed PDF417 barcode encoding system will be applied in the logistics management of fruits, and will therefore also promote the rapid development of two-dimensional barcodes.

  1. Offdiagonal complexity: A computationally quick complexity measure for graphs and networks

    NASA Astrophysics Data System (ADS)

    Claussen, Jens Christian

    2007-02-01

    A vast variety of biological, social, and economical networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated from the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
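    A computationally cheap sketch of the measure: for each edge, record the difference of the endpoint degrees (the offdiagonal band of the cross-distribution it falls into), then take the entropy of that band distribution. Normalization details of the published definition are simplified here.

```python
import math
from collections import Counter

def offdiagonal_complexity(edges):
    """Sketch of offdiagonal complexity (OdC): entropy of the
    diagonal-band sums of the node-node degree cross-distribution,
    i.e. of |deg(i) - deg(j)| taken over all edges (i, j)."""
    deg = Counter()
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    bands = Counter(abs(deg[i] - deg[j]) for i, j in edges)
    total = sum(bands.values())
    return -sum((n / total) * math.log(n / total) for n in bands.values())
```

    Consistent with the abstract, a regular graph (all degrees equal) puts every edge in band zero and yields OdC = 0, while heterogeneous degree structure spreads edges over many bands and raises the entropy.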

  2. Physical-depth architectural requirements for generating universal photonic cluster states

    NASA Astrophysics Data System (ADS)

    Morley-Short, Sam; Bartolucci, Sara; Gimeno-Segovia, Mercedes; Shadbolt, Pete; Cable, Hugo; Rudolph, Terry

    2018-01-01

    Most leading proposals for linear-optical quantum computing (LOQC) use cluster states, which act as a universal resource for measurement-based (one-way) quantum computation. In ballistic approaches to LOQC, cluster states are generated passively from small entangled resource states using so-called fusion operations. Results from percolation theory have previously been used to argue that universal cluster states can be generated in the ballistic approach using schemes which exceed the critical threshold for percolation, but these results consider cluster states with unbounded size. Here we consider how successful percolation can be maintained using a physical architecture with fixed physical depth, assuming that the cluster state is continuously generated and measured, and therefore that only a finite portion of it is visible at any one point in time. We show that universal LOQC can be implemented using a constant-size device with modest physical depth, and that percolation can be exploited using simple pathfinding strategies without the need for high-complexity algorithms.
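    The percolation-plus-simple-pathfinding idea can be illustrated with a toy site-percolation check via breadth-first search. The square lattice, survival probability, and trial counts below are illustrative assumptions, not the photonic architecture itself.

```python
import random
from collections import deque

def percolates(width, depth, p, rng):
    """Site-percolation sketch: each node survives (e.g. a fusion
    succeeds) with probability p; BFS — a simple pathfinding
    strategy — checks for a path spanning the depth direction."""
    alive = [[rng.random() < p for _ in range(depth)] for _ in range(width)]
    frontier = deque((x, 0) for x in range(width) if alive[x][0])
    seen = set(frontier)
    while frontier:
        x, y = frontier.popleft()
        if y == depth - 1:
            return True
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < width and 0 <= ny < depth \
                    and alive[nx][ny] and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return False

rng = random.Random(7)
rate_high = sum(percolates(20, 20, 0.95, rng) for _ in range(50)) / 50
rate_low = sum(percolates(20, 20, 0.30, rng) for _ in range(50)) / 50
```

    Above the percolation threshold spanning paths are found almost always; far below it, almost never — the sharp transition the ballistic schemes must stay on the right side of.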

  3. Advanced Multigrid Solvers for Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Brandt, Achi

    1999-01-01

    The main objective of this project has been to support the development of multigrid techniques in computational fluid dynamics that can achieve "textbook multigrid efficiency" (TME), which is several orders of magnitude faster than current industrial CFD solvers. Toward that goal we have assembled a detailed table which lists every foreseen kind of computational difficulty for achieving it, together with the possible ways for resolving the difficulty, their current state of development, and references. We have developed several codes to test and demonstrate, in the framework of simple model problems, several approaches for overcoming the most important of the listed difficulties that had not been resolved before. In particular, TME has been demonstrated for incompressible flows on one hand, and for near-sonic flows on the other hand. General approaches were advanced for the relaxation of stagnation points and boundary conditions under various situations. Also, new algebraic multigrid techniques were formed for treating unstructured grid formulations. More details on all these are given below.

  4. Controllability and observability of Boolean networks arising from biology

    NASA Astrophysics Data System (ADS)

    Li, Rui; Yang, Meng; Chu, Tianguang

    2015-02-01

    Boolean networks are currently receiving considerable attention as a computational scheme for system level analysis and modeling of biological systems. Studying control-related problems in Boolean networks may reveal new insights into the intrinsic control in complex biological systems and enable us to develop strategies for manipulating biological systems using exogenous inputs. This paper considers controllability and observability of Boolean biological networks. We propose a new approach, which draws from the rich theory of symbolic computation, to solve the problems. Consequently, simple necessary and sufficient conditions for reachability, controllability, and observability are obtained, and algorithmic tests for controllability and observability which are based on the Gröbner basis method are presented. As practical applications, we apply the proposed approach to several different biological systems, namely, the mammalian cell-cycle network, the T-cell activation network, the large granular lymphocyte survival signaling network, and the Drosophila segment polarity network, gaining novel insights into the control and/or monitoring of the specific biological systems.

  5. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received much attention, but the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
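    The iterated greedy template — destroy a few jobs, then greedily reinsert each at its best position — can be sketched for the plain permutation flow-shop makespan, without the time-lag constraints that IGTLP/IGTLNP handle. Parameters and the acceptance rule are illustrative assumptions.

```python
import random

def makespan(seq, p):
    """Completion time of the last job on the last machine;
    p[j][m] is the processing time of job j on machine m."""
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def insert_best(seq, job, p):
    """NEH-style construction: try every insertion position."""
    best = None
    for pos in range(len(seq) + 1):
        cand = seq[:pos] + [job] + seq[pos:]
        if best is None or makespan(cand, p) < makespan(best, p):
            best = cand
    return best

def iterated_greedy(p, d=2, iters=200, seed=0):
    """Destruction-construction loop accepting non-worsening moves."""
    rng = random.Random(seed)
    best = list(range(len(p)))
    for _ in range(iters):
        seq = best[:]
        removed = [seq.pop(rng.randrange(len(seq))) for _ in range(d)]
        for job in removed:
            seq = insert_best(seq, job, p)
        if makespan(seq, p) <= makespan(best, p):
            best = seq
    return best, makespan(best, p)
```

    On the tiny two-machine instance in the test below, Johnson's rule gives the optimal makespan of 15, which the loop matches or approaches from the naive ordering's 16.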

  6. Game playing.

    PubMed

    Rosin, Christopher D

    2014-03-01

    Game playing has been a core domain of artificial intelligence research since the beginnings of the field. Game playing provides clearly defined arenas within which computational approaches can be readily compared to human expertise through head-to-head competition and other benchmarks. Game playing research has identified several simple core algorithms that provide successful foundations, with development focused on the challenges of defeating human experts in specific games. Key developments include minimax search in chess, machine learning from self-play in backgammon, and Monte Carlo tree search in Go. These approaches have generalized successfully to additional games. While computers have surpassed human expertise in a wide variety of games, open challenges remain and research focuses on identifying and developing new successful algorithmic foundations. WIREs Cogn Sci 2014, 5:193-205. doi: 10.1002/wcs.1278

  7. A simple method for EEG guided transcranial electrical stimulation without models.

    PubMed

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG: (1) selects locations to be used for stimulation; (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Model-free approaches evaluated include (1) voltage-to-voltage; (2) voltage-to-current; (3) Laplacian; and two ad hoc techniques: (4) dipole sink-to-sink; and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Our approach is verified directly only for a theoretically localized source, but may potentially be applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
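    Of the listed model-free recipes, voltage-to-current is the simplest to sketch: stimulation currents are made proportional to the recorded EEG voltages, shifted so the net injected current is zero. The scaling convention below (fixing the summed absolute current to a total dose) is an assumption for illustration, not the paper's exact normalization.

```python
def voltage_to_current(eeg_voltages, total_current=2.0):
    """Model-free 'voltage-to-current' dose sketch: each electrode
    receives a current proportional to its EEG voltage, shifted so
    the net injected current is zero (as physics requires) and
    scaled so the summed absolute current equals total_current (mA)."""
    n = len(eeg_voltages)
    mean = sum(eeg_voltages) / n
    raw = [v - mean for v in eeg_voltages]
    scale = total_current / sum(abs(r) for r in raw)
    return [r * scale for r in raw]
```

    The mapping is linear in the EEG and needs no head model — the source of the simplicity claimed above.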

  8. A molecule-centered method for accelerating the calculation of hydrodynamic interactions in Brownian dynamics simulations containing many flexible biomolecules

    PubMed Central

    Elcock, Adrian H.

    2013-01-01

    Inclusion of hydrodynamic interactions (HIs) is essential in simulations of biological macromolecules that treat the solvent implicitly if the macromolecules are to exhibit correct translational and rotational diffusion. The present work describes the development and testing of a simple approach aimed at allowing more rapid computation of HIs in coarse-grained Brownian dynamics simulations of systems that contain large numbers of flexible macromolecules. The method combines a complete treatment of intramolecular HIs with an approximate treatment of the intermolecular HIs which assumes that the molecules are effectively spherical; all of the HIs are calculated at the Rotne-Prager-Yamakawa level of theory. When combined with Fixman’s Chebyshev polynomial method for calculating correlated random displacements, the proposed method provides an approach that is simple to program but sufficiently fast that it makes it computationally viable to include HIs in large-scale simulations. Test calculations performed on very coarse-grained models of the pyruvate dehydrogenase (PDH) E2 complex and on oligomers of ParM (ranging in size from 1 to 20 monomers) indicate that the method reproduces the translational diffusion behavior seen in more complete HI simulations surprisingly well; the method performs less well at capturing rotational diffusion but its discrepancies diminish with increasing size of the simulated assembly. Simulations of residue-level models of two tetrameric protein models demonstrate that the method also works well when more structurally detailed models are used in the simulations. Finally, test simulations of systems containing up to 1024 coarse-grained PDH molecules indicate that the proposed method rapidly becomes more efficient than the conventional BD approach in which correlated random displacements are obtained via a Cholesky decomposition of the complete diffusion tensor. PMID:23914146
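    For reference, the conventional baseline the abstract compares against — correlated random displacements via a Cholesky factorization of the full diffusion tensor — can be sketched in a few lines of pure Python. Real BD codes build large Rotne-Prager-Yamakawa tensors; this O(N³) factorization is exactly the cost that the Chebyshev and spherical-approximation methods avoid.

```python
import math
import random

def cholesky(D):
    """Lower-triangular L with L · Lᵀ = D (D symmetric positive-definite)."""
    n = len(D)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(D[i][i] - s)
            else:
                L[i][j] = (D[i][j] - s) / L[j][j]
    return L

def correlated_displacements(D, dt, rng):
    """Conventional BD noise step: dx = sqrt(2*dt) * L · xi with
    xi ~ N(0, I), so that <dx dxᵀ> = 2 D dt."""
    L = cholesky(D)
    xi = [rng.gauss(0.0, 1.0) for _ in range(len(D))]
    return [math.sqrt(2.0 * dt) * sum(L[i][k] * xi[k] for k in range(i + 1))
            for i in range(len(D))]
```

    The toy diffusion matrix in the test stands in for an RPY tensor; only its symmetry and positive-definiteness matter for the factorization.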

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.

  10. "Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; vanGelder, Allen

    1999-01-01

    During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model that approximates the samples in the regions covered by each node of the tree, together with an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.

  11. Mapping Quantitative Traits in Unselected Families: Algorithms and Examples

    PubMed Central

    Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David

    2009-01-01

    Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016

  12. Computation of deuterium isotope perturbation of 13C NMR chemical shifts of alkanes: a local mode zero-point level approach.

    PubMed

    Yang, Kin S; Hudson, Bruce

    2010-11-25

    Replacement of H by D perturbs the (13)C NMR chemical shifts of an alkane molecule. This effect is largest for the carbon to which the D is attached, diminishing rapidly with intervening bonds. The effect is sensitive to stereochemistry and is large enough to be measured reliably. A simple model based on the ground (zero point) vibrational level and treating only the C-H(D) degrees of freedom (local mode approach) is presented. The change in CH bond length with H/D substitution as well as the reduction in the range of the zero-point level probability distribution for the stretch and both bend degrees of freedom are computed. The (13)C NMR chemical shifts are computed with variation in these three degrees of freedom, and the results are averaged with respect to the H and D distribution functions. The resulting differences in the zero-point averaged chemical shifts are compared with experimental values of the H/D shifts for a series of cycloalkanes, norbornane, adamantane, and protoadamantane. Agreement is generally very good. The remaining differences are discussed. The proton spectrum of cyclohexane- is revisited and updated with improved agreement with experiment.
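    The zero-point averaging step can be sketched numerically: average a bond-length-dependent shift curve over a ground-state probability density, where deuteration narrows the distribution (larger reduced mass) and slightly shortens the mean bond length. The shift curve, bond lengths, and widths below are invented for illustration and are not the paper's computed values.

```python
import math

def zero_point_average(shift_fn, r0, sigma, npts=2001, span=5.0):
    """Average a property over a Gaussian ground-state probability
    density centred at r0 with width sigma (harmonic approximation),
    by direct numerical quadrature on [r0 - span*sigma, r0 + span*sigma]."""
    lo = r0 - span * sigma
    dr = 2 * span * sigma / (npts - 1)
    num = den = 0.0
    for k in range(npts):
        r = lo + k * dr
        w = math.exp(-((r - r0) ** 2) / (2 * sigma ** 2))
        num += w * shift_fn(r)
        den += w
    return num / den

def shift(r):
    """Hypothetical 13C shift (ppm) vs C-H bond length (angstrom)."""
    return 25.0 - 30.0 * (r - 1.09) + 400.0 * (r - 1.09) ** 2

# D: larger reduced mass -> narrower distribution, shorter mean bond.
d_ch = zero_point_average(shift, 1.090, 0.080)
d_cd = zero_point_average(shift, 1.085, 0.068)
isotope_shift = d_cd - d_ch   # perturbation on deuteration
```

    With these illustrative parameters the quadratic term of the curve contributes 400·σ² to the average, so the narrower D distribution produces a small negative (upfield) isotope shift, mirroring the sign sensitivity discussed above.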

  13. Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures.

    PubMed

    Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean; Cheung, Kenneth C

    2017-03-01

    We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures.

  14. Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures

    PubMed Central

    Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean

    2017-01-01

    We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low-density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high-performance aerodynamic characteristics and is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including its relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotic systems that require very lightweight, tunable, and actively deformable structures. PMID:28289574

  15. Boundary condition computational procedures for inviscid, supersonic steady flow field calculations

    NASA Technical Reports Server (NTRS)

    Abbett, M. J.

    1971-01-01

    Results are given of a comparative study of numerical procedures for computing solid wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.

  16. Evaluation of a computational model to predict elbow range of motion

    PubMed Central

    Nishiwaki, Masao; Johnson, James A.; King, Graham J. W.; Athwal, George S.

    2014-01-01

    Computer models capable of predicting elbow flexion and extension range of motion (ROM) limits would be useful for assisting surgeons in improving the outcomes of surgical treatment of patients with elbow contractures. A simple and robust computer-based model was developed that predicts elbow joint ROM using bone geometries calculated from computed tomography image data. The model assumes a hinge-like flexion-extension axis, and that elbow passive ROM limits can be based on terminal bony impingement. The model was validated against experimental results with a cadaveric specimen, and was able to predict the flexion and extension limits of the intact joint to within 0° and 3°, respectively. The model was also able to predict the flexion and extension limits to within 1° and 2°, respectively, when simulated osteophytes were inserted into the joint. Future studies based on this approach will be used for the prediction of elbow flexion-extension ROM in patients with primary osteoarthritis to help identify motion-limiting hypertrophic osteophytes, and will eventually permit real-time computer-assisted navigated excisions. PMID:24841799

  17. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv

    2018-02-01

    New concepts and techniques are replacing traditional methods of water quality parameter measurement. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy logic for decision making. Introducing multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand and convoluted for effective decision making. Therefore, the proposed framework also aims to simplify the complexity of the obtained sensor data matrices and to support decision making by water engineers through soft computing. The goal of this research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using CPS techniques.

  18. Supervised linear dimensionality reduction with robust margins for object recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition, since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on local margins to obtain good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit to build robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit is crucial for obtaining robust performance in the presence of outliers.
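
    The Median hit / Median miss concepts can be illustrated directly. The sketch below uses toy data invented for this example, not the paper's datasets or its embedding objective; it computes a robust local margin per sample and shows why medians resist a mislabeled outlier where nearest-neighbor margins would not:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy classes in 2-D, with one mislabeled outlier in class 0
X0 = rng.normal([0.0, 0.0], 0.5, size=(20, 2))
X1 = rng.normal([3.0, 3.0], 0.5, size=(20, 2))
X0[0] = [3.2, 3.1]                       # label outlier sitting in class 1

def median_margin(x, same, other):
    """Robust local margin for sample x:
    Median hit  = median distance to same-class samples,
    Median miss = median distance to other-class samples.
    Medians, unlike single nearest neighbors, resist label outliers."""
    hit = np.median(np.linalg.norm(same - x, axis=1))
    miss = np.median(np.linalg.norm(other - x, axis=1))
    return miss - hit                    # large positive = well separated

margins = [median_margin(x, np.delete(X0, i, axis=0), X1)
           for i, x in enumerate(X0)]
```

    In an actual embedding objective these per-sample margins would be summed and maximized over the projection directions; here the negative margin of the outlier simply flags it.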

  19. Evolutionary Study of Interethnic Cooperation

    NASA Astrophysics Data System (ADS)

    Kvasnicka, Vladimir; Pospichal, Jiri

    The purpose of this communication is to present an evolutionary study of cooperation between two ethnic groups. The model used is inspired by the seminal paper of J. D. Fearon and D. D. Laitin (Explaining Interethnic Cooperation, American Political Science Review, 90 (1996), pp. 715-735), where the iterated prisoner's dilemma was used to model intra- and interethnic interactions. We reformulate their approach as an evolutionary prisoner's dilemma method, where a population of strategies is evolved by applying a simple reproduction process with a Darwinian metaphor of natural selection (the probability of selection for reproduction is proportional to fitness). Our computer simulations show that applying a principle of collective guilt does not lead to the emergence of interethnic cooperation. When an administrator is introduced, the emergence of interethnic cooperation may be observed. Furthermore, if the ethnic groups are of very different sizes, then the principle of collective guilt may be very devastating for the smaller group, so that intraethnic cooperation is destroyed. The second strategy of cooperation is called personal responsibility, where agents that defected within interethnic interactions are punished inside their own ethnic groups. This means, unlike the principle of collective guilt, that there exists only one type of punishment; loosely speaking, agents are punished "personally." All the substantial computational results were checked and interpreted analytically within the theory of evolutionarily stable strategies. Moreover, this theoretical approach offers simple scenarios explaining why some particular strategies are stable or not.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Bugbee, Bruce; Gotseff, Peter

    Capturing technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time-resolution (e.g., 1-minute), long-duration (e.g., 1-year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down-select representative time segments (e.g., days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g., PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
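
    The selection-and-reassembly idea can be sketched as follows. The synthetic profiles, the stratification statistic (daily energy), and the bin and sample counts are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-minute PV-production profiles for one year (365 x 1440);
# in a real study these would be measured or simulated time series.
days = rng.gamma(shape=2.0, scale=1.0, size=(365, 1440))
daily_energy = days.sum(axis=1)

# 1) Stratify days by a summary statistic, here daily-energy quartiles.
bins = np.quantile(daily_energy, [0.25, 0.5, 0.75])
strata = np.digitize(daily_energy, bins)        # stratum label 0..3

# 2) Sample a few representative days from each stratum.
rep_idx = np.concatenate([
    rng.choice(np.flatnonzero(strata == s), size=3, replace=False)
    for s in range(4)
])

# 3) Reassemble an annual estimate: weight each stratum's sample mean
#    by the number of days that stratum represents.
annual_est = sum(
    np.flatnonzero(strata == s).size
    * daily_energy[rep_idx[strata[rep_idx] == s]].mean()
    for s in range(4)
)
rel_err = abs(annual_est - daily_energy.sum()) / daily_energy.sum()
```

    Only the 12 sampled days would need full QSTS simulation; the stratum weights then scale their results back up to an annual figure.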

  1. A model predictive speed tracking control approach for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Chen, Huiyan; Xiong, Guangming

    2017-03-01

    This paper presents a novel speed tracking control approach based on a model predictive control (MPC) framework for autonomous ground vehicles. A switching algorithm without calibration is proposed to determine drive or brake control. Combined with a simple inverse longitudinal vehicle model and adaptive regulation of the MPC, this algorithm can make use of the engine brake torque under various driving conditions and automatically avoid high-frequency oscillations. A simplified quadratic program (QP) solving algorithm is used to reduce the computational time, and the approach has been implemented on a 16-bit microcontroller. The performance of the proposed approach is evaluated via simulations and vehicle tests, which were carried out in a range of speed-profile tracking tasks. With a well-designed system structure, high-precision speed control is achieved. The system is robust to model uncertainty and external disturbances, and yields a faster response with less overshoot than a PI controller.
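
    The receding-horizon principle behind such a controller can be sketched with a generic unconstrained linear MPC on a hypothetical first-order speed model. The paper's switching logic, inverse vehicle model, and simplified QP solver are not reproduced; the model parameters and weights below are invented for the example:

```python
import numpy as np

# Hypothetical first-order longitudinal model: v[k+1] = a*v[k] + b*u[k]
a, b = 0.98, 0.05        # assumed discrete-time parameters
N, r = 10, 0.01          # prediction horizon and input weight

def mpc_step(v0, vref):
    # Prediction over the horizon: v[k] = a^k*v0 + sum_{j<k} a^(k-1-j)*b*u[j]
    A = np.array([[a ** (k - 1 - j) * b if j < k else 0.0
                   for j in range(N)] for k in range(1, N + 1)])
    free = np.array([a ** k * v0 for k in range(1, N + 1)])
    # Unconstrained quadratic tracking cost -> solve the normal equations
    H = A.T @ A + r * np.eye(N)
    u = np.linalg.solve(H, A.T @ (vref - free))
    return u[0]          # receding horizon: apply only the first input

v, vref, hist = 0.0, 20.0, []
for _ in range(200):
    v = a * v + b * mpc_step(v, vref)
    hist.append(v)
```

    At each step the whole horizon is re-optimized from the latest state, which is what gives MPC its robustness to model error relative to a fixed-gain PI law.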

  2. Modeling Electronic-Nuclear Interactions for Excitation Energy Transfer Processes in Light-Harvesting Complexes.

    PubMed

    Lee, Mi Kyung; Coker, David F

    2016-08-18

    An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
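
    The standard MD-based route that the paper builds on, computing a classical autocorrelation function of energy-gap fluctuations and cosine-transforming it into a spectral-density profile, can be sketched as follows. An Ornstein-Uhlenbeck trajectory stands in for real MD data, and the quantum (beta-omega) prefactors are omitted; all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck trajectory standing in for an MD time series of
# excitation-energy-gap fluctuations (dt in fs; parameters invented)
dt, n, tau_c, sigma = 1.0, 4096, 50.0, 1.0
f = np.exp(-dt / tau_c)
dU = np.empty(n)
dU[0] = 0.0
for i in range(1, n):
    dU[i] = f * dU[i - 1] + sigma * np.sqrt(1.0 - f * f) * rng.normal()

# Classical autocorrelation function of the gap fluctuations
dU -= dU.mean()
C = np.correlate(dU, dU, mode="full")[n - 1:] / np.arange(n, 0, -1)

# Cosine transform of C(t) -> spectral-density profile
# (the beta*omega quantum prefactor is omitted in this sketch)
tmax = 500
t = np.arange(tmax) * dt
w = np.linspace(0.0, 0.5, 200)                  # rad/fs
J = np.array([(C[:tmax] * np.cos(wi * t)).sum() * dt for wi in w])
```

    For an exponentially decaying correlation the profile is Lorentzian, peaked at zero frequency with width set by the correlation time; the paper's point is that this purely classical route misses the intrachromophore vibrational structure, which it adds back from harmonic analysis.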

  3. Unified Generic Geometric-Decompositions for Consensus or Flocking Systems of Cooperative Agents and Fast Recalculations of Decomposed Subsystems Under Topology-Adjustments.

    PubMed

    Li, Wei

    2016-06-01

    This paper considers a unified geometric projection approach for: 1) decomposing a general system of cooperative agents coupled via Laplacian matrices or stochastic matrices and 2) deriving a centroid-subsystem and many shape-subsystems, where each shape-subsystem has the distinct properties (e.g., preservation of formation and stability of the original system, sufficiently simple structures and explicit formation evolution of agents, and decoupling from the centroid-subsystem) which will facilitate subsequent analyses. Particularly, this paper provides an additional merit of the approach: considering adjustments of coupling topologies of agents which frequently occur in system design (e.g., to add or remove an edge, to move an edge to a new place, and to change the weight of an edge), the corresponding new shape-subsystems can be derived by a few simple computations merely from the old shape-subsystems and without referring to the original system, which will provide further convenience for analysis and flexibility of choice. Finally, such fast recalculations of new subsystems under topology adjustments are provided with examples.

  4. SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry

    NASA Astrophysics Data System (ADS)

    Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.

    2007-03-01

    A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry-equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
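
    The core idea, generating random candidates that already satisfy a chosen point group, can be illustrated for Cs symmetry (a single mirror plane). This is a schematic of the general approach, not the SASS code; the atom counts and box size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_cs_structure(n_pairs, n_on_plane, box=3.0):
    """Random candidate geometry constrained to Cs symmetry
    (mirror plane at z = 0): off-plane atoms come in mirrored
    pairs, on-plane atoms have z = 0 exactly."""
    pairs = rng.uniform(-box, box, size=(n_pairs, 3))
    pairs[:, 2] = np.abs(pairs[:, 2]) + 0.1    # keep clear of the plane
    mirrored = pairs * np.array([1.0, 1.0, -1.0])
    on_plane = rng.uniform(-box, box, size=(n_on_plane, 3))
    on_plane[:, 2] = 0.0
    return np.vstack([pairs, mirrored, on_plane])

geom = random_cs_structure(n_pairs=2, n_on_plane=3)   # 7 atoms total
# As a set of points, geom is invariant under the reflection z -> -z
reflected = geom * np.array([1.0, 1.0, -1.0])
```

    Enumerating how many atoms sit on versus off each symmetry element is exactly the framework-group bookkeeping the abstract credits to Pople.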

  5. Simulation-Guided 3D Nanomanufacturing via Focused Electron Beam Induced Deposition

    DOE PAGES

    Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.; ...

    2016-06-10

    Focused electron beam induced deposition (FEBID) is one of the few techniques that enables direct-write synthesis of free-standing 3D nanostructures. While the fabrication of simple architectures such as vertical or curving nanowires has been achieved by simple trial and error, processing complex 3D structures is not tractable with this approach. This is due, in part, to the dynamic interplay between electron-solid interactions and the transient spatial distribution of absorbed precursor molecules on the solid surface. Here, we demonstrate the ability to controllably deposit 3D lattice structures at the micro/nanoscale, which have received recent interest owing to superior mechanical and optical properties. Moreover, a hybrid Monte Carlo-continuum simulation is briefly overviewed, and subsequently FEBID experiments and simulations are directly compared. Finally, a 3D computer-aided design (CAD) program is introduced, which generates the beam parameters necessary for FEBID by both simulation and experiment. Using this approach, we demonstrate the fabrication of various 3D lattice structures using Pt-, Au-, and W-based precursors.

  6. An immersed-boundary method for flow–structure interaction in biological systems with application to phonation

    PubMed Central

    Luo, Haoxiang; Mittal, Rajat; Zheng, Xudong; Bielamowicz, Steven A.; Walsh, Raymond J.; Hahn, James K.

    2008-01-01

    A new numerical approach for modeling a class of flow–structure interaction problems typically encountered in biological systems is presented. In this approach, a previously developed, sharp-interface, immersed-boundary method for incompressible flows is used to model the fluid flow and a new, sharp-interface Cartesian grid, immersed boundary method is devised to solve the equations of linear viscoelasticity that governs the solid. The two solvers are coupled to model flow–structure interaction. This coupled solver has the advantage of simple grid generation and efficient computation on simple, single-block structured grids. The accuracy of the solid-mechanics solver is examined by applying it to a canonical problem. The solution methodology is then applied to the problem of laryngeal aerodynamics and vocal fold vibration during human phonation. This includes a three-dimensional eigen analysis for a multi-layered vocal fold prototype as well as two-dimensional, flow-induced vocal fold vibration in a modeled larynx. Several salient features of the aerodynamics as well as vocal-fold dynamics are presented. PMID:19936017

  7. Nonthermal model for ultrafast laser-induced plasma generation around a plasmonic nanorod

    NASA Astrophysics Data System (ADS)

    Labouret, Timothée; Palpant, Bruno

    2016-12-01

    The excitation of plasmonic gold nanoparticles by ultrashort laser pulses can trigger interesting electron-based effects in biological media such as production of reactive oxygen species or cell membrane optoporation. In order to better understand the optical and thermal processes at play, we modeled the interaction of a subpicosecond, near-infrared laser pulse with a gold nanorod in water. A nonthermal model is used and compared to a simple two-temperature thermal approach. For both models, the computation of the transient optical response reveals strong plasmon damping. Electron emission from the metal into the water is also calculated in a specific way for each model. The dynamics of the resulting local plasma in water is assessed by a rate equation model. While both approaches provide similar results for the transient optical properties, the simple thermal one is unable to properly describe electron emission and plasma generation. The latter is shown to mostly originate from electron-electron thermionic emission and photoemission from the metal. Taking into account the transient optical response is mandatory to properly calculate both electron emission and local plasma dynamics in water.
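
    The simple two-temperature thermal approach used as a baseline here can be sketched as a pair of coupled rate equations for the electron and lattice temperatures driven by a Gaussian pulse. The parameter values below are order-of-magnitude guesses for a gold/femtosecond-laser system, not those of the paper:

```python
import numpy as np

# Assumed order-of-magnitude parameters (SI units)
gamma = 70.0          # electron heat-capacity coefficient, J m^-3 K^-2
C_l = 2.5e6           # lattice heat capacity, J m^-3 K^-1
G = 2.0e16            # electron-phonon coupling, W m^-3 K^-1
tau, t0, S0 = 100e-15, 300e-15, 5e20   # pulse width, delay (s), peak source (W m^-3)

dt, steps = 0.5e-15, 6000              # 3 ps of forward-Euler integration
Te, Tl = 300.0, 300.0
Te_hist = []
for n in range(steps):
    t = n * dt
    S = S0 * np.exp(-((t - t0) / tau) ** 2)   # Gaussian laser source term
    Ce = gamma * Te                           # electron capacity grows with Te
    dTe = (-G * (Te - Tl) + S) / Ce           # electrons: heated, then cooled
    dTl = G * (Te - Tl) / C_l                 # lattice: slowly warmed by electrons
    Te += dt * dTe
    Tl += dt * dTl
    Te_hist.append(Te)
```

    The electron temperature spikes far above the lattice during the pulse and relaxes over picoseconds; the paper's point is that a single electron temperature like this cannot capture the nonthermal carriers responsible for electron emission.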

  8. Dynamics and control of quadcopter using linear model predictive control approach

    NASA Astrophysics Data System (ADS)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom that include disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple ones such as circular to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.

  9. Synthetic biology: insights into biological computation.

    PubMed

    Manzoni, Romilde; Urrios, Arturo; Velazquez-Garcia, Silvia; de Nadal, Eulàlia; Posas, Francesc

    2016-04-18

    Organisms have evolved a broad array of complex signaling mechanisms that allow them to survive in a wide range of environmental conditions. They are able to sense external inputs and produce an output response by computing the information. Synthetic biology attempts to rationally engineer biological systems in order to perform desired functions. Our increasing understanding of biological systems guides this rational design, while the huge background in electronics for building circuits defines the methodology. In this context, biocomputation is the branch of synthetic biology aimed at implementing artificial computational devices using engineered biological motifs as building blocks. Biocomputational devices are defined as biological systems that are able to integrate inputs and return outputs following pre-determined rules. Over the last decade the number of available synthetic engineered devices has increased exponentially; simple and complex circuits have been built in bacteria, yeast and mammalian cells. These devices can manage and store information, take decisions based on past and present inputs, and even convert a transient signal into a sustained response. The field is experiencing fast growth, and every day it becomes easier to implement more complex biological functions. This is mainly due to advances in in vitro DNA synthesis, new genome-editing tools, novel molecular cloning techniques, and continuously growing part libraries, as well as other technological advances. As a result, digital computation can now be engineered and implemented in biological systems. Simple logic gates can be implemented and connected to perform novel desired functions or to better understand and redesign biological processes. Synthetic biological digital circuits could lead to new therapeutic approaches, as well as new and efficient ways to produce complex molecules such as antibiotics, bioplastics or biofuels. Biological computation not only provides possible biomedical and biotechnological applications, but also affords a greater understanding of biological systems.

  10. A General Interface Method for Aeroelastic Analysis of Aircraft

    NASA Technical Reports Server (NTRS)

    Tzong, T.; Chen, H. H.; Chang, K. C.; Wu, T.; Cebeci, T.

    1996-01-01

    The aeroelastic analysis of an aircraft requires an accurate and efficient procedure to couple aerodynamics and structures. The procedure needs an interface method to bridge the gap between the aerodynamic and structural models in order to transform loads and displacements. Such an interface method is described in this report. The method transforms loads computed by any aerodynamic code to a structural finite element (FE) model and converts the displacements from the FE model to the aerodynamic model. The approach is based on FE technology, in which virtual work is employed to transform the aerodynamic pressures into FE nodal forces. The displacements at the FE nodes are then converted back to aerodynamic grid points on the aircraft surface through the reciprocal theorem of structural engineering. The method accommodates both high- and low-fidelity versions of both models and does not require an intermediate model. In addition, the method converts loads and displacements directly between each aerodynamic grid point and its corresponding structural finite element and is therefore very efficient for large aircraft models. This report also describes the application of this aero-structure interface method to a simple wing and an MD-90 wing. The results show that the aeroelastic effect is very important. For the simple wing, both linear and nonlinear approaches are used. In the linear approach, the deformation of the structural model is considered small, and the loads from the deformed aerodynamic model are applied to the original geometry of the structure. In the nonlinear approach, the geometry of the structure and its stiffness matrix are updated in every iteration, and the increments of loads from the previous iteration are applied to the new structural geometry in order to compute the displacement increments. Additional studies applying the aero-structure interaction procedure to more complicated geometry will be conducted in the second phase of the present contract.

  11. An autonomous molecular computer for logical control of gene expression

    PubMed Central

    Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud

    2013-01-01

    Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems [1–7]. Recently, simple molecular-scale autonomous programmable computers were demonstrated [8–15], allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for ‘logical’ control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton [12–17]; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes [18–22] associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug. PMID:15116117

  12. Harnessing atomistic simulations to predict the rate at which dislocations overcome obstacles

    NASA Astrophysics Data System (ADS)

    Saroukhani, S.; Nguyen, L. D.; Leung, K. W. K.; Singh, C. V.; Warner, D. H.

    2016-05-01

    Predicting the rate at which dislocations overcome obstacles is key to understanding the microscopic features that govern the plastic flow of modern alloys. In this spirit, the current manuscript examines the rate at which an edge dislocation overcomes an obstacle in aluminum. Predictions were made using different popular variants of Harmonic Transition State Theory (HTST) and compared to those of direct Molecular Dynamics (MD) simulations. The HTST predictions were found to be grossly inaccurate due to the large entropy barrier associated with the dislocation-obstacle interaction. Considering the importance of finite temperature effects, the utility of the Finite Temperature String (FTS) method was then explored. While this approach was found capable of identifying a prominent reaction tube, it was not capable of computing the free energy profile along the tube. Lastly, the utility of the Transition Interface Sampling (TIS) approach was explored, which does not need a free energy profile and is known to be less reliant on the choice of reaction coordinate. The TIS approach was found capable of accurately predicting the rate, relative to direct MD simulations. This finding was utilized to examine the temperature and load dependence of the dislocation-obstacle interaction in a simple periodic cell configuration. An attractive rate prediction approach combining TST and simple continuum models is identified, and the strain rate sensitivity of individual dislocation obstacle interactions is predicted.
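
    For reference, the HTST variants compared in studies like this one all reduce to an Arrhenius-type expression with a temperature-independent prefactor; the sketch below, with invented numbers, shows the form whose entropic shortcomings the abstract describes:

```python
import math

kB = 8.617e-5            # Boltzmann constant, eV/K

def htst_rate(nu0, dE, T):
    """Harmonic TST rate: k = nu0 * exp(-dE / (kB*T)).
    The attempt frequency nu0 is temperature-independent here, which is
    precisely the approximation that fails when the entropy barrier of
    the dislocation-obstacle interaction is large."""
    return nu0 * math.exp(-dE / (kB * T))

# Invented numbers: 1 THz attempt frequency, 0.3 eV energy barrier
r300 = htst_rate(1e12, 0.3, 300.0)
r600 = htst_rate(1e12, 0.3, 600.0)
```

    A large entropy barrier effectively makes the prefactor strongly temperature-dependent, which is why the TIS rates and these HTST estimates can disagree by orders of magnitude.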

  13. On the Application of Different Event-Based Sampling Strategies to the Control of a Simple Industrial Process

    PubMed Central

    Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián

    2009-01-01

    This paper is an experimental study of the use of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach, it is the triggering of a specific event, not the passage of time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator among the time-based, hybrid, and event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each of the hybrid and pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are analyzed independently. The experimental study provides new evidence of the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when error-free control of the process is not a hard requirement. PMID:22399975
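
    A minimal sketch of one such event-based strategy, send-on-delta sampling driving a PI controller on a simulated tank, is shown below. The tank model, gains, and threshold are hypothetical, and the paper's other strategies are not reproduced; the point is that the controller acts only when the level has changed significantly:

```python
import math

# Hypothetical tank: dh/dt = (q_in - k_out*sqrt(h)) / A
A, k_out, dt = 1.0, 0.5, 0.1
Kp, Ki = 2.0, 0.5          # illustrative PI gains
delta = 0.02               # send-on-delta threshold

h, h_sent, integ, q = 0.5, 0.5, 0.0, 0.0
setpoint, sends, last = 1.0, 0, 0
for step in range(2000):
    # Event-based sampling: the sensor transmits, and the controller
    # recomputes, only when the level has moved by more than delta
    if abs(h - h_sent) > delta:
        elapsed = (step - last) * dt
        h_sent, last = h, step
        sends += 1
        e = setpoint - h_sent
        integ += e * elapsed               # integrate over time since last event
        q = max(0.0, Kp * e + Ki * integ)  # inflow command, non-negative
    h += dt * (q - k_out * math.sqrt(max(h, 0.0))) / A
```

    The transmission count `sends` ends up far below the 2000 simulation steps a periodic scheme would use, at the cost of a small limit cycle around the set point.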

  14. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption.

    PubMed

    Chandrasekaran, Jeyamala; Thiruvengadam, S J

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.
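
    A chaos-based S-box of the kind described can be sketched in a few lines. This is a generic illustration, not the authors' exact construction: the Henon map parameters, burn-in length, and ranking step are assumptions made for the example:

```python
import numpy as np

def henon_sbox(x0=0.1, y0=0.1, a=1.4, b=0.3, n=256, burn=1000):
    """Key-dependent 8-bit S-box from the 2-D discrete Henon map
    x' = 1 - a*x^2 + y,  y' = b*x.
    The 'key' is the initial condition (x0, y0); extreme sensitivity
    to it yields a different permutation for nearby keys."""
    x, y = x0, y0
    for _ in range(burn):                    # discard the transient
        x, y = 1.0 - a * x * x + y, b * x
    vals = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        vals[i] = x
    return np.argsort(vals)                  # rank samples -> permutation of 0..255

sbox = henon_sbox()
sbox2 = henon_sbox(x0=0.1 + 1e-9)            # tiny key change, different S-box
```

    Ranking the chaotic samples guarantees a bijective substitution table, which is the property needed to drop it into a cipher like Blowfish or DES in place of a static S-box.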

  15. Metabolic PathFinding: inferring relevant pathways in biochemical networks.

    PubMed

    Croes, Didier; Couche, Fabian; Wodak, Shoshana J; van Helden, Jacques

    2005-07-01

    Our knowledge of metabolism can be represented as a network comprising several thousands of nodes (compounds and reactions). Several groups applied graph theory to analyse the topological properties of this network and to infer metabolic pathways by path finding. This is, however, not straightforward, with a major problem caused by traversing irrelevant shortcuts through highly connected nodes, which correspond to pool metabolites and co-factors (e.g. H2O, NADP and H+). In this study, we present a web server implementing two simple approaches, which circumvent this problem, thereby improving the relevance of the inferred pathways. In the simplest approach, the shortest path is computed, while filtering out the selection of highly connected compounds. In the second approach, the shortest path is computed on the weighted metabolic graph where each compound is assigned a weight equal to its connectivity in the network. This approach significantly increases the accuracy of the inferred pathways, enabling the correct inference of relatively long pathways (e.g. with as many as eight intermediate reactions). Available options include the calculation of the k-shortest paths between two specified seed nodes (either compounds or reactions). Multiple requests can be submitted in a queue. Results are returned by email, in textual as well as graphical formats (available at http://www.scmbb.ulb.ac.be/pathfinding/).
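
    The second approach reduces to a degree-weighted shortest path. A toy sketch (the graph below is invented; in the real server the nodes are compounds and reactions):

```python
# Toy version of the weighted approach: Dijkstra where entering a compound
# costs its connectivity (degree), so paths avoid hub metabolites like H2O.
# The graph below is invented for illustration.
import heapq

def degree_weighted_path(graph, start, goal):
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for nxt in graph[node]:
            nd = d + len(graph[nxt])          # weight = degree of next node
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

toy = {
    "A": ["H2O", "B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C", "H2O"],
    "E": ["H2O"],
    "F": ["H2O"],
    "H2O": ["A", "B", "C", "D", "E", "F"],   # highly connected pool metabolite
}
path = degree_weighted_path(toy, "A", "D")
```

    An unweighted search would cut through the hub in two hops (A-H2O-D); the connectivity weighting makes the longer but biologically relevant route A-B-C-D cheaper.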

  16. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption

    PubMed Central

    Chandrasekaran, Jeyamala; Thiruvengadam, S. J.

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security. PMID:26550603

  17. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    PubMed

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
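
    The two valuation rules can be contrasted on a toy gamble. The square-root utility function and the risk-aversion coefficient below are illustrative choices, not quantities from the article:

```python
# Toy contrast between the two valuation rules. The square-root utility and
# the risk-aversion coefficient are illustrative choices, not the article's.
import math

def expected_utility(outcomes, probs, u=math.sqrt):
    """Sum of p_i * u(x_i) over states of nature."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def mean_variance(outcomes, probs, risk_aversion=0.01):
    """Expected reward penalised by reward variance."""
    mean = sum(p * x for x, p in zip(outcomes, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(outcomes, probs))
    return mean - risk_aversion * var

safe = ([100, 100], [0.5, 0.5])   # pays 100 in every state
risky = ([0, 200], [0.5, 0.5])    # same expected reward, higher variance

# Both rules rank `safe` above `risky`, but for different reasons:
# curvature of u under expected utility, the variance penalty under
# mean-variance.
```

    The point of the contrast: expected utility must visit every state separately, while mean-variance needs only two summary statistics, which is what makes the latter easier to learn incrementally.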

  18. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  19. Probability and possibility-based representations of uncertainty in fault tree analysis.

    PubMed

    Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje

    2013-01-01

    Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.

  20. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
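
    The model-selection step can be sketched on synthetic data as follows; the coefficients, noise level, and data ranges are invented, and the report's actual acceptance criteria (MSPE thresholds, significance tests) are not reproduced:

```python
# Synthetic illustration of the model-selection step: regress SSC on
# turbidity alone, then on turbidity plus streamflow, and compare fits.
# Coefficients, noise level, and data ranges are invented; the report's
# acceptance criteria (MSPE etc.) are not reproduced here.
import random

random.seed(1)

def ols(X, y):
    """Ordinary least squares via the normal equations (tiny dense case)."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                      # Gauss-Jordan elimination
        piv = xtx[col][col]
        for j in range(col, k):
            xtx[col][j] /= piv
        xty[col] /= piv
        for row in range(k):
            if row != col:
                f = xtx[row][col]
                for j in range(col, k):
                    xtx[row][j] -= f * xtx[col][j]
                xty[row] -= f * xty[col]
    return xty

# Synthetic record: SSC driven by turbidity and, more weakly, streamflow.
turb = [random.uniform(10, 200) for _ in range(50)]
flow = [random.uniform(1, 30) for _ in range(50)]
ssc = [2.0 * t + 3.0 * q + random.gauss(0, 5) for t, q in zip(turb, flow)]

simple_rows = [[1.0, t] for t in turb]
multi_rows = [[1.0, t, q] for t, q in zip(turb, flow)]
simple = ols(simple_rows, ssc)
multiple = ols(multi_rows, ssc)

def sse(coef, rows):
    return sum((sum(c * v for c, v in zip(coef, row)) - s) ** 2
               for row, s in zip(rows, ssc))

sse_simple = sse(simple, simple_rows)
sse_multi = sse(multiple, multi_rows)
```

    Adding the streamflow regressor can only reduce the in-sample error; the guidelines accept the multiple model only when that improvement is statistically significant, which is the check the MSPE criterion formalizes.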

  1. Characterization of biofilms with a fiber optic spectrometer

    NASA Astrophysics Data System (ADS)

    Krautwald, S.; Tonyali, A.; Fellerhoff, B.; Franke, Hilmar; Tamachkiarov, A.; Griebe, T.; Flemming, H. C.

    2000-12-01

    Optical sensing is one promising approach to monitor biofilms at an early stage. Generally, natural biofilms are quite inhomogeneous; therefore we start the investigation with suspensions of dead bacteria in water as a simple model for a biofilm. An experimental arrangement based on a white light fiber optic spectrometer is used for measuring the density of a thin film with a local resolution on the order of several micrometers. The method is applied to model biofilms. In a computer-controlled procedure, reflectance spectra may be recorded at different positions in the x-y plane. Scanning through thin suspension regions of bacteria between glass plates allows an estimation of the refractive index of the bacteria. Taking advantage of the light-collecting property of the glass substrate, a simple measurement of the fluorescence with local resolution is demonstrated as well.

  2. SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop

    PubMed Central

    Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo

    2014-01-01

    Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig’s scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054

  3. Prospects for improving the representation of coastal and shelf seas in global ocean models

    NASA Astrophysics Data System (ADS)

    Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard

    2017-02-01

    Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ˜ 8 % of regions < 500 m deep, but this increases to ˜ 70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic Nucleus for European Modelling of the Ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models.
The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ˜ 1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
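
    The flavour of such a cost argument can be captured in two lines: cost grows roughly as the cube of the refinement factor (two horizontal dimensions plus a proportionally shorter time step), while machine capability grows exponentially. The 1.6x/year growth rate below is an illustrative assumption, so the resulting years only roughly track the estimates quoted above:

```python
# Two-line cost model in the spirit of the scale analysis above: refining
# the grid by a factor f costs roughly f**3, while machine capability grows
# exponentially. The 1.6x/year growth rate is an illustrative assumption.
import math

def years_until_affordable(refinement, ref_year=2011, annual_growth=1.6):
    """Year when a model `refinement` times finer than the reference costs
    the same share of the machine as the reference did in `ref_year`."""
    cost_ratio = refinement ** 3
    return ref_year + math.log(cost_ratio) / math.log(annual_growth)

# A 1/72 degree model is 18 times finer than the 1/4 degree reference.
year_1_72 = years_until_affordable(18)
```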

  4. Low-cost computing and network communication for a point-of-care device to perform a 3-part leukocyte differential

    NASA Astrophysics Data System (ADS)

    Powless, Amy J.; Feekin, Lauren E.; Hutcheson, Joshua A.; Alapat, Daisy V.; Muldoon, Timothy J.

    2016-03-01

    Point-of-care approaches for 3-part leukocyte differentials (granulocyte, monocyte, and lymphocyte), traditionally performed using a hematology analyzer within a panel of tests called a complete blood count (CBC), are essential not only to reduce cost but to provide faster results in low resource areas. Recent developments in lab-on-a-chip devices have shown promise in reducing the size and reagents used, relating to a decrease in overall cost. Furthermore, smartphone diagnostic approaches have shown much promise in the area of point-of-care diagnostics, but the relatively high per-unit cost may limit their utility in some settings. We present here a method to reduce computing cost of a simple epi-fluorescence imaging system using a Raspberry Pi (single-board computer, <$40) to perform a 3-part leukocyte differential comparable to results from a hematology analyzer. This system uses a USB color camera in conjunction with a leukocyte-selective vital dye (acridine orange) in order to determine a leukocyte count and differential from a low volume (<20 microliters) of whole blood obtained via fingerstick. Additionally, the system utilizes a "cloud-based" approach to send image data from the Raspberry Pi to a main server and return results back to the user, exporting the bulk of the computational requirements. Six images were acquired per minute with up to 200 cells per field of view. Preliminary results showed that the differential count varied significantly in monocytes with a 1 minute time difference, indicating the importance of time-gating to produce an accurate and consistent differential.
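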

  5. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
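
    The decoding step reduces to a naive-Bayes combination of per-filter likelihoods once the filters are (approximately) independent. A toy sketch with invented Gaussian likelihood models and an assumed edge prior:

```python
# Naive-Bayes toy of the decoding step: with filter responses treated as
# conditionally independent, per-filter likelihoods combine into an edge
# probability. The Gaussian likelihood models and the 0.1 prior are invented.
import math

def edge_probability(responses, on_models, off_models, prior=0.1):
    log_on, log_off = math.log(prior), math.log(1 - prior)
    for r, p_on, p_off in zip(responses, on_models, off_models):
        log_on += math.log(p_on(r))
        log_off += math.log(p_off(r))
    m = max(log_on, log_off)                  # normalise the two hypotheses
    on, off = math.exp(log_on - m), math.exp(log_off - m)
    return on / (on + off)

def gauss(mu, sigma=0.3):
    """Gaussian likelihood model for one filter's response."""
    norm = sigma * math.sqrt(2 * math.pi)
    return lambda r: math.exp(-((r - mu) ** 2) / (2 * sigma ** 2)) / norm

# Two hypothetical filters: responses near 1.0 on an edge, near 0.0 off it.
on_models = [gauss(1.0), gauss(1.0)]
off_models = [gauss(0.0), gauss(0.0)]

p_strong = edge_probability([0.9, 1.1], on_models, off_models)
p_weak = edge_probability([0.1, 0.0], on_models, off_models)
```

    Even with only two filters, the combined posterior is far more sharply tuned than either filter alone, which is the effect the paper exploits at population scale.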

  6. Yield of computed tomography of the cervical spine in cases of simple assault.

    PubMed

    Uriell, Matthew L; Allen, Jason W; Lovasik, Brendan P; Benayoun, Marc D; Spandorfer, Robert M; Holder, Chad A

    2017-01-01

    Computed tomography (CT) of the cervical spine (C-spine) is routinely ordered for low-impact, non-penetrating or "simple" assault at our institution and others. Common clinical decision tools for C-spine imaging in the setting of trauma include the National Emergency X-Radiography Utilization Study (NEXUS) and the Canadian Cervical Spine Rule for Radiography (CCR). While NEXUS and CCR have served to decrease the amount of unnecessary imaging of the C-spine, overutilization of CT is still of concern. A retrospective, cross-sectional study was performed of the electronic medical record (EMR) database at an urban, Level I Trauma Center over a 6-month period for patients receiving a C-spine CT. The primary outcome of interest was prevalence of cervical spine fracture. Secondary outcomes of interest included appropriateness of C-spine imaging after retrospective application of NEXUS and CCR. The hypothesis was that fracture rates within this patient population would be extremely low. No C-spine fractures were identified in the 460 patients who met inclusion criteria. Approximately 29% of patients did not warrant imaging by CCR, and 25% by NEXUS. Of note, approximately 44% of patients were indeterminate for whether imaging was warranted by CCR, with the most common reason being lack of assessment for active neck rotation. Cervical spine CT is overutilized in the setting of simple assault, despite established clinical decision rules. With no fractures identified regardless of other factors, the likelihood that a CT of the cervical spine will identify clinically significant findings in the setting of "simple" assault is extremely low, approaching zero. At minimum, adherence to CCR and NEXUS within this patient population would serve to reduce both imaging costs and population radiation dose exposure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Implementation of a fully-balanced periodic tridiagonal solver on a parallel distributed memory architecture

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Erlebacher, G.

    1994-01-01

    While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem - a periodic tridiagonal solver - are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamic simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation, and the more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy structures the algorithm so that the data flows across processor boundaries in a chained manner, which usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm is shown to directly transform a sequential Gaussian elimination type algorithm into the parallel chained, load-balanced algorithm.
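
    The property being exploited can be seen in a serial sketch: one Thomas (tridiagonal Gaussian elimination) sweep per right-hand side, so the RHS set can be partitioned trivially across processors. The periodic correction and the two distribution strategies themselves are not shown, and the sample system is invented:

```python
# Serial sketch: one Thomas (tridiagonal Gaussian elimination) sweep per
# right-hand side; each RHS is independent, so the set can be partitioned
# across processors. Periodic correction and distribution are not shown.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n                             # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One matrix, several independent right-hand sides.
a, b, c = [0, 1, 1, 1], [4, 4, 4, 4], [1, 1, 1, 0]
rhs_set = [[5, 6, 6, 5], [1, 0, 0, 1]]
solutions = [thomas(a, b, c, d) for d in rhs_set]
```

    The list comprehension on the last line is the parallelizable loop: in the transpose strategy each processor would receive a slice of `rhs_set`, in the distributed strategy the sweep itself would be chained across processors.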

  8. The effect of denaturant on protein stability: a Monte Carlo lattice simulation

    NASA Astrophysics Data System (ADS)

    Choi, Ho Sup; Huh, June; Jo, Won Ho

    2003-03-01

    Denaturants are reagents that decrease protein stability by interacting with both nonpolar and polar surfaces of the protein when added to the aqueous solvent. However, the physical nature of these interactions has not been clearly understood, and it is not easy to elucidate the nature of denaturants theoretically or experimentally. Even in computer simulation, denaturant atoms cannot be treated explicitly because of the enormous computational cost. We have used a lattice model of protein and denaturant. By varying the concentration of denaturant and the interaction energy between protein and denaturant, we have measured the change in stability of the protein. This simple model reflects the experimental observation that the free energy of unfolding is a linear function of denaturant concentration in the transition range. We have also performed a simulation under isotropic perturbation. In this case, denaturant molecules are not included, and a biasing potential is introduced to increase the radius of gyration of the protein, which incorporates the effect of denaturant implicitly. The calculated free energy landscape and conformational ensembles sampled under this condition are very close to those of the simulation using denaturant molecules interacting with the protein. We have applied this simple approach to simulate the effect of denaturant on real proteins.

  9. Exact correlators on the Wilson loop in N=4 SYM: localization, defect CFT, and integrability

    NASA Astrophysics Data System (ADS)

    Giombi, Simone; Komatsu, Shota

    2018-05-01

    We compute a set of correlation functions of operator insertions on the 1 /8 BPS Wilson loop in N=4 SYM by employing supersymmetric localization, OPE and the Gram-Schmidt orthogonalization. These correlators exhibit a simple determinant structure, are position-independent and form a topological subsector, but depend nontrivially on the 't Hooft coupling and the rank of the gauge group. When applied to the 1 /2 BPS circular (or straight) Wilson loop, our results provide an infinite family of exact defect CFT data, including the structure constants of protected defect primaries of arbitrary length inserted on the loop. At strong coupling, we show precise agreement with a direct calculation using perturbation theory around the AdS2 string worldsheet. We also explain the connection of our results to the "generalized Bremsstrahlung functions" previously computed from integrability techniques, reproducing the known results in the planar limit as well as obtaining their finite N generalization. Furthermore, we show that the correlators at large N can be recast as simple integrals of products of polynomials (known as Q-functions) that appear in the Quantum Spectral Curve approach. This suggests an interesting interplay between localization, defect CFT and integrability.

  10. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas have Hounsfield unit (HU) values approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. The edge enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts, however. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge enhancement reconstruction algorithm. The next step is computation of the antero-posterior density gradient caused by gravity and correction for it. Motion artefacts are corrected in a third step by use of normalized averaging, thresholding, and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without; our algorithm improved separation of the two groups considerably. The algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
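
    For reference, the density-mask baseline that the artefact corrections build on is a one-line threshold. The -950 HU cutoff and the toy voxel values below are illustrative, not the study's parameters:

```python
# One-line density-mask baseline that the artefact corrections build on.
# The -950 HU cutoff and the toy voxel values are illustrative.

def density_mask_fraction(hu_values, threshold=-950):
    """Fraction of voxels whose attenuation falls below the threshold."""
    return sum(1 for v in hu_values if v < threshold) / len(hu_values)

# Toy slice: mostly normal lung (about -850 HU) plus a destroyed pocket.
slice_hu = [-850] * 90 + [-980] * 10
fraction = density_mask_fraction(slice_hu)
```

    The study's point is that reconstruction, gravity, and motion artefacts shift voxel values across such a fixed threshold, so the inputs must be corrected before this simple count becomes diagnostic.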

  11. Video mining using combinations of unsupervised and supervised learning techniques

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou

    2003-12-01

    We discuss the meaning and significance of the video mining problem and present our work on some aspects of it. A simple definition of video mining is unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in stage 1. We discuss the target applications and find that purely unsupervised approaches are too computationally complex to implement on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns that are useful to the end user of the application. We target consumer video browsing applications such as commercial message detection, sports highlights extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual-event discovery enables accurate supervised detection of desired events. Our techniques are computationally simple and robust to common variations in production styles.

  12. Haldane, Waddington and recombinant inbred lines: extension of their work to any number of genes.

    PubMed

    Samal, Areejit; Martin, Olivier C

    2017-11-01

    In the early 1930s, J. B. S. Haldane and C. H. Waddington collaborated on the consequences of genetic linkage and inbreeding. One elegant mathematical genetics problem solved by them concerns recombinant inbred lines (RILs) produced via repeated self or brother-sister mating. In this classic contribution, Haldane and Waddington derived an analytical formula for the probabilities of 2-locus and 3-locus RIL genotypes. Specifically, the Haldane-Waddington formula gives the recombination rate R in such lines as a simple function of the per generation recombination rate r. Interestingly, for more than 80 years, an extension of this result to four or more loci remained elusive. In 2015, we generalized the Haldane-Waddington self-mating result to any number of loci. Our solution used self-consistent equations of the multi-locus probabilities 'for an infinite number of generations' and solved these by simple algebraic operations. In practice, our approach provides a quantum leap in the systems that can be handled: the cases of up to six loci can be solved by hand while a computer program implementing our mathematical formalism tackles up to 20 loci on standard desktop computers.
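
    For the 2-locus case, the Haldane-Waddington result itself is compact enough to state directly (R is the RIL recombination fraction, r the per-generation one):

```python
# The 2-locus Haldane-Waddington result stated directly: R is the RIL
# recombination fraction, r the per-generation one.

def ril_recombination(r, sib_mating=False):
    """R = 2r/(1 + 2r) for selfing; R = 4r/(1 + 6r) for sib mating."""
    return 4 * r / (1 + 6 * r) if sib_mating else 2 * r / (1 + 2 * r)

# Map expansion: a per-meiosis r of 0.1 becomes R = 1/6 in selfed RILs.
R_self = ril_recombination(0.1)
R_sib = ril_recombination(0.1, sib_mating=True)
```

    Both expressions grow roughly linearly in r for tightly linked loci (2r and 4r, respectively, reflecting the map expansion of inbred lines) and saturate at 1/2 for unlinked loci; it is the multi-locus generalization of these formulas that remained open for 80 years.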

  13. Sequence comparison alignment-free approach based on suffix tree and L-words frequency.

    PubMed

    Soares, Inês; Goios, Ana; Amorim, António

    2012-01-01

    The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts with the computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words of a preset length L (termed L-words) in each sequence is rapidly calculated. Based on the L-word frequency profile of each sequence, a pairwise standard Euclidean distance is then computed, producing a symmetric genetic distance matrix which can be used to generate a neighbor-joining dendrogram or a multidimensional scaling graph. We present an improvement to word-counting alignment-free approaches for sequence comparison, by determining a single optimal word length and combining suffix tree structures with the word-counting tasks. Our approach is thus a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in the Python language and is freely available on the web.
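    As a rough illustration of the word-counting step only (a naive O(n·|Σ|^L) sketch, not the linear-time suffix-tree implementation described above), L-word profiles and their pairwise Euclidean distance might look like this:

```python
from itertools import product
from math import sqrt

def lword_profile(seq, L, alphabet="ACGT"):
    """Normalised frequency of every possible word of length L in seq."""
    words = ["".join(p) for p in product(alphabet, repeat=L)]
    n = max(len(seq) - L + 1, 1)
    counts = {w: 0 for w in words}
    for i in range(len(seq) - L + 1):
        w = seq[i:i + L]
        if w in counts:          # skip words containing ambiguous symbols
            counts[w] += 1
    return [counts[w] / n for w in words]

def euclidean(p, q):
    """Standard Euclidean distance between two frequency profiles."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

d = euclidean(lword_profile("ACGTACGT", 2), lword_profile("ACGTTTTT", 2))
print(d > 0)   # True: the two sequences have different 2-word profiles
```

Computing this distance for every pair of sequences fills the symmetric distance matrix used for the dendrogram.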

  14. A hybrid approach for nonlinear computational aeroacoustics predictions

    NASA Astrophysics Data System (ADS)

    Sassanis, Vasileios; Sescu, Adrian; Collins, Eric M.; Harris, Robert E.; Luke, Edward A.

    2017-01-01

    In many aeroacoustics applications involving nonlinear waves and obstructions in the far-field, approaches based on the classical acoustic analogy theory or the linearised Euler equations are unable to fully characterise the acoustic field. Therefore, computational aeroacoustics hybrid methods that incorporate nonlinear wave propagation have to be constructed. In this study, a hybrid approach coupling Navier-Stokes equations in the acoustic source region with nonlinear Euler equations in the acoustic propagation region is introduced and tested. The full Navier-Stokes equations are solved in the source region to identify the acoustic sources. The flow variables of interest are then transferred from the source region to the acoustic propagation region, where the full nonlinear Euler equations with source terms are solved. The transition between the two regions is made through a buffer zone where the flow variables are penalised via a source term added to the Euler equations. Tests were conducted on simple acoustic and vorticity disturbances, two-dimensional jets (Mach 0.9 and 2), and a three-dimensional jet (Mach 1.5), impinging on a wall. The method is proven to be effective and accurate in predicting sound pressure levels associated with the propagation of linear and nonlinear waves in the near- and far-field regions.

  15. A systematic way for the cost reduction of density fitting methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kállay, Mihály, E-mail: kallay@mail.bme.hu

    2014-12-28

    We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
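    A minimal sketch of the NAF construction, with a random nearly-low-rank matrix standing in for the actual three-center integrals (the sizes and the truncation threshold are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Stand-in for the three-center integral matrix (pq|P): rows index orbital
# pairs, columns index auxiliary (fitting) functions.
rng = np.random.default_rng(0)
n_orb, n_aux = 10, 30
J = (rng.standard_normal((n_orb * n_orb, 8)) @ rng.standard_normal((8, n_aux))
     + 0.01 * rng.standard_normal((n_orb * n_orb, n_aux)))

# SVD of the three-center matrix; the right singular vectors define the NAFs
U, s, Vt = np.linalg.svd(J, full_matrices=False)
keep = s > 1e-1 * s[0]        # illustrative relative truncation threshold
naf = Vt[keep]                # NAFs = linear combinations of fitting functions
J_naf = J @ naf.T             # integrals transformed to the truncated NAF basis
print(naf.shape[0], "NAFs kept out of", n_aux)
```

Downstream contractions then run over the (smaller) NAF index instead of the full auxiliary index.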

  16. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    NASA Astrophysics Data System (ADS)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is, however, usually severely limited by the complexity of the theoretical problems. As a consequence, it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance, as the dynamics undergo qualitative macroscopic changes during such transitions.

  17. Thermodynamic efficiency limits of classical and bifacial multi-junction tandem solar cells: An analytical approach

    NASA Astrophysics Data System (ADS)

    Alam, Muhammad Ashraful; Khan, M. Ryyan

    2016-10-01

    Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below bandgap, and the uncollected light between panels) inherent in classical single junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial design. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.

  18. A simple approach to polymer mixture miscibility.

    PubMed

    Higgins, Julia S; Lipson, Jane E G; White, Ronald P

    2010-03-13

    Polymeric mixtures are important materials, but the control and understanding of mixing behaviour poses problems. The original Flory-Huggins theoretical approach, using a lattice model to compute the statistical thermodynamics, provides the basic understanding of the thermodynamic processes involved but is deficient in describing most real systems, and has little or no predictive capability. We have developed an approach using a lattice integral equation theory, and in this paper we demonstrate that this not only describes well the literature data on polymer mixtures but allows new insights into the behaviour of polymers and their mixtures. The characteristic parameters obtained by fitting the data have been successfully shown to be transferable from one dataset to another, to be able to correctly predict behaviour outside the experimental range of the original data and to allow meaningful comparisons to be made between different polymer mixtures.

  19. Region of validity of the finite-temperature Thomas-Fermi model with respect to quantum and exchange corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru

    We determine the region of applicability of the finite-temperature Thomas-Fermi model and its thermal part with respect to quantum and exchange corrections. Very high accuracy of the computations has been achieved by using a special approach for the solution of the boundary problem and the numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. We also offer simple approximations of the boundaries of validity for practical applications.

  20. Path integral Liouville dynamics: Applications to infrared spectra of OH, water, ammonia, and methane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jian, E-mail: jianliupku@pku.edu.cn; State Key Joint Laboratory of Environmental Simulation and Pollution Control, College of Environmental Sciences and Engineering, Peking University, Beijing 100871; Zhang, Zhijun

    Path integral Liouville dynamics (PILD) is applied to vibrational dynamics of several simple but representative realistic molecular systems (OH, water, ammonia, and methane). The dipole-derivative autocorrelation function is employed to obtain the infrared spectrum as a function of temperature and isotopic substitution. Comparison to the exact vibrational frequency shows that PILD produces a reasonably accurate peak position with a relatively small full width at half maximum. PILD offers a potentially useful trajectory-based quantum dynamics approach to compute vibrational spectra of molecular systems.

  1. Mathematical modelling of risk reduction in reinsurance

    NASA Astrophysics Data System (ADS)

    Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.

    2017-01-01

    The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected value of return and the risk of yield values falling below a certain level. The uncertainty in the return values is handled through expert evaluations and preliminary calculations, which result in expected return values and the corresponding risk levels. The proposed method allows for the implementation of computationally simple schemes and algorithms for numerical calculation of the structure of the efficient portfolios of reinsurance contracts of a given insurance company.
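    The abstract gives no explicit formulas, but the general idea of maximizing expected return subject to a risk cap can be illustrated with a hedged sketch (all inputs are hypothetical and the random search below is a generic mean-risk scheme, not the authors' algorithm):

```python
import numpy as np

# Hypothetical inputs: expert-estimated expected returns of three reinsurance
# contracts and the covariance of those returns.
mu = np.array([0.08, 0.05, 0.11])
cov = np.array([[0.040, 0.010, 0.000],
                [0.010, 0.020, 0.000],
                [0.000, 0.000, 0.090]])
risk_cap = 0.15                        # maximum acceptable risk (std. dev.)

rng = np.random.default_rng(1)
best = None
for _ in range(20_000):                # crude random search over the simplex
    w = rng.dirichlet(np.ones(3))      # non-negative weights summing to 1
    risk = float(np.sqrt(w @ cov @ w))
    if risk <= risk_cap:
        ret = float(mu @ w)
        if best is None or ret > best[0]:
            best = (ret, risk, w)

print("expected return %.3f at risk %.3f" % best[:2])
```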

  2. Investigation of vertical cavity surface emitting laser dynamics for neuromorphic photonic systems

    NASA Astrophysics Data System (ADS)

    Hurtado, A.; Schires, K.; Henning, I. D.; Adams, M. J.

    2012-03-01

    We report an approach based upon vertical cavity surface emitting lasers (VCSELs) to reproduce optically different behaviors exhibited by biological neurons but on a much faster timescale. The technique proposed is based on the polarization switching and nonlinear dynamics induced in a single VCSEL under polarized optical injection. The particular attributes of VCSELs and the simple experimental configuration used in this work offer prospects of fast, reconfigurable processing elements with excellent fan-out and scaling potentials for use in future computational paradigms and artificial neural networks.

  3. Remodeling a tissue: subtraction adds insight.

    PubMed

    Axelrod, Jeffrey D

    2012-11-27

    Sculpting a body plan requires both patterning of gene expression and translating that pattern into morphogenesis. Developmental biologists have made remarkable strides in understanding gene expression patterning, but despite a long history of fascination with the mechanics of morphogenesis, knowledge of how patterned gene expression drives the emergence of even simple shapes and forms has grown at a slower pace. The successful merging of approaches from cell biology, developmental biology, imaging, engineering, and mathematical and computational sciences is now accelerating progress toward a fuller and better integrated understanding of the forces shaping morphogenesis.

  4. Guide to thoracic imaging.

    PubMed

    Skinner, Sarah

    2015-08-01

    Thoracic imaging is commonly ordered in general practice. Guidelines exist for ordering thoracic imaging, but few are specific to general practice. This article summarises current indications for imaging the thorax with chest X-ray and computed tomography. A simple framework for interpretation of the chest X-ray, suitable for trainees and practitioners providing primary care imaging in rural and remote locations, is presented. Interpretation of thoracic imaging is best done using a systematic approach. Radiological investigation is not warranted in uncomplicated upper respiratory tract infections or asthma, minor trauma, or acute-on-chronic chest pain.

  5. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    PubMed

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  6. The brain MRI classification problem from wavelets perspective

    NASA Astrophysics Data System (ADS)

    Bendib, Mohamed M.; Merouani, Hayet F.; Diaba, Fatma

    2015-02-01

    Haar and Daubechies 4 (DB4) are the most used wavelets for brain MRI (Magnetic Resonance Imaging) classification. The former is simple and fast to compute while the latter is more complex and offers a better resolution. This paper explores the potential of both of them in performing Normal versus Pathological discrimination on the one hand, and Multiclassification on the other hand. The Whole Brain Atlas is used as a validation database, and the Random Forest (RF) algorithm is employed as a learning approach. The achieved results are discussed and statistically compared.
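    A minimal single-level 2-D Haar decomposition (DB4 requires longer filters and is omitted here) whose approximation coefficients could serve as classifier features; this sketch assumes images with even dimensions and is only an illustration of the feature-extraction step, not the paper's pipeline:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform: approximation + 3 detail bands."""
    img = img.astype(float)
    # transform along rows: pairwise averages and differences
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # transform along columns
    LL = (a[0::2, :] + a[1::2, :]) / 2.0   # approximation band
    LH = (a[0::2, :] - a[1::2, :]) / 2.0   # horizontal details
    HL = (d[0::2, :] + d[1::2, :]) / 2.0   # vertical details
    HH = (d[0::2, :] - d[1::2, :]) / 2.0   # diagonal details
    return LL, LH, HL, HH

slice_ = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for an MRI slice
LL, LH, HL, HH = haar2d(slice_)
features = LL.ravel()       # approximation coefficients as classifier features
print(features.shape)       # (16,)
```

The feature vectors would then be fed to a learner such as Random Forest, as in the study.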

  7. Accelerated gradient based diffuse optical tomographic image reconstruction.

    PubMed

    Biswas, Samir Kumar; Rajan, K; Vasu, R M

    2011-01-01

    Fast reconstruction of interior optical parameter distribution using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR) of a tissue and a tissue mimicking phantom from boundary measurement data in diffuse optical tomography (DOT). DOT is a nonlinear and ill-posed inverse problem. Newton-based MOBIIR algorithm, which is generally used, requires repeated evaluation of the Jacobian which consumes bulk of the computation time for reconstruction. In this study, we propose a Broyden approach-based accelerated scheme for Jacobian computation and it is combined with conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from forward solution of the diffusion equation. This approach reduces the computational time many fold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. Algorithms are validated using an experimental study carried out on a pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and they result in much faster implementations because they avoid direct evaluation of Jacobian. The image reconstructions have been carried out with different initial values using Newton, Broyden, and adjoint Broyden approaches. 
    These algorithms work well when the initial guess is close to the true solution. However, when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
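    The low-rank update at the heart of the Broyden approach can be sketched directly. This is the generic "good" Broyden rank-one update satisfying the secant condition, not the authors' full DOT implementation:

```python
import numpy as np

def broyden_update(B, dx, df):
    """Broyden's rank-one update: the new Jacobian estimate B1 satisfies the
    secant condition B1 @ dx = df while changing B as little as possible."""
    dx = dx.reshape(-1, 1)
    df = df.reshape(-1, 1)
    return B + (df - B @ dx) @ dx.T / float(dx.T @ dx)

# Toy check: one update enforces the secant condition exactly.
B = np.eye(3)                       # initial Jacobian estimate
dx = np.array([1.0, 2.0, 0.5])      # step in the parameters
df = np.array([0.3, -1.0, 2.0])     # observed change in the forward model
B1 = broyden_update(B, dx, df)
print(np.allclose(B1 @ dx, df))     # True
```

Replacing a full Jacobian evaluation by such rank-one updates is what avoids the dominant cost in each reconstruction iteration.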

  8. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.

  9. Simple Ion Channels: From Structure to Electrophysiology and Back

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrzej

    2018-01-01

    A reliable way to establish whether our understanding of a channel is satisfactory is to reproduce its measured ionic conductance over a broad range of applied voltages in computer simulations. In molecular dynamics (MD), this can be done by applying an external electric field to the system and counting the number of ions that traverse the channel per unit time. Since this approach is computationally very expensive, we have developed a markedly more efficient alternative in which MD is combined with the electrodiffusion (ED) equation. In this approach, the assumptions of the ED equation can be rigorously tested, and the precision and consistency of the calculated conductance can be determined. We have demonstrated that the full current/voltage dependence and the underlying free energy profile for a simple channel can be reliably calculated from equilibrium or non-equilibrium MD simulations at a single voltage. To carry out MD simulations, a structural model of a channel has to be assumed, which is an important constraint, considering that high-resolution structures are available for only very few simple channels. If the comparison of calculated ionic conductance with electrophysiological data is satisfactory, it greatly increases our confidence that the structure and the function are described sufficiently accurately. We examined the validity of the ED equation for several channels embedded in phospholipid membranes: four naturally occurring channels (trichotoxin, alamethicin, p7 from hepatitis C virus (HCV), and Vpu from the HIV-1 virus) and a synthetic, hexameric channel formed by a 21-residue peptide that contains only leucine and serine. All these channels mediate transport of potassium and chloride ions. It was found that the ED equation is satisfactory for these systems. In some of them, experimental and calculated electrophysiological properties are in good agreement, whereas in others there are strong indications that the structural models are incorrect.

  10. Computing return times or return periods with rare event algorithms

    NASA Astrophysics Data System (ADS)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far, those algorithms have typically computed probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, reducing computational costs by several orders of magnitude. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein-Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
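    A hedged sketch of the block-maximum idea on an Ornstein-Uhlenbeck trajectory; the estimator form r(a) = -T_b / log(1 - p), with p the fraction of blocks whose maximum exceeds the threshold a, is our reading of the approach, and all parameters are illustrative:

```python
import numpy as np

# Ornstein-Uhlenbeck trajectory: dx = -x dt + sqrt(2) dW (stationary variance 1)
rng = np.random.default_rng(0)
dt, n_steps = 0.01, 500_000
x = np.empty(n_steps)
x[0] = 0.0
noise = np.sqrt(2.0 * dt) * rng.standard_normal(n_steps - 1)
for i in range(n_steps - 1):
    x[i + 1] = x[i] * (1.0 - dt) + noise[i]

def return_time(traj, dt, block, a):
    """Block-maximum estimator r(a) = -T_b / log(1 - p), T_b = block duration."""
    maxima = traj[: len(traj) // block * block].reshape(-1, block).max(axis=1)
    p = float(np.mean(maxima > a))
    if p == 0.0:
        return float("inf")       # threshold never reached in any block
    if p == 1.0:
        return block * dt         # return time shorter than one block; unresolved
    return -block * dt / np.log1p(-p)

print(return_time(x, dt, block=10_000, a=2.5))
```

For return times much longer than the block duration this reduces to the naive estimate T_b·N/k, but it stays accurate down to return times of the order of the block size.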

  11. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.

  12. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on analysis of scatter reduction due to the inverse square law, for obtaining the equivalent field. Tables are published by different agencies such as the ICRU (International Commission on Radiation Units and Measurements), which are based on experimental data; but there also exist mathematical formulas, used extensively in computational techniques for dose determination, that yield the equivalent square field of an irregular rectangular field. These processes lead to complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, on the basis of which a simple formula was developed to calculate the equivalent square field. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach by which one can estimate the equivalent square field of a rectangular field; it may also be used for a shielded field or an off-axis point. In addition, one can calculate the equivalent field of a rectangular field from the concept of scatter reduction under the inverse square law, to a good approximation. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
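    For comparison, the widely used area-to-perimeter (Sterling) rule, which is not the scatter-based formula derived in this paper, fits in two lines:

```python
def equivalent_square_side(a, b):
    """Sterling area-to-perimeter rule: s = 4*Area/Perimeter = 2ab/(a+b),
    for an open rectangular field with sides a and b (same length units)."""
    return 2.0 * a * b / (a + b)

print(equivalent_square_side(10.0, 10.0))  # 10.0 (a square maps to itself)
print(equivalent_square_side(5.0, 20.0))   # 8.0
```

Such a rule is only an approximation for very elongated or shielded fields, which is the gap the paper's scatter-based derivation addresses.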

  13. A signal-flow-graph approach to on-line gradient calculation.

    PubMed

    Campolucci, P; Uncini, A; Piazza, F

    2000-08-01

    A large class of nonlinear dynamic adaptive systems such as dynamic recurrent neural networks can be effectively represented by signal flow graphs (SFGs). By this method, complex systems are described as a general connection of many simple components, each of them implementing a simple one-input, one-output transformation, as in an electrical circuit. Even if graph representations are popular in the neural network community, they are often used for qualitative description rather than for rigorous representation and computational purposes. In this article, a method for both on-line and batch-backward gradient computation of a system output or cost function with respect to system parameters is derived by the SFG representation theory and its known properties. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by an SFG, in particular any feedforward, time-delay, or recurrent neural network. In this work, we use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained in a straightforward way by the analysis of two SFGs, the original one and its adjoint (obtained from the first by simple transformations), without the complex chain rule expansions of derivatives usually employed. This method can be used for sensitivity analysis and for learning both off-line and on-line. On-line learning is particularly important since it is required by many real applications, such as digital signal processing, system identification and control, channel equalization, and predistortion.

  14. A root-mean-square approach for predicting fatigue crack growth under random loading

    NASA Technical Reports Server (NTRS)

    Hudson, C. M.

    1981-01-01

    A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
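    The root-mean-square bookkeeping is simple enough to sketch; the stress values and the Paris-law constants below are purely illustrative, not data from the study:

```python
import math

def rms(values):
    """Root-mean-square of a load sequence."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def paris_growth_rate(delta_k, C=1e-11, m=3.0):
    """Constant-amplitude Paris-law rate da/dN = C * (delta K)^m, evaluated at
    the RMS stress-intensity range (C and m are illustrative constants)."""
    return C * delta_k ** m

s_max = [120.0, 80.0, 150.0, 95.0]     # random-load peak stresses, MPa (made up)
s_min = [10.0, 5.0, 20.0, 0.0]         # corresponding valley stresses, MPa
delta_s_rms = rms(s_max) - rms(s_min)  # RMS maximum minus RMS minimum stress
print(delta_s_rms)
```

The RMS stress range then feeds a constant-amplitude crack-growth prediction in place of the full random history, which is the essence of the approach described above.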

  15. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.

  16. Detection of Interference Phase by Digital Computation of Quadrature Signals in Homodyne Laser Interferometry

    PubMed Central

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-01-01

    We have proposed an approach to interference phase extraction in homodyne laser interferometry. The method employs a series of computational steps to reconstruct the signals for quadrature detection from an interference signal from a non-polarising interferometer sampled by a simple photodetector. The complexity trade-off is the use of a laser beam with frequency-modulation capability. The method is analytically derived, and its validity and performance are experimentally verified. It has proven to be a feasible alternative to traditional homodyne detection, since it performs with comparable accuracy, especially where the complexity of the optical setup is a principal issue and the modulation of the laser beam is not a heavy burden (e.g., in multi-axis sensors or laser-diode-based systems). PMID:23202038
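    Once quadrature signals are available, the final phase-extraction step is a four-quadrant arctangent followed by unwrapping. This sketch uses synthetic I/Q signals rather than the paper's computational reconstruction from a single detector, and the wavelength is only an example:

```python
import numpy as np

# Synthetic quadrature signals I = cos(phi), Q = sin(phi) for a moving target
phi_true = np.linspace(0.0, 12.0, 500)     # interference phase in radians
I, Q = np.cos(phi_true), np.sin(phi_true)

phi = np.unwrap(np.arctan2(Q, I))          # four-quadrant arctangent + unwrap
displacement = phi * 633e-9 / (4 * np.pi)  # e.g. HeNe wavelength, double pass

print(np.allclose(phi, phi_true))          # True
```

Unwrapping is what turns the 2π-periodic raw phase into a continuous displacement record.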

  17. Epidemic modeling in complex realities.

    PubMed

    Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro

    2007-04-01

    In our global world, the increasing complexity of social relations and transport infrastructures is a key factor in the spread of epidemics. In recent years, the increasing availability of computing power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have exposed the limits of homogeneous assumptions and simple spatial diffusion approaches, and have stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress that integrates complex systems and network analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.

  18. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low- and intermediate-level computer vision. The guiding ideas are hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimate of the desired parameters. The adaptive multiple-scale technique is applied to the problem of motion field estimation: different parts of the image are analyzed at a resolution chosen to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion than with the homogeneous scheme. In some cases, explicit discontinuities coupled to the continuous variables can be introduced to avoid propagation of visual information between areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation to process the vast amount of visual data in real time; although under different technological constraints, parallel computation can likewise be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% when a large region is assigned to each processor.
    Finally, learning algorithms are shown to be a viable technique for engineering computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis, a well-known optimization method (the memoryless Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method) is applied to simple classification problems and shown to be superior to the error back-propagation algorithm in numerical stability, automatic selection of parameters, and convergence properties.
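
    The perimeter-to-area scaling argument behind the near-100% efficiency claim can be sketched numerically. The cost constants below are invented for illustration; they are not measurements from the thesis:

```python
# Back-of-the-envelope model of the domain-decomposition scaling argument:
# per-processor relaxation work grows with the area n*n of the assigned
# image block, while boundary exchange grows with its linear dimension n.
def efficiency(n, comm_cost_per_pixel=10.0):
    compute = n * n                        # interior work, O(n^2)
    comm = comm_cost_per_pixel * 4 * n     # boundary exchange, O(n)
    return compute / (compute + comm)

e_small, e_large = efficiency(16), efficiency(4096)
```

    As the block assigned to each processor grows, communication becomes negligible and parallel efficiency approaches 100%.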

  19. Controlling the Universe

    ERIC Educational Resources Information Center

    Evanson, Nick

    2004-01-01

    Basic electronic devices have been used to great effect with console computer games. This paper looks at a range of devices from the very simple, such as microswitches and potentiometers, up to the more complex Hall effect probe. There is a great deal of relatively straightforward use of simple devices in computer games systems, and having read…

  20. 5 CFR 532.257 - Regular nonappropriated fund wage schedules in foreign areas.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... These schedules will provide rates of pay for nonsupervisory, leader, and supervisory employees. (b) Schedules will be— (1) Computed on the basis of a simple average of all regular nonappropriated fund wage... each nonsupervisory grade will be derived by computing a simple average of each step 2 rate for each of...

  1. 5 CFR 532.257 - Regular nonappropriated fund wage schedules in foreign areas.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... These schedules will provide rates of pay for nonsupervisory, leader, and supervisory employees. (b) Schedules will be— (1) Computed on the basis of a simple average of all regular nonappropriated fund wage... each nonsupervisory grade will be derived by computing a simple average of each step 2 rate for each of...

  2. Calculation of tip clearance effects in a transonic compressor rotor

    NASA Technical Reports Server (NTRS)

    Chima, R. V.

    1996-01-01

    The flow through the tip clearance region of a transonic compressor rotor (NASA rotor 37) was computed and compared to aerodynamic probe and laser anemometer data. Tip clearance effects were modeled both by gridding the clearance gap and by using a simple periodicity model across the ungridded gap. The simple model was run both with the full gap height and with half the gap height to simulate a vena-contracta effect. Comparisons between computed and measured performance maps and downstream profiles were used to validate the models and to assess the effects of gap height on the simple clearance model. Recommendations were made concerning the use of the simple clearance model. Detailed comparisons were made between the gridded clearance gap solution and the laser anemometer data near the tip at two operating points. The computed results agreed fairly well with the data but overpredicted the extent of the casing separation and underpredicted the wake decay rate. The computations were then used to describe the interaction of the tip vortex, the passage shock, and the casing boundary layer.

  3. Computational evolution: taking liberties.

    PubMed

    Correia, Luís

    2010-09-01

    Evolution has, for a long time, inspired computer scientists to produce computer models mimicking its behavior. Evolutionary algorithms (EAs) are one of the areas where this approach has flourished. EAs have been used to model and study evolution, but they have been especially developed for their aptitude as optimization tools for engineering. The developed models are quite simple in comparison with their natural sources of inspiration. However, since EAs run on computers, we have the freedom, especially in optimization models, to test approaches that are both realistic and outright speculative from the biological point of view. In this article, we discuss common evolutionary algorithm models and then present some alternatives of interest. These include biologically inspired models, such as co-evolution and, in particular, symbiogenetics, as well as outright artificial operators and representations. In each case, the advantages of the modifications to the standard model are identified. The other area of computational evolution, which has allowed us to study basic principles of evolution and ecology dynamics, is the development of artificial life platforms for open-ended evolution of artificial organisms. With these platforms, biologists can test theories by directly manipulating individuals and operators, observing the resulting effects in a realistic way. An overview of the most prominent of such environments is also presented. If, instead of artificial platforms, we use the real world for evolving artificial life, then we are dealing with evolutionary robotics (ER). A brief description of this area is presented, analyzing its relations to biology. Finally, we present the conclusions and identify future research avenues at the frontier of computation and biology. Hopefully, this will help to draw the attention of more biologists and computer scientists to the benefits of such interdisciplinary research.

  4. Improving Conceptual Design for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1998-01-01

    This report summarizes activities performed during the second year of a three-year cooperative agreement between NASA - Langley Research Center and Georgia Tech. Year 1 of the project resulted in the creation of a new Cost and Business Assessment Model (CABAM) for estimating the economic performance of advanced reusable launch vehicles, including non-recurring costs, recurring costs, and revenue. The current (second) year's activities focused on the evaluation of automated, collaborative design frameworks (computational architectures or computational frameworks) for automating the design process in advanced space vehicle design. Consistent with NASA's new thrust area in developing and understanding Intelligent Synthesis Environments (ISE), the goals of this year's research efforts were to develop and apply computer integration techniques and near-term computational frameworks for conducting advanced space vehicle design. NASA - Langley (VAB) has taken a lead role in developing a web-based computing architecture within which the designer can interact with disciplinary analysis tools through a flexible web interface. The advantages of this approach are: 1) flexible access to the designer interface through a simple web browser (e.g. Netscape Navigator), 2) the ability to include existing 'legacy' codes, and 3) the ability to include distributed analysis tools running on remote computers. To date, VAB's internal emphasis has been on developing this test system for the planetary entry mission under the joint Integrated Design System (IDS) program with NASA - Ames and JPL. Georgia Tech's complementary goals this year were to: 1) examine an alternate 'custom' computational architecture for the three-discipline IDS planetary entry problem to assess its advantages and disadvantages relative to the web-based approach, and 2) develop and examine a web-based interface and framework for a typical launch vehicle design problem.

  5. Modular Approaches to Earth Science Scientific Computing: 3D Electromagnetic Induction Modeling as an Example

    NASA Astrophysics Data System (ADS)

    Tandon, K.; Egbert, G.; Siripunvaraporn, W.

    2003-12-01

    We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme and also to reuse the components for a wide variety of problems in earth science computing. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general, for inversion of MT data, one fixes boundary conditions at the edge of the model domain and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding such a feature is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weinstein, Marvin (SLAC)

    It is apparent to anyone who thinks about it that, to a large degree, the basic concepts of Newtonian physics are quite intuitive, but quantum mechanics is not. My purpose in this talk is to introduce you to a new, much more intuitive way to understand how quantum mechanics works. I begin with an incredibly easy way to derive the time evolution of a Gaussian wave-packet for the cases of free and harmonic motion without any need to know the eigenstates of the Hamiltonian. This discussion is completely analytic, and I will later use it to relate the solution for the behavior of the Gaussian packet to the Feynman path-integral and stationary phase approximation. It will be clear that using the information about the evolution of the Gaussian in this way goes far beyond what the stationary phase approximation tells us. Next, I introduce the concept of the bucket brigade approach to dealing with problems that cannot be handled totally analytically. This approach combines the intuition obtained in the initial discussion, as well as the intuition obtained from the path-integral, with simple numerical tools. My goal is to show that, for any specific process, there is a simple Hilbert space interpretation of the stationary phase approximation. I will then argue that, from the point of view of numerical approximations, the trajectory obtained from my generalization of the stationary phase approximation specifies the subspace of the full Hilbert space that is needed to compute the time evolution of the particular state under the full Hamiltonian. The prescription I will give is totally non-perturbative, and we will see, by the grace of Maple animations computed for the case of the anharmonic oscillator Hamiltonian, that this approach allows surprisingly accurate computations to be performed with very little work. I think of this approach to the path-integral as defining what I call a guided numerical approximation scheme.
    After the discussion of the anharmonic oscillator, I will turn to tunneling problems and show that the instanton can also be thought of in the same way. I will do this for the classic problem of a double-well potential in the extreme limit, where the splitting between the two lowest levels is extremely small and the tunneling rate from one well to another is also very small.
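
    The analytic Gaussian evolution the talk starts from has a standard closed form for the free particle. A minimal sketch of that textbook result (the spreading-width formula, not the talk's bucket-brigade scheme, which is not spelled out in the abstract):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def gaussian_width(t, sigma0, m):
    """Width of a free-particle Gaussian wave-packet at time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2).
    Textbook result used here only to illustrate the analytic evolution."""
    return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0**2)) ** 2)

# An electron packet of initial width 1 Angstrom spreads noticeably in 1 fs.
w0 = gaussian_width(0.0, 1e-10, 9.109e-31)
w1 = gaussian_width(1e-15, 1e-10, 9.109e-31)
```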

  7. Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.

    PubMed

    Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J

    2018-06-01

    Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.

  8. Formulation of aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1979-01-01

    An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Supersonic second-order potential theory was examined in detail to meet this objective. Shock layer integral techniques were considered as an alternative means of predicting gross aerodynamic characteristics. Several numerical pilot codes were developed for simple three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the second-order computations indicated good agreement with higher-order solutions and experimental results for a variety of wing-like shapes and values of the hypersonic similarity parameter Mδ approaching one.

  9. A preliminary evaluation of nearshore extreme sea level and wave models for fringing reef environments

    NASA Astrophysics Data System (ADS)

    Hoeke, R. K.; Reyns, J.; O'Grady, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    Oceanic islands are widely perceived as vulnerable to sea level rise and are characterized by steep nearshore topography and fringing reefs. In such settings, nearshore dynamics and (non-tidal) water level variability tend to be dominated by wind-wave processes. These processes are highly sensitive to reef morphology and roughness and to regional wave climate, so sea level extremes tend to be highly localized, and their likelihood can be expected to change in the future beyond simple extrapolation of sea level rise scenarios: e.g., sea level rise may increase the effective mean depth of reef crests and flats, and ocean acidification and/or increased temperatures may lead to changes in reef structure. The problem is sufficiently complex that analytic or numerical approaches are necessary to estimate current hazards and explore potential future changes. In this study, we evaluate the capacity of several analytic/empirical approaches and phase-averaged and phase-resolved numerical models at sites in the insular tropical Pacific. We consider their ability to predict time-averaged wave setup and instantaneous water level exceedance probability (or dynamic wave run-up), as well as computational cost; where possible, we compare the model results with in situ observations from a number of previous studies. Preliminary results indicate that analytic approaches are by far the most computationally efficient but tend to perform poorly where alongshore straight and parallel morphology cannot be assumed. Phase-averaged models tend to perform well with respect to wave setup in such situations, but are unable to predict processes related to individual waves or wave groups, such as infragravity motions or wave run-up. Phase-resolved models tend to perform best, but come at high computational cost, an important consideration when exploring possible future scenarios. A new approach combining an unstructured computational grid with a quasi-phase-averaged scheme (i.e., only phase-resolving motions below a frequency cutoff) shows promise as a good compromise between computational efficiency and resolving processes such as wave run-up and overtopping in more complex bathymetric situations.

  10. A Two-Step Approach for Analysis of Nonignorable Missing Outcomes in Longitudinal Regression: an Application to Upstate KIDS Study.

    PubMed

    Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari

    2017-09-01

    Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages have limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations of the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
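
    A deliberately crude numpy caricature of the two-step idea on simulated data. The paper fits a GLMM for the missingness mechanism; here a per-subject empirical logit of the observed fraction stands in for the predicted random effects, and all simulation parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_visits = 200, 6

# Simulate longitudinal data with a subject random intercept u that drives
# BOTH the outcome and the propensity of missingness (nonignorable mechanism).
u = rng.normal(0.0, 1.0, n_subj)
x = rng.normal(0.0, 1.0, (n_subj, n_visits))
y = 1.0 + 2.0 * x + u[:, None] + rng.normal(0.0, 0.5, (n_subj, n_visits))
p_miss = 1.0 / (1.0 + np.exp(1.0 - 1.5 * u))      # P(missing) rises with u
observed = rng.random(y.shape) > p_miss[:, None]

# Step 1 (crude stand-in for the GLMM fit): summarise each subject's
# missingness propensity by the empirical logit of their observed fraction.
frac = observed.mean(axis=1)
uhat = np.log((frac * n_visits + 0.5) / ((1.0 - frac) * n_visits + 0.5))

# Step 2: fit the outcome model adjusting for the predicted "random effect".
X = np.column_stack([np.ones(observed.sum()),
                     x[observed],
                     np.repeat(uhat, observed.sum(axis=1))])
beta = np.linalg.lstsq(X, y[observed], rcond=None)[0]
```

    The covariate effect (true value 2.0) is recovered even though a large share of outcomes is missing not at random.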

  11. Flowing Hot or Cold: User-Friendly Computational Models of Terrestrial and Planetary Lava Channels and Lakes

    NASA Astrophysics Data System (ADS)

    Sakimoto, S. E. H.

    2016-12-01

    Planetary volcanism has redefined what is considered volcanism. "Magma" now may be considered to be anything from the molten rock familiar at terrestrial volcanoes to cryovolcanic ammonia-water mixes erupted on an outer solar system moon. However, even with unfamiliar compositions and source mechanisms, we find familiar landforms such as volcanic channels, lakes, flows, and domes and thus a multitude of possibilities for modeling. As on Earth, these landforms lend themselves to analysis for estimating storage, eruption and/or flow rates. This has potential pitfalls, as extension of the simplified analytic models we often use for terrestrial features into unfamiliar parameter space might yield misleading results. Our most commonly used tools for estimating flow and cooling have tended to lag significantly behind state-of-the-art; the easiest methods to use are neither realistic or accurate, but the more realistic and accurate computational methods are not simple to use. Since the latter computational tools tend to be both expensive and require a significant learning curve, there is a need for a user-friendly approach that still takes advantage of their accuracy. One method is use of the computational package for generation of a server-based tool that allows less computationally inclined users to get accurate results over their range of input parameters for a given problem geometry. A second method is to use the computational package for the generation of a polynomial empirical solution for each class of flow geometry that can be fairly easily solved by anyone with a spreadsheet. In this study, we demonstrate both approaches for several channel flow and lava lake geometries with terrestrial and extraterrestrial examples and compare their results. Specifically, we model cooling rectangular channel flow with a yield strength material, with applications to Mauna Loa, Kilauea, Venus, and Mars. 
This approach also shows promise with model applications to lava lakes, magma flow through cracks, and volcanic dome formation.

  12. Finite Element Method (FEM) Modeling of Freeze-drying: Monitoring Pharmaceutical Product Robustness During Lyophilization.

    PubMed

    Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V

    2015-12-01

    Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is transferring the process parameters developed in a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or by sublimation tests, and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients will vary at different lyophilization scales; hence, we present an approach to use appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; and because of its simple and minimalistic approach, it will also be a less capital-intensive path with minimal use of expensive drug substance/active material.
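
    PASSAGE's FEM model is far richer, but the pseudo-steady energy balance underlying primary-drying-time estimates can be sketched. All numbers below (vial heat transfer coefficient Kv, vial area, ice mass) are invented for illustration, not taken from the paper:

```python
DH_SUB = 2.84e6  # heat of sublimation of ice, J/kg

def primary_drying_time(ice_mass, kv, area, t_shelf, t_product):
    """Pseudo-steady estimate: all heat flowing into the vial goes into
    sublimation, so t_dry = m_ice * dH_sub / (Kv * A * (T_shelf - T_product)).
    Illustrative only; a real model tracks T_product and resistance in time."""
    q = kv * area * (t_shelf - t_product)  # heat flow into the vial, W
    return ice_mass * DH_SUB / q           # seconds to sublime all ice

# 3 g of ice, Kv = 20 W/(m^2 K), shelf at -10 C, product at -25 C
hours = primary_drying_time(3e-3, kv=20.0, area=3.8e-4,
                            t_shelf=263.15, t_product=248.15) / 3600.0
```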

  13. Simple video format for mobile applications

    NASA Astrophysics Data System (ADS)

    Smith, John R.; Miao, Zhourong; Li, Chung-Sheng

    2000-04-01

    With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance, and video mail using pervasive computing devices.

  14. Algorithms for computing the geopotential using a simple density layer

    NASA Technical Reports Server (NTRS)

    Morrison, F.

    1976-01-01

    Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
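
    The crudest form of the numerical cubature idea, summing surface-density blocks, can be sketched and checked against the closed-form potential of a uniform spherical shell. The equal-angle grid below is illustrative; the paper uses 1640 equal-area blocks and switches algorithms by zone:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def layer_potential(point, centers, sigma, areas):
    """Potential of a simple (surface) density layer approximated by
    summing sigma * dA / r over discrete surface blocks."""
    r = np.linalg.norm(centers - point, axis=1)
    return G * np.sum(sigma * areas / r)

# Check: a uniform shell of radius R seen from outside acts like a point
# mass M = sigma * 4*pi*R^2 (shell theorem).
R, sigma = 6.371e6, 1000.0
th = np.linspace(0.0, np.pi, 90)                      # colatitude samples
ph = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)  # longitude samples
TH, PH = np.meshgrid(th, ph)
dA = R**2 * np.sin(TH) * (np.pi / 89) * (2 * np.pi / 180)
centers = np.column_stack([(R * np.sin(TH) * np.cos(PH)).ravel(),
                           (R * np.sin(TH) * np.sin(PH)).ravel(),
                           (R * np.cos(TH)).ravel()])
V = layer_potential(np.array([0.0, 0.0, 2 * R]), centers, sigma, dA.ravel())
V_exact = G * sigma * 4 * np.pi * R**2 / (2 * R)
```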

  15. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    USGS Publications Warehouse

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
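
    The reference implementation is in C with MPI; a Python analogue using threads and a shared queue illustrates the same worker-initiated pulling, where fast nodes naturally take on more tasks (names below are invented for the sketch):

```python
import queue
import threading

def run_bag_of_tasks(tasks, n_workers, work):
    """On-demand allocation: each worker pulls its next task from a shared
    bag when it becomes idle, mimicking the paper's MPI bag-of-tasks scheme."""
    bag = queue.Queue()
    for t in tasks:
        bag.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = bag.get_nowait()  # worker-initiated pull
            except queue.Empty:
                return                # bag is empty: this worker is done
            r = work(t)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

out = run_bag_of_tasks(range(100), n_workers=4, work=lambda t: t * t)
```

    Because no task is pre-assigned, heterogeneity within and between workers cannot leave one of them holding a long tail of leftover work, which is why the makespan stays reliably short.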

  16. Validity of a Simple Method for Measuring Force-Velocity-Power Profile in Countermovement Jump.

    PubMed

    Jiménez-Reyes, Pedro; Samozino, Pierre; Pareja-Blanco, Fernando; Conceição, Filipe; Cuadrado-Peñafiel, Víctor; González-Badillo, Juan José; Morin, Jean-Benoît

    2017-01-01

    To analyze the reliability and validity of a simple computation method to evaluate force (F), velocity (v), and power (P) output during a countermovement jump (CMJ) suitable for use in field conditions and to verify the validity of this computation method to compute the CMJ force-velocity (F-v) profile (including unloaded and loaded jumps) in trained athletes. Sixteen high-level male sprinters and jumpers performed maximal CMJs under 6 different load conditions (0-87 kg). A force plate sampling at 1000 Hz was used to record vertical ground-reaction force and derive vertical-displacement data during CMJ trials. For each condition, mean F, v, and P of the push-off phase were determined from both force-plate data (reference method) and simple computation measures based on body mass, jump height (from flight time), and push-off distance and used to establish the linear F-v relationship for each individual. Mean absolute bias values were 0.9% (± 1.6%), 4.7% (± 6.2%), 3.7% (± 4.8%), and 5% (± 6.8%) for F, v, P, and the slope of the F-v relationship (SFv), respectively. Both methods showed high correlations for F-v-profile-related variables (r = .985-.991). Finally, all variables computed from the simple method showed high reliability, with ICC > .980 and CV < 1.0%. These results suggest that the simple method presented here is valid and reliable for computing CMJ force, velocity, power, and F-v profiles in athletes and could be used in practice under field conditions when body mass, push-off distance, and jump height are known.
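
    The field method needs only body mass, jump height, and push-off distance. A sketch using the Samozino-style mean-value equations commonly associated with this simple method (the exact expressions and constants are my reading, not quoted from the abstract):

```python
import math

G_ACC = 9.81  # gravitational acceleration, m/s^2

def simple_fvp(mass, jump_height, push_off_distance):
    """Samozino-style simple estimates of mean force, velocity and power
    during the push-off phase, from body mass (kg), jump height (m),
    and push-off distance (m). Illustrative sketch of the field method."""
    f = mass * G_ACC * (jump_height / push_off_distance + 1.0)  # mean force, N
    v = math.sqrt(G_ACC * jump_height / 2.0)                    # mean velocity, m/s
    return f, v, f * v                                          # mean power, W

# Example: 75 kg athlete, 0.40 m jump height, 0.35 m push-off distance
f, v, p = simple_fvp(mass=75.0, jump_height=0.40, push_off_distance=0.35)
```

    Repeating the computation across several added loads yields the points from which the linear F-v profile and its slope are fitted.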

  17. Zombie states for description of structure and dynamics of multi-electron systems

    NASA Astrophysics Data System (ADS)

    Shalashilin, Dmitrii V.

    2018-05-01

    Canonical Coherent States (CSs) of the harmonic oscillator have been extensively used as a basis in a number of computational methods of quantum dynamics. However, generalising such techniques for fermionic systems is difficult because Fermionic Coherent States (FCSs) require the complicated algebra of Grassmann numbers, which is not well suited for numerical calculations. This paper introduces a coherent antisymmetrised superposition of "dead" and "alive" electronic states called here a Zombie State (ZS), which can be used in the manner of FCSs but without Grassmann algebra. Instead, for Zombie States, a very simple sign-changing rule is used in the definition of creation and annihilation operators. Then, calculation of electronic structure Hamiltonian matrix elements between two ZSs becomes very simple, and a straightforward technique for time propagation of fermionic wave functions can be developed. By analogy with the existing methods based on Canonical Coherent States of the harmonic oscillator, fermionic wave functions can be propagated using a set of randomly selected Zombie States as a basis. As a proof of principle, the proposed Coupled Zombie States approach is tested on a simple example, showing that the technique is exact.

  18. Simple Deterministically Constructed Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rodan, Ali; Tiňo, Peter

    A large number of models for time series processing, forecasting, or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can potentially be exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad hoc randomized model-building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
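
    A minimal sketch of such a deterministically constructed reservoir: all recurrent weights share one value r and form a single cycle, and input weights share one magnitude v (in the original, the input sign pattern is also fixed deterministically, e.g. from digits of an irrational number; here it defaults to all-positive, and the parameter values are illustrative):

```python
import numpy as np

def scr_states(u, n=50, r=0.5, v=0.1, signs=None):
    """Run a simple-cycle reservoir (SCR-style) over input sequence u and
    return the state trajectory. Every recurrent weight equals r, arranged
    in one cycle; input weights all have magnitude v."""
    W = np.zeros((n, n))
    W[np.arange(1, n), np.arange(n - 1)] = r  # cycle edges: unit i-1 -> i
    W[0, n - 1] = r                           # close the cycle
    w_in = v * (np.ones(n) if signs is None else signs)
    x = np.zeros(n)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)       # fixed state-transition map
        states.append(x.copy())
    return np.array(states)

states = scr_states(np.sin(np.arange(200) * 0.1))
```

    With |r| < 1 the cycle matrix has spectral radius below one, so the reservoir retains the echo state property; a linear readout trained on `states` completes the model.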

  19. Shot-Noise Limited Single-Molecule FRET Histograms: Comparison between Theory and Experiments†

    PubMed Central

    Nir, Eyal; Michalet, Xavier; Hamadani, Kambiz M.; Laurence, Ted A.; Neuhauser, Daniel; Kovchegov, Yevgeniy; Weiss, Shimon

    2011-01-01

    We describe a simple approach and present a straightforward numerical algorithm to compute the best-fit shot-noise limited proximity ratio histogram (PRH) in single-molecule fluorescence resonance energy transfer diffusion experiments. The key ingredient is the use of the experimental burst size distribution, as obtained after a burst search through the photon data streams. We show how the use of an alternating laser excitation scheme and a correspondingly optimized burst search algorithm eliminates several potential artifacts affecting the calculation of the best-fit shot-noise limited PRH. The algorithm is tested extensively on simulations and simple experimental systems. We find that dsDNA data exhibit a wider PRH than expected from shot noise alone, which we tentatively account for by assuming a small Gaussian distribution of distances with an average standard deviation of 1.6 Å. Finally, we briefly mention the results of a forthcoming publication and illustrate them with a simple two-state model system (a DNA hairpin), for which the kinetic transition rates between the open and closed conformations are extracted. PMID:17078646
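    The core computation described above can be sketched as follows: for a burst of N detected photons with mean proximity ratio E, the acceptor count is binomially distributed, so the shot-noise limited PRH is a sum of binomial distributions over the empirical burst-size distribution. The function name, binning, and parameters below are hypothetical illustrations, not the authors' implementation.

    ```python
    from math import comb

    def shot_noise_prh(burst_sizes, E, bins=20):
        """Shot-noise limited proximity ratio histogram.

        For a burst of N photons and mean proximity ratio E, the acceptor
        count n_a ~ Binomial(N, E), so PR = n_a / N.  Accumulating the
        binomial probabilities over the empirical burst-size distribution
        gives the expected histogram in the absence of any broadening
        beyond shot noise.
        """
        hist = [0.0] * bins
        for N in burst_sizes:
            for na in range(N + 1):
                p = comb(N, na) * E ** na * (1 - E) ** (N - na)
                idx = min(int((na / N) * bins), bins - 1)
                hist[idx] += p
        total = sum(hist)
        return [h / total for h in hist]
    ```

    An experimental histogram wider than this prediction signals additional broadening, which is how the excess width of the dsDNA data above is diagnosed.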

  20. Towards the computation of time-periodic inertial range dynamics

    NASA Astrophysics Data System (ADS)

    van Veen, L.; Vela-Martín, A.; Kawahara, G.

    2018-04-01

    We explore the possibility of computing simple invariant solutions, such as travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPUs rather than conventional CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis and present a long-period, large-amplitude unstable periodic orbit filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics appear to capture a regeneration process of the large-scale vortices.
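    The basic machinery for computing a periodic orbit, Newton iteration on a shooting residual, can be illustrated on a toy ODE. The sketch below uses the van der Pol oscillator as a hypothetical stand-in for the (far larger) LES system; the phase condition, step counts, and tolerances are illustrative assumptions.

    ```python
    import numpy as np

    def vdp(s, mu=1.0):
        """Van der Pol oscillator, a two-dimensional stand-in for the flow."""
        x, y = s
        return np.array([y, mu * (1.0 - x * x) * y - x])

    def flow(s0, T, n=2000):
        """Classic RK4 integration of the toy ODE for time T."""
        h = T / n
        s = np.array(s0, dtype=float)
        for _ in range(n):
            k1 = vdp(s)
            k2 = vdp(s + 0.5 * h * k1)
            k3 = vdp(s + 0.5 * h * k2)
            k4 = vdp(s + h * k3)
            s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return s

    def residual(z):
        """Shooting residual: the orbit must return to its start after one
        period.  Unknowns z = (x0, T); the phase is fixed by starting on
        the section y = 0."""
        x0, T = z
        return flow([x0, 0.0], T) - np.array([x0, 0.0])

    def newton_shoot(z0, iters=8, eps=1e-7):
        """Newton iteration with a finite-difference Jacobian."""
        z = np.array(z0, dtype=float)
        for _ in range(iters):
            r = residual(z)
            J = np.empty((2, 2))
            for j in range(2):
                dz = z.copy()
                dz[j] += eps
                J[:, j] = (residual(dz) - r) / eps
            z = z - np.linalg.solve(J, r)
        return z

    z = newton_shoot([2.0, 6.6])   # converges to period ~6.66 for mu = 1
    ```

    For the LES system the same Newton-on-a-return-map structure applies, but each residual evaluation is a full turbulence simulation, which is what drives the computation onto GPUs.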
