Sample records for computer experiment method

  1. Response Surface Model Building Using Orthogonal Arrays for Computer Experiments

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Braun, Robert D.; Moore, Arlene A.; Lepsch, Roger A.

    1997-01-01

    This study investigates response surface methods for computer experiments and discusses some of the approaches available. Orthogonal arrays constructed for computer experiments are studied and an example application to a technology selection and optimization study for a reusable launch vehicle is presented.
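
    As a hedged illustration of the general workflow described above (not the reusable-launch-vehicle study itself), the following Python sketch runs a toy simulator at the points of a three-level factorial design and fits a quadratic response surface by least squares; the simulator, design, and coefficients are stand-ins, not values from the paper.

      # Hedged sketch (hypothetical simulator and design, not the paper's study):
      # run a deterministic code at designed input settings and fit a quadratic
      # response surface by least squares.
      import itertools
      import numpy as np

      def simulator(x1, x2):
          # Stand-in for an expensive deterministic computer code.
          return 3.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2 + 0.5 * x2 ** 2

      # Full three-level factorial in coded units (-1, 0, +1). For two factors this
      # is a 9-run design; with more factors an orthogonal array would select a
      # balanced subset of the full factorial.
      design = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=2)))
      y = np.array([simulator(x1, x2) for x1, x2 in design])

      # Quadratic response-surface terms: 1, x1, x2, x1*x2, x1^2, x2^2.
      X = np.column_stack([
          np.ones(len(design)),
          design[:, 0], design[:, 1],
          design[:, 0] * design[:, 1],
          design[:, 0] ** 2, design[:, 1] ** 2,
      ])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fitted response-surface coefficients:", np.round(coef, 3))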

  2. Modelling and Simulation as a Recognizing Method in Education

    ERIC Educational Resources Information Center

    Stoffa, Veronika

    2004-01-01

    Computer animation-simulation models of complex processes and events, which are the method of instruction, can be an effective didactic device. Gaining deeper knowledge about the objects modelled helps in planning simulation experiments oriented toward the processes and events being researched. Animation experiments realized on multimedia computers can aid easier…

  3. Analyzing the security of an existing computer system

    NASA Technical Reports Server (NTRS)

    Bishop, M.

    1986-01-01

    Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.

  4. The Relationship Between Computer Experience and Computerized Cognitive Test Performance Among Older Adults

    PubMed Central

    2013-01-01

    Objective. This study compared the relationship between computer experience and performance on computerized cognitive tests and a traditional paper-and-pencil cognitive test in a sample of older adults (N = 634). Method. Participants completed computer experience and computer attitudes questionnaires, three computerized cognitive tests (Useful Field of View (UFOV) Test, Road Sign Test, and Stroop task) and a paper-and-pencil cognitive measure (Trail Making Test). Multivariate analysis of covariance was used to examine differences in cognitive performance across the four measures between those with and without computer experience after adjusting for confounding variables. Results. Although computer experience had a significant main effect across all cognitive measures, the effect sizes were similar. After controlling for computer attitudes, the relationship between computer experience and UFOV was fully attenuated. Discussion. Findings suggest that computer experience is not uniquely related to performance on computerized cognitive measures compared with paper-and-pencil measures. Because the relationship between computer experience and UFOV was fully attenuated by computer attitudes, this may imply that motivational factors are more influential to UFOV performance than computer experience. Our findings support the hypothesis that computer use is related to cognitive performance, and this relationship is not stronger for computerized cognitive measures. Implications and directions for future research are provided. PMID:22929395

  5. Comparing In-Class and Out-of-Class Computer-Based Tests to Traditional Paper-and-Pencil Tests in Introductory Psychology Courses

    ERIC Educational Resources Information Center

    Frein, Scott T.

    2011-01-01

    This article describes three experiments comparing paper-and-pencil tests (PPTs) to computer-based tests (CBTs) in terms of test method preferences and student performance. In Experiment 1, students took tests using three methods: PPT in class, CBT in class, and CBT at the time and place of their choosing. Results indicate that test method did not…

  6. Computer Literacy Learning Emotions of ODL Teacher-Students

    ERIC Educational Resources Information Center

    Esterhuizen, Hendrik D.; Blignaut, A. Seugnet; Els, Christo J.; Ellis, Suria M.

    2012-01-01

    This paper addresses the affective human experiences in terms of the emotions of South African teacher-students while attaining computer competencies for teaching and learning, and for ODL. The full mixed method study investigated how computers contribute towards affective experiences of disadvantaged teacher-students. The purposive sample related…

  7. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  8. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
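
    The following Python sketch illustrates, under toy assumptions, the general pattern the abstract describes: fit a low-order polynomial graduating function to a small number of case runs of an expensive code, then propagate imperfect knowledge of an input level through the cheap surrogate by Monte Carlo. The stand-in code, input distribution, and run counts are placeholders, not values from the paper.

      # Hedged sketch of the general idea (not the paper's Shuttle debris study):
      # approximate an expensive code with a low-order polynomial "graduating
      # function", then propagate input uncertainty through the cheap surrogate.
      import numpy as np

      rng = np.random.default_rng(0)

      def expensive_code(x):
          # Stand-in for the underlying computational code.
          return np.sin(x) + 0.1 * x ** 2

      # A minimal set of case runs at designed input levels.
      x_runs = np.linspace(-2.0, 2.0, 7)
      y_runs = expensive_code(x_runs)

      # Truncated-Taylor-series-like surrogate: a cubic polynomial fit.
      poly = np.polynomial.Polynomial.fit(x_runs, y_runs, deg=3)

      # Imperfect knowledge of the independent-variable level: x ~ Normal(mu, sigma).
      x_samples = rng.normal(loc=0.5, scale=0.2, size=100_000)
      y_samples = poly(x_samples)

      print("prediction mean: %.4f" % y_samples.mean())
      print("prediction uncertainty (std) from input uncertainty: %.4f" % y_samples.std())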

  9. Towards Better Human Robot Interaction: Understand Human Computer Interaction in Social Gaming Using a Video-Enhanced Diary Method

    NASA Astrophysics Data System (ADS)

    See, Swee Lan; Tan, Mitchell; Looi, Qin En

    This paper presents findings from descriptive research on social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and gamers' decision-making processes can be elicited and studied during human-computer interaction. This is new information that we should consider, as it can help us build better human-computer interfaces and human-robot interfaces in the future.

  10. Tensor methodology and computational geometry in direct computational experiments in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia

    2017-07-01

    The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is embedded naturally in the finite-element operations used to construct numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is treated as the elementary computing object. The tensor representation of computational objects yields a direct, linear, and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of algorithms built from numerical procedures with natural parallelism. It is shown that the advantages of the proposed approach are achieved, among other means, by representing the motion of large particles of a continuous medium in dual coordinate systems and by carrying out computing operations in the projections of these two coordinate systems with direct and inverse transformations. A new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is thus proposed.

  11. High resolution frequency analysis techniques with application to the redshift experiment

    NASA Technical Reports Server (NTRS)

    Decher, R.; Teuber, D.

    1975-01-01

    High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of 0.00001 Hz is required to measure a slowly varying, low frequency signal of approximately 1 Hz. Major building blocks include fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter technique. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
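
    A hedged sketch of one such building block follows: a coarse FFT peak is refined by a golden section search on the DTFT magnitude, combining two of the techniques listed above. The signal, record length, and sample rate below are illustrative choices, not the flight experiment's values or its actual processing chain.

      # Hedged sketch of coarse FFT peak location followed by golden-section
      # refinement of the DTFT magnitude; all signal parameters are toy values.
      import numpy as np

      fs = 10.0                          # sample rate, Hz (toy value)
      t = np.arange(0, 2000.0, 1.0 / fs)
      f_true = 1.0000123                 # ~1 Hz signal (toy value)
      x = np.cos(2 * np.pi * f_true * t)

      # Coarse estimate limited by the FFT bin spacing fs / N.
      spec = np.abs(np.fft.rfft(x))
      freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
      f_coarse = freqs[np.argmax(spec)]

      def dtft_magnitude(f):
          # Magnitude of the DTFT of x evaluated at a single frequency f.
          return np.abs(np.exp(-2j * np.pi * f * t) @ x)

      # Golden-section search for the maximizer within one bin of the coarse peak.
      gr = (np.sqrt(5.0) - 1.0) / 2.0
      a, b = f_coarse - fs / len(x), f_coarse + fs / len(x)
      for _ in range(60):
          c, d = b - gr * (b - a), a + gr * (b - a)
          if dtft_magnitude(c) > dtft_magnitude(d):
              b = d
          else:
              a = c
      print("refined frequency estimate: %.7f Hz" % ((a + b) / 2.0))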

  12. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
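
    A much-simplified Python sketch of the idea (not the authors' full pattern-search framework) is given below: known function values are interpolated with a Gaussian-kernel, kriging-style surrogate, and a grid search on the surrogate proposes the next expensive evaluation. The objective, design sites, and correlation parameter are placeholders.

      # Simplified sketch only: krige known function values with a Gaussian-kernel
      # interpolator, then grid-search the cheap surrogate for a candidate minimizer.
      import numpy as np

      def expensive_objective(x):
          # Stand-in for an expensive, derivative-free objective function.
          return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

      # Known function values at a few design sites.
      X = np.linspace(0.0, 1.0, 6)
      y = expensive_objective(X)

      theta, nugget = 25.0, 1e-10      # correlation parameter and jitter (assumed)
      K = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + nugget * np.eye(len(X))
      weights = np.linalg.solve(K, y)  # simple (zero-mean) kriging weights

      def surrogate(x):
          k = np.exp(-theta * (x[:, None] - X[None, :]) ** 2)
          return k @ weights

      # Grid search on the surrogate proposes the next point to evaluate.
      grid = np.linspace(0.0, 1.0, 1001)
      x_next = grid[np.argmin(surrogate(grid))]
      print("surrogate minimizer (next expensive evaluation): %.3f" % x_next)
      print("true objective there: %.4f" % expensive_objective(x_next))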

  13. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  14. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
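
    For orientation, the sketch below implements the standard expected improvement (EI) baseline that these two records compare against, computed from a surrogate's predictive mean and standard deviation; the EQI and EQIE criteria extend this idea to runs with tunable accuracy and are not reproduced here. The numbers are illustrative only.

      # Hedged sketch of the expected-improvement (EI) baseline for a
      # single-accuracy experiment; the predictive means/stds below are toy values
      # of the kind a kriging model would return.
      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, f_best):
          """EI at candidate points given predictive mean mu, predictive std sigma,
          and the best (smallest) observed value f_best."""
          mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
          z = (f_best - mu) / np.maximum(sigma, 1e-12)
          ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
          return np.where(sigma > 0, np.maximum(ei, 0.0), 0.0)

      mu = np.array([0.20, 0.05, 0.40, 0.10])
      sigma = np.array([0.02, 0.15, 0.30, 0.00])
      print("EI scores:", expected_improvement(mu, sigma, f_best=0.12))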

  15. Statistical Methodologies to Integrate Experimental and Computational Research

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  16. Estimations of global warming potentials from computational chemistry calculations for CH(2)F(2) and other fluorinated methyl species verified by comparison to experiment.

    PubMed

    Blowers, Paul; Hollingshead, Kyle

    2009-05-21

    In this work, the global warming potential (GWP) of methylene fluoride (CH(2)F(2)), or HFC-32, is estimated through computational chemistry methods. We find our computational chemistry approach reproduces well all phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for both partition function calculations in transition-state theory and infrared intensities needed for radiative forcing estimates agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies led to superior results to all other previous heat of reaction estimates and most barrier height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered rotor treatment where appropriate led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials compared to experiment. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared to prior experimental results. This methodology was subsequently used to estimate GWPs for three additional species [methane (CH(4)); fluoromethane (CH(3)F), or HFC-41; and fluoroform (CHF(3)), or HFC-23], where estimations also compare favorably to experimental values.

  17. Research and Teaching: Computational Methods in General Chemistry--Perceptions of Programming, Prior Experience, and Student Outcomes

    ERIC Educational Resources Information Center

    Wheeler, Lindsay B.; Chiu, Jennie L.; Grisham, Charles M.

    2016-01-01

    This article explores how integrating computational tools into a general chemistry laboratory course can influence student perceptions of programming and investigates relationships among student perceptions, prior experience, and student outcomes.

  18. Flow and Turbulence Modeling and Computation of Shock Buffet Onset for Conventional and Supercritical Airfoils

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    1998-01-01

    Flow and turbulence models applied to the problem of shock buffet onset are studied. The accuracy of the interactive boundary layer and the thin-layer Navier-Stokes equations solved with recent upwind techniques using similar transport field equation turbulence models is assessed for standard steady test cases, including conditions having significant shock separation. The two methods are found to compare well in the shock buffet onset region of a supercritical airfoil that involves strong trailing-edge separation. A computational analysis using the interactive-boundary layer has revealed a Reynolds scaling effect in the shock buffet onset of the supercritical airfoil, which compares well with experiment. The methods are next applied to a conventional airfoil. Steady shock-separated computations of the conventional airfoil with the two methods compare well with experiment. Although the interactive boundary layer computations in the shock buffet region compare well with experiment for the conventional airfoil, the thin-layer Navier-Stokes computations do not. These findings are discussed in connection with possible mechanisms important in the onset of shock buffet and the constraints imposed by current numerical modeling techniques.

  19. Learning Experiences in Medical Education.

    ERIC Educational Resources Information Center

    Leggat, Peter A.

    2000-01-01

    Discusses the learning experience from both traditional and computer-assisted instructional methods. Describes the environments in which these methods are effective. Focuses on learning experiences in medical education and describes educational strategies, particularly the 'SPICES' model. Discusses the importance of mentoring in the psychosocial…

  20. Numerical study of the vortex tube reconnection using vortex particle method on many graphics cards

    NASA Astrophysics Data System (ADS)

    Kudela, Henryk; Kosior, Andrzej

    2014-08-01

    Vortex Particle Methods are one of the most convenient ways of tracking the evolution of vorticity. In this article we present a numerical recreation of a real-life experiment concerning the head-on collision of two vortex rings. In the experiment, the evolution and reconnection of the vortex structures is tracked with passive markers (paint particles), which in a viscous fluid do not follow the evolution of the vorticity field. In the numerical computations we show the difference between the evolution of vorticity and the movement of the passive markers. The agreement with the experiment was very good. Because of the very long computation times on a single processor, the Vortex-in-Cell method was implemented on the multicore architecture of graphics cards (GPUs). Vortex Particle Methods are very well suited to parallel computation: since there are myriads of particles in the flow and the same equations of motion have to be solved for each of them, the SIMD architecture used in GPUs is a natural fit. The main disadvantage in this case is the small amount of RAM memory. To overcome this problem we created a multi-GPU implementation of the VIC method. Some remarks on parallel computing are also given in the article.

  1. A method of non-contact reading code based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee the security of computer information exchange between an internal and an external network (a trusted and an un-trusted network), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network-isolation methods. Using a computer monitor, a camera, and other equipment, the information to be exchanged is processed through image coding, generation of a standard image, display and capture of the actual image, homography-matrix calculation, image distortion correction, and decoding with calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data-transfer speed of 24 kb/s is achieved. The experiments show that the algorithm offers high security, high speed, and low information loss, and can meet the daily needs of a confidentiality department to update data effectively and reliably. It solves the difficulty of exchanging computer information between a secret and a non-secret network, and has distinctive originality, practicability, and practical research value.
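
    The geometric-correction step mentioned above can be sketched with OpenCV as follows; this is a hedged illustration of homography estimation and perspective correction only, and the synthetic frame, corner coordinates, and sizes are placeholders rather than values from the paper.

      # Hedged OpenCV sketch of the geometric-correction step: estimate a homography
      # from the detected corners of the displayed code region, then undo the
      # perspective distortion before decoding. All values below are placeholders.
      import cv2
      import numpy as np

      # Stand-in for a camera capture of the monitor showing the coded image.
      captured = np.full((720, 1080, 3), 255, dtype=np.uint8)

      # Corners of the code region as detected in the camera frame (placeholder
      # values), and their target positions in the rectified standard image.
      src_pts = np.float32([[102, 87], [923, 95], [910, 698], [95, 690]])
      dst_pts = np.float32([[0, 0], [1024, 0], [1024, 768], [0, 768]])

      H, _ = cv2.findHomography(src_pts, dst_pts)
      rectified = cv2.warpPerspective(captured, H, (1024, 768))
      print("rectified image ready for decoding:", rectified.shape)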

  2. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies shows that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  3. An experiment for determining the Euler load by direct computation

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.; Stein, Peter A.

    1986-01-01

    A direct algorithm is presented for computing the Euler load of a column from experimental data. The method is based on exact inextensional theory for imperfect columns, which predicts two distinct deflected shapes at loads near the Euler load. The bending stiffness of the column appears in the expression for the Euler load along with the column length; therefore, the experimental data allow a direct computation of the bending stiffness. Experiments on graphite-epoxy columns of rectangular cross-section are reported in the paper. The bending stiffness of each composite column computed from experiment is compared with predictions from laminated plate theory.

  4. Computational simulations of supersonic magnetohydrodynamic flow control, power and propulsion systems

    NASA Astrophysics Data System (ADS)

    Wan, Tian

    This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent, chemically reacting Navier-Stokes equations, the electron energy conservation equation, and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Then, four problems are selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of controlling the re-entry shock with a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first, and the generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and its accuracy and efficiency are necessary when the flow complexity increases.
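
    As a hedged, single-processor illustration of the preconditioned-GMRES pattern mentioned above (not the authors' fully coupled parallel solver), the following SciPy sketch solves a toy sparse system with GMRES and an incomplete-LU preconditioner; the matrix is a stand-in, not a linearized flow operator from the thesis.

      # Hedged SciPy sketch of preconditioned GMRES on a toy sparse system.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 200
      A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      # Incomplete-LU factorization used as the preconditioner M ~ A^{-1}.
      ilu = spla.spilu(A)
      M = spla.LinearOperator((n, n), matvec=ilu.solve)

      x, info = spla.gmres(A, b, M=M, restart=30)
      print("GMRES info:", info, "| residual norm:", np.linalg.norm(b - A @ x))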

  5. BioLab: Using Yeast Fermentation as a Model for the Scientific Method.

    ERIC Educational Resources Information Center

    Pigage, Helen K.; Neilson, Milton C.; Greeder, Michele M.

    This document presents a science experiment demonstrating the scientific method. The experiment consists of testing the fermentation capabilities of yeasts under different circumstances. The experiment is supported with computer software called BioLab which demonstrates yeast's response to different environments. (YDS)

  6. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

    It is shown that in laser experiments performed with an 'imperfect' setup, when instrumental distortions are considerable, sufficiently accurate results can still be obtained by modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function (a 'sister' of the Gaussian curve), proves to be exactly what is required in laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured one.

  7. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.

  8. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is given to estimate them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
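
    The sketch below gives a deliberately simplified, toy version of the L2-calibration idea: choose the calibration parameter that minimizes a discretized L2 distance between field observations and computer-model output. The actual method works with a nonparametric estimate of the physical response surface and comes with consistency theory; the simulator, data, and bounds here are placeholders.

      # Hedged toy sketch of L2 calibration: pick the calibration parameter that
      # minimizes the (discretized) L2 distance between physical observations and
      # the computer-model output over the design region.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def computer_model(x, theta):
          # Simulator with calibration parameter theta (toy stand-in).
          return np.sin(theta * x)

      rng = np.random.default_rng(1)
      x_phys = np.linspace(0.0, np.pi, 40)
      y_phys = np.sin(1.7 * x_phys) + rng.normal(0.0, 0.05, x_phys.size)  # field data

      def l2_discrepancy(theta):
          return np.mean((y_phys - computer_model(x_phys, theta)) ** 2)

      result = minimize_scalar(l2_discrepancy, bounds=(0.5, 3.0), method="bounded")
      print("L2-calibrated theta: %.3f" % result.x)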

  9. Stretching the Traditional Notion of Experiment in Computing: Explorative Experiments.

    PubMed

    Schiaffonati, Viola

    2016-06-01

    Experimentation is today a 'hot' topic in computing. While experiments made with the support of computers, such as computer simulations, have received increasing attention from philosophers of science and technology, questions such as "what does it mean to do experiments in computer science and engineering and what are their benefits?" have emerged only recently as central to the debate over the disciplinary status of the field. In this work we aim to show, also by means of paradigmatic examples, how the traditional notion of controlled experiment should be revised to take into account a part of the experimental practice in computing, along the lines of experimentation as exploration. Taking inspiration from the discussion of exploratory experimentation in the philosophy of science (experimentation that is not theory-driven), we advance the idea of explorative experiments which, although not new, can contribute to enlarging the debate about the nature and role of experimental methods in computing. In order to further refine this concept, we recast explorative experiments as socio-technical experiments that test new technologies in their socio-technical contexts. We suggest that, when experiments are explorative, control should be understood in an a posteriori form, in opposition to the a priori form that usually applies in traditional experimental contexts.

  10. A meta-analysis of outcomes from the use of computer-simulated experiments in science education

    NASA Astrophysics Data System (ADS)

    Lejeune, John Van

    The purpose of this study was to synthesize the findings from existing research on the effects of computer-simulated experiments on students in science education. Results from 40 reports were integrated by the process of meta-analysis to examine the effect of computer-simulated experiments and interactive videodisc simulations on student achievement and attitudes. Findings indicated significant positive differences in both low-level and high-level achievement of students who used computer-simulated experiments and interactive videodisc simulations as compared to students who used more traditional learning activities. No significant differences were found in retention, in student attitudes toward the subject, or in attitudes toward the educational method. Based on the findings of this study, computer-simulated experiments and interactive videodisc simulations should be used to enhance students' learning in science, especially in cases where the use of traditional laboratory activities is expensive, dangerous, or impractical.

  11. Experience with Aero- and Fluid-Dynamic Testing for Engineering and CFD Validation

    NASA Technical Reports Server (NTRS)

    Ross, James C.

    2016-01-01

    Ever since computations have been used to simulate aerodynamics, the need to ensure that the computations adequately represent real life has followed. Many experiments have been performed specifically for validation, and as computational methods have improved, so have the validation experiments. Validation is also a moving target because computational methods improve, requiring validation for the new aspects of flow physics that the computations aim to capture. Concurrently, new measurement techniques are being developed that can help capture more detailed flow features; pressure-sensitive paint (PSP) and particle image velocimetry (PIV) come to mind. This paper will present various wind-tunnel tests the author has been involved with and how they were used for validation of various kinds of CFD. A particular focus is the application of advanced measurement techniques to flow fields (and geometries) that had proven difficult to predict computationally. Many of these difficult flow problems arose from engineering and development problems that needed to be solved for a particular vehicle or research program. In some cases the experiments required to solve the engineering problems were refined to provide valuable CFD validation data in addition to the primary engineering data. All of these experiments have provided physical insight and validation data for a wide range of aerodynamic and acoustic phenomena for vehicles ranging from tractor-trailers to crewed spacecraft.

  12. Post-Fisherian Experimentation: From Physical to Virtual

    DOE PAGES

    Jeff Wu, C. F.

    2014-04-24

    Fisher's pioneering work in design of experiments has inspired further work with broader applications, especially in industrial experimentation. Three topics in physical experiments are discussed: principles of effect hierarchy, sparsity, and heredity for factorial designs, a new method called CME for de-aliasing aliased effects, and robust parameter design. The recent emergence of virtual experiments on a computer is reviewed. Here, some major challenges in computer experiments, which must go beyond Fisherian principles, are outlined.

  13. Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.

    2010-12-01

    Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Guala 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science, as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker's (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker's account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments, which is to say that there is no significant ontological difference between computer and traditional experimentation. Parker's second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as 'thought experiments' as well as computer experiments) and traditional 'concrete' ones. Second, I examine the notion of materiality (i.e., the material commonality between object and target systems) and some arguments for the claim that materiality entails some inferential advantage to traditional experimentation. I maintain that Parker's account of the ontology of computer simulations has some interesting though potentially problematic implications regarding conventional distinctions between abstract and concrete methods of inquiry. With respect to her account of materiality, I outline and defend an alternative account, posited by Mary Morgan (2002, 2003, 2005), which holds that ontological similarity between target and object systems confers some epistemological advantage to traditional forms of experimental inquiry.

  14. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    ERIC Educational Resources Information Center

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  15. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  16. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  17. Linear programming computational experience with onyx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atrek, E.

    1994-12-31

    ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.

  18. Freshman year computer engineering students' experiences for flipped physics lab class: An action research

    NASA Astrophysics Data System (ADS)

    Akı, Fatma Nur; Gürel, Zeynep

    2017-02-01

    The purpose of this research is to determine university students' learning experiences in a flipped physics laboratory class. The research was carried out during the fall semester of 2015 in the Computer Engineering Department of Istanbul Commerce University. The action research method, a qualitative research design for teachers, was used. The participants are ten people: seven freshman-year and three junior-year students of the Computer Engineering Department. The research data were collected at the end of the semester with a focus-group interview comprising structured and open-ended questions, and the data were evaluated with categorical content analysis. According to the results, the students had both similar and differing learning experiences with the flipped education method in the physics laboratory class.

  19. Computational and mathematical methods in brain atlasing.

    PubMed

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  20. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650

  1. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled due to slip are presented. It is shown that the use of computer vision allows one to correct the erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. The results of an experiment on mobile robot navigation using this control system are presented.

  2. Using Computer-Based "Experiments" in the Analysis of Chemical Reaction Equilibria

    ERIC Educational Resources Information Center

    Li, Zhao; Corti, David S.

    2018-01-01

    The application of the Reaction Monte Carlo (RxMC) algorithm to standard textbook problems in chemical reaction equilibria is discussed. The RxMC method is a molecular simulation algorithm for studying the equilibrium properties of reactive systems, and therefore provides the opportunity to develop computer-based "experiments" for the…

  3. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    The computer-generated hologram (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used to calculate the holograms of points at different depth positions, instead of using layer-based methods. The proposed method is suitable for arbitrarily sampled objects, with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.
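
    For context, the following numpy sketch shows the naive point-source (zone-plate) CGH summation that LUT and depth-compensation schemes are designed to accelerate; it is not the authors' optimized algorithm, and the wavelength, pixel pitch, and object points are placeholder values.

      # Hedged sketch of the baseline point-source (zone-plate) CGH computation;
      # this is the naive reference calculation, not the paper's optimized method.
      import numpy as np

      wavelength = 532e-9                 # m (assumed)
      pitch = 8e-6                        # hologram pixel pitch, m (assumed)
      nx = ny = 512
      x = (np.arange(nx) - nx / 2) * pitch
      y = (np.arange(ny) - ny / 2) * pitch
      X, Y = np.meshgrid(x, y)

      # Object points: (x0, y0, depth z, amplitude) - toy values.
      points = [(-0.2e-3, 0.1e-3, 0.10, 1.0),
                (0.3e-3, -0.2e-3, 0.12, 0.8)]

      field = np.zeros((ny, nx), dtype=complex)
      for x0, y0, z, amp in points:
          # Fresnel approximation: each point contributes a zone-plate phase.
          r2 = (X - x0) ** 2 + (Y - y0) ** 2
          field += amp * np.exp(1j * np.pi * r2 / (wavelength * z))

      hologram = np.angle(field)          # phase-only CGH
      print("hologram shape:", hologram.shape)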

  4. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, M.S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device. 27 figs.

  5. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  6. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  7. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    2003-08-19

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  8. Control mechanism of double-rotator-structure ternary optical computer

    NASA Astrophysics Data System (ADS)

    Kai, SONG; Liping, YAN

    2017-03-01

    The double-rotator-structure ternary optical processor (DRSTOP) has two key characteristics, massively parallel data-bit computing and a reconfigurable processor, which allow it to handle thousands of data bits in parallel and to run much faster than conventional computers and other optical computing systems to date. In order to put the DRSTOP into practical application, this paper establishes a series of methods, namely a task classification method, a data-bit allocation method, a control information generation method, a control information formatting and sending method, and a decoded-results retrieval method, among others. These methods form the control mechanism of the DRSTOP and turn it into an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the tension between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designs a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. The experimental results show that the control mechanism is correct, feasible, and efficient.

  9. Computer Mediated Communication: Online Instruction and Interactivity.

    ERIC Educational Resources Information Center

    Lavooy, Maria J.; Newlin, Michael H.

    2003-01-01

    Explores the different forms and potential applications of computer mediated communication (CMC) for Web-based and Web-enhanced courses. Based on their experiences with three different Web courses (Research Methods in Psychology, Statistical Methods in Psychology, and Basic Learning Processes) taught repeatedly over the last five years, the…

  10. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial

    PubMed Central

    Kron, Frederick W.; Fetters, Michael D.; Scerbo, Mark W.; White, Casey B.; Lypson, Monica L.; Padilla, Miguel A.; Gliva-McConvey, Gayle A.; Belfore, Lee A.; West, Temple; Wallace, Amelia M.; Guetterman, Timothy C.; Schleicher, Lauren S.; Kennedy, Rebecca A.; Mangrulkar, Rajesh S.; Cleary, James F.; Marsella, Stacy C.; Becker, Daniel M.

    2016-01-01

    Objectives To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group’s experiences and learning preferences. Methods A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR’s intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. Results MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. Conclusions MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. Practice Implications MPathic-VR’s virtual human simulation offers an effective and engaging means of advanced communication training. PMID:27939846

  11. Efficient experimental design for uncertainty reduction in gene regulatory networks.

    PubMed

    Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R

    2015-01-01

    An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.

  12. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515

  13. A Method for Selection of Appropriate Assistive Technology for Computer Access

    ERIC Educational Resources Information Center

    Jenko, Mojca

    2010-01-01

    Assistive technologies (ATs) for computer access enable people with disabilities to be included in the information society. Current methods for assessment and selection of the most appropriate AT for each individual are nonstandardized, lengthy, subjective, and require substantial clinical experience of a multidisciplinary team. This manuscript…

  14. Quake Final Video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Critical infrastructure around the world is at constant risk from earthquakes. Most of these critical structures were designed using archaic seismic simulation methods built for the early digital computers of the 1970s. Idaho National Laboratory's Seismic Research Group is working to modernize these simulation methods through computational research and large-scale laboratory experiments.

  15. Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cale, James; Johnson, Brian; Dall'Anese, Emiliano

    This paper introduces a potential approach for mitigating the effects of communication delays between multiple closed-loop hardware-in-the-loop experiments that are virtually connected yet physically separated. The approach consists of an analytical method for compensating communication delays, along with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers that compensate for measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted in laboratories separated by more than 100 km.

  16. Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments

    DOE PAGES

    Cale, James; Johnson, Brian; Dall'Anese, Emiliano; ...

    2018-03-30

    This paper introduces a potential approach for mitigating the effects of communication delays between multiple closed-loop hardware-in-the-loop experiments that are virtually connected yet physically separated. The approach consists of an analytical method for compensating communication delays, along with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers that compensate for measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted in laboratories separated by more than 100 km.

  17. Computations in Plasma Physics.

    ERIC Educational Resources Information Center

    Cohen, Bruce I.; Killeen, John

    1983-01-01

    Discusses contributions of computers to research in magnetic and inertial-confinement fusion, charged-particle-beam propagation, and space sciences. Considers use in design/control of laboratory and spacecraft experiments and in data acquisition; and reviews major plasma computational methods and some of the important physics problems they…

  18. Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.

    PubMed

    Eddy, Sean R

    2014-01-01

    Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.

  19. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    1999-10-26

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  20. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    2001-06-05

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  1. Solution of the Schrodinger Equation for a Diatomic Oscillator Using Linear Algebra: An Undergraduate Computational Experiment

    ERIC Educational Resources Information Center

    Gasyna, Zbigniew L.

    2008-01-01

    A computational experiment is proposed in which a linear algebra method is applied to the solution of the Schrodinger equation for a diatomic oscillator. Calculations of the vibration-rotation spectrum for the HCl molecule are presented and the results show excellent agreement with experimental data. (Contains 1 table and 1 figure.)
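
    The linear-algebra approach amounts to representing the Hamiltonian as a matrix and diagonalizing it. The sketch below illustrates this for a dimensionless harmonic oscillator discretized by finite differences (eigenvalues should approach n + 1/2); it is only a minimal stand-in for the article's HCl calculation, which would use the molecule's reduced mass and potential.

```python
# Minimal sketch of the linear-algebra approach: discretize the Hamiltonian on a
# grid and diagonalize it.  A dimensionless harmonic oscillator is used here for
# clarity; the article's HCl calculation would use physical parameters instead.
import numpy as np

n, xmax = 1000, 10.0                    # grid points and half-width of the box
x = np.linspace(-xmax, xmax, n)
h = x[1] - x[0]

# Kinetic energy: central-difference approximation of -(1/2) d^2/dx^2
T = (np.diag(np.full(n, 1.0)) - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / h**2
V = np.diag(0.5 * x**2)                 # harmonic potential in dimensionless units

E, psi = np.linalg.eigh(T + V)          # eigenvalues and eigenvectors
print(E[:4])                            # ~[0.5, 1.5, 2.5, 3.5]
```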

  2. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to evaluate relatively the usability of software and device operations on different computer systems. Experiments using three different systems, in this case a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
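
    As a small illustration of the two rates defined above, the snippet below computes RDI = DI/T and RCI = CI/T for two hypothetical systems performing the same task; the DI, CI and time values are invented for the example and are not taken from the paper's experiments.

```python
# Tiny illustration of the two usability rates defined above.  The DI and CI
# values and task times are hypothetical; in practice they come from measured
# information transmission at the software and task-content levels.
def usability_rates(di_bits, ci_bits, task_time_s):
    rdi = di_bits / task_time_s   # device independent information transmission rate
    rci = ci_bits / task_time_s   # computer independent information transmission rate
    return rdi, rci

# Comparing two hypothetical systems performing the same graphical input task:
for name, di, ci, t in [("system A", 120.0, 80.0, 60.0), ("system B", 120.0, 80.0, 45.0)]:
    rdi, rci = usability_rates(di, ci, t)
    print(f"{name}: RDI = {rdi:.2f} bit/s, RCI = {rci:.2f} bit/s")
```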

  3. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holograms (CGHs) for real-time holographic video display pose a great challenge because of the high spatial-bandwidth product (SBP) they require. This paper builds on the point-cloud method and exploits two properties: the reversibility of Fresnel diffraction along the propagation direction, and the spatial symmetry of the fringe pattern of a point source (the Gabor zone plate), which can therefore serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by an acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on a Liquid Crystal on Silicon (LCOS) device were set up to demonstrate the validity of the proposed method. While preserving the quality of the 3D reconstruction, the proposed method can be applied to shorten the computation time and improve computational efficiency.

  4. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    PubMed Central

    2010-01-01

    Background The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/. PMID:20482791

  5. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.

    PubMed

    Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin

    2010-05-18

    The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
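
    The regression idea described above can be sketched as follows: if each variant grows exponentially, the log ratio of the two variants is linear in time and its slope estimates the difference in net growth rates. The snippet below is a minimal illustration with hypothetical counts; it is not the vFitness implementation and omits the measurement-error model and dilution-factor handling.

```python
# Minimal sketch of estimating relative fitness of two HIV-1 variants in a
# growth competition assay from all observed time points (not just two).
# Hypothetical data; not the vFitness code.
import numpy as np

days = np.array([0.0, 2.0, 4.0, 6.0, 8.0])                # sampling times
mutant = np.array([1.0e3, 2.6e3, 7.1e3, 1.8e4, 5.0e4])    # copies of variant 1
wildtype = np.array([1.0e3, 3.9e3, 1.6e4, 6.3e4, 2.5e5])  # copies of variant 2

# If each variant grows exponentially, log(mutant/wildtype) is linear in time
# and its slope is the difference in net growth rates (relative fitness).
y = np.log(mutant / wildtype)
slope, intercept = np.polyfit(days, y, 1)
print(f"estimated fitness difference d = {slope:.3f} per day "
      f"(negative means the mutant is less fit)")
```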

  6. Databases, data integration, and expert systems: new directions in mineral resource assessment and mineral exploration

    USGS Publications Warehouse

    McCammon, Richard B.; Ramani, Raja V.; Mozumdar, Bijoy K.; Samaddar, Arun B.

    1994-01-01

    Overcoming future difficulties in searching for ore deposits deeper in the earth's crust will require closer attention to the collection and analysis of more diverse types of data and to more efficient use of current computer technologies. Computer technologies of greatest interest include methods of storage and retrieval of resource information, methods for integrating geologic, geochemical, and geophysical data, and the introduction of advanced computer technologies such as expert systems, multivariate techniques, and neural networks. Much experience has been gained in the past few years in applying these technologies. More experience is needed if they are to be implemented for everyday use in future assessments and exploration.

  7. Measurement of information and communication technology experience and attitudes to e-learning of students in the healthcare professions: integrative review.

    PubMed

    Wilkinson, Ann; While, Alison E; Roberts, Julia

    2009-04-01

    This paper is a report of a review to describe and discuss the psychometric properties of instruments used in healthcare education settings measuring experience and attitudes of healthcare students regarding their information and communication technology skills and their use of computers and the Internet for education. Healthcare professionals are expected to be computer and information literate at registration. A previous review of evaluative studies of computer-based learning suggests that methods of measuring learners' attitudes to computers and computer aided learning are problematic. A search of eight health and social science databases located 49 papers, the majority published between 1995 and January 2007, focusing on the experience and attitudes of students in the healthcare professions towards computers and e-learning. An integrative approach was adopted, with narrative description of findings. Criteria for inclusion were quantitative studies using survey tools with samples of healthcare students and concerning computer and information literacy skills, access to computers, experience with computers and use of computers and the Internet for education purposes. Since the 1980s a number of instruments have been developed, mostly in the United States of America, to measure attitudes to computers, anxiety about computer use, information and communication technology skills, satisfaction and more recently attitudes to the Internet and computers for education. The psychometric properties are poorly described. Advances in computers and technology mean that many earlier tools are no longer valid. Measures of the experience and attitudes of healthcare students to the increased use of e-learning require development in line with computer and technology advances.

  8. New method of processing heat treatment experiments with numerical simulation support

    NASA Astrophysics Data System (ADS)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A new method of processing heat treatment experiments is proposed that yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable with the cooling of larger parts. Results from this method of testing make the boundary conditions used for the real cooling process more accurate, and can also be used to improve software databases and to optimize computational models. The aim is to make the computation of temperature fields for large hardened parts more precise, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximal thickness of the processed part, and given cooling conditions. The paper also presents a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes influence mainly the distribution of temperature, metallurgical phases, hardness, and stresses. The experiment also provides not only input data and data enabling optimization of the computational model but, at the same time, verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.

  9. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster.

    PubMed

    Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2018-04-20

    A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.

  10. Study of Wind Effects on Unique Buildings

    NASA Astrophysics Data System (ADS)

    Olenkov, V.; Puzyrev, P.

    2017-11-01

    The article deals with a numerical simulation of wind effects on the building of the Church of the Intercession of the Holy Virgin in the village of Bulzi in the Chelyabinsk region. We present a calculation algorithm and obtain pressure fields, velocity fields, fields of the kinetic energy of the wind stream, and streamlines. Computational fluid dynamics (CFD) evolved three decades ago at the interface of computational mathematics and theoretical hydromechanics and has become a separate branch of science whose subject is the numerical simulation of different fluid and gas flows and the solution of the associated problems using computer systems. This scientific field, which is of great practical value, is developing intensively. The growth in CFD calculations is driven by improvements in computer technology and by the creation of multipurpose, easy-to-use CFD packages that are available to a wide group of researchers and can cope with various tasks. Such programs are not only competitive with physical experiments; sometimes they provide the only opportunity to answer the research questions. The following advantages of computer simulation can be pointed out: a) reduction in the time spent on design and development of a model in comparison with a real experiment (variation of boundary conditions); b) numerical experiments allow the simulation of conditions that are not reproducible in environmental tests (use of an ideal gas as the medium); c) computational gas dynamics methods provide a researcher with the complete and ample information necessary to fully describe the different processes of the experiment; d) the economic efficiency of computer calculations is more attractive than that of an experiment; e) the possibility of modifying the computational model, which ensures efficient timing (changing the sizes of wall-layer cells in accordance with the chosen turbulence model).

  11. Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation

    PubMed Central

    Dayan, Peter; Berridge, Kent C.

    2014-01-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659

  12. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    PubMed

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  13. Increasing the computational efficiency of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
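
    The loop-versus-vectorization contrast is easy to reproduce outside MATLAB as well. The sketch below shows a looped cross-correlation next to a single vectorized library call in NumPy; the data are illustrative, and the speedups quoted in the abstract (6.387 and 36.044 times) are the paper's MATLAB results, not figures produced by this snippet.

```python
# Looped versus vectorized cross-correlation, illustrated in NumPy rather than
# MATLAB.  Data and sizes are illustrative only.
import numpy as np

def xcorr_loop(a, b):
    """Cross-correlation of two equal-length signals via explicit loops."""
    n = len(a)
    out = np.zeros(2 * n - 1)
    for lag in range(-(n - 1), n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]
        out[lag + n - 1] = s
    return out

def xcorr_vectorized(a, b):
    """Same result computed with a single vectorized library call."""
    return np.correlate(b, a, mode="full")

rng = np.random.default_rng(0)
a, b = rng.standard_normal(500), rng.standard_normal(500)
assert np.allclose(xcorr_loop(a, b), xcorr_vectorized(a, b))
```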

  14. XDesign: an open-source software package for designing X-ray imaging phantoms and experiments.

    PubMed

    Ching, Daniel J; Gürsoy, Doğa

    2017-03-01

    The development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.

  15. Markov Jump-Linear Performance Models for Recoverable Flight Control Computers

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley's Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.

  16. XDesign: An open-source software package for designing X-ray imaging phantoms and experiments

    DOE PAGES

    Ching, Daniel J.; Gürsoy, Doğa

    2017-02-21

    Here, the development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.

  17. Meshfree and efficient modeling of swimming cells

    NASA Astrophysics Data System (ADS)

    Gallagher, Meurig T.; Smith, David J.

    2018-05-01

    Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.

  18. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computational load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising but challenging for three-dimensional displays. To address these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
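
    The core look-up-table idea behind methods like this one is that the zone-plate fringe of a reference point needs to be computed only once per depth and can then be re-used, shifted, for every object point at that depth. The sketch below illustrates that basic scheme with illustrative grid sizes, wavelength and object points; it does not reproduce the paper's distance-factor refinement or its GPU implementation.

```python
# Hedged sketch of the basic look-up-table idea for point-based CGH: pre-compute
# the zone-plate fringe of an on-axis reference point once per depth, then crop
# a shifted window of it for each object point.  Parameters are illustrative.
import numpy as np

N = 512                     # hologram is N x N pixels
pitch = 8e-6                # pixel pitch [m]
wavelength = 532e-9         # [m]

def principal_fringe(z):
    """Fresnel zone-plate pattern of an on-axis point at depth z, computed on a
    (2N x 2N) grid so that any lateral shift can be cut out of it."""
    coords = np.arange(-N, N) * pitch
    X, Y = np.meshgrid(coords, coords)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))

# Pre-compute the table once per depth plane.
depths = [0.10, 0.12]
lut = {z: principal_fringe(z) for z in depths}

def hologram(points):
    """Accumulate the hologram field for (ix, iy, z, amplitude) object points,
    where ix, iy are pixel indices of the point's lateral position."""
    field = np.zeros((N, N), dtype=complex)
    for ix, iy, z, amp in points:
        pfp = lut[z]
        # Crop the N x N window of the pre-computed pattern centred on (ix, iy),
        # i.e. shift the zone plate so its centre lands on the object point.
        field += amp * pfp[N - iy:2 * N - iy, N - ix:2 * N - ix]
    return field

pts = [(128, 256, 0.10, 1.0), (384, 200, 0.12, 0.5)]
H = hologram(pts)
print(H.shape)
```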

  19. Research approaches to mass casualty incidents response: development from routine perspectives to complexity science.

    PubMed

    Shen, Weifeng; Jiang, Libing; Zhang, Mao; Ma, Yuefeng; Jiang, Guanyu; He, Xiaojun

    2014-01-01

    To review the research methods for mass casualty incidents (MCIs) systematically and to introduce the concept and characteristics of complexity science and the artificial systems, computational experiments and parallel execution (ACP) method. We searched the PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Only articles concerning research methods for MCIs were included. Research methods for MCIs have multiplied markedly over the past few decades. At present, the dominant research methods are the theory-based approach, the empirical approach, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, the scenario approach and complexity science. This article provides an overview of the development of research methodology for MCIs and briefly presents the progress of both routine research approaches and complexity science. The authors conclude that the reductionism underlying the exact sciences is not suitable for complex MCI systems and that the only feasible alternative is complexity science. Finally, the ACP method, which combines artificial systems, computational experiments and parallel execution, is reviewed as a new way to approach research on complex MCIs.

  20. Case Study Discussion Experiences of Computer Education and Instructional Technologies Students about Instructional Design on an Asynchronous Environment

    ERIC Educational Resources Information Center

    Baran, Bahar; Keles, Esra

    2011-01-01

    The aim of this study is to reveal opinions and experiences of two Computer Education and Instructional Technologies Departments' students about case study discussion method after they discussed in online asynchronous environment about Instructional Design (ID). Totally, 80 second year students, 40 from Dokuz Eylul University and 40 from Karadeniz…

  1. Two inviscid computational simulations of separated flow about airfoils

    NASA Technical Reports Server (NTRS)

    Barnwell, R. W.

    1976-01-01

    Two inviscid computational simulations of separated flow about airfoils are described. The basic computational method is the line relaxation finite-difference method. Viscous separation is approximated with inviscid free-streamline separation. The point of separation is specified, and the pressure in the separation region is calculated. In the first simulation, the empiricism of constant pressure in the separation region is employed. This empiricism is easier to implement with the present method than with singularity methods. In the second simulation, acoustic theory is used to determine the pressure in the separation region. The results of both simulations are compared with experiment.

  2. The CSM testbed software system: A development environment for structural analysis methods on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Gillian, Ronnie E.; Lotts, Christine G.

    1988-01-01

    The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2, at the Ames Research Center, to provide a high end computational capability. This paper describes the implementation experiences, the resulting capability, and the future directions for the Testbed on supercomputers.

  3. Computer-Based Molecular Modelling: Finnish School Teachers' Experiences and Views

    ERIC Educational Resources Information Center

    Aksela, Maija; Lundell, Jan

    2008-01-01

    Modern computer-based molecular modelling opens up new possibilities for chemistry teaching at different levels. This article presents a case study seeking insight into Finnish school teachers' use of computer-based molecular modelling in teaching chemistry, into the different working and teaching methods used, and their opinions about necessary…

  4. Analyses of requirements for computer control and data processing experiment subsystems. Volume 1: ATM experiment S-056 image data processing system techniques development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The solar imaging X-ray telescope experiment (designated the S-056 experiment) is described. It will photograph the sun in the far ultraviolet or soft X-ray region. Because of the imaging characteristics of this telescope and the necessity of using special techniques for capturing images on film at these wavelengths, methods were developed for computer processing of the photographs. The problems of image restoration were addressed to develop and test digital computer techniques for applying a deconvolution process to restore overall S-056 image quality. Additional techniques for reducing or eliminating the effects of noise and nonlinearity in S-056 photographs were developed.

  5. Multiview Drawing Instruction: A Two-Location Experiment

    ERIC Educational Resources Information Center

    Connolly, Patrick; Holliday-Darr, Kathryn; Blasko, Dawn G.

    2006-01-01

    Several methods have been developed, presented, and discussed at recent ASEE and EDGD conferences on the topic of computer-based multiview drawing instruction. While small-scale and localized testing of these instruments and methods has been undertaken, no larger-scale or multi-location experiments have been attempted. This paper describes an…

  6. A general method for calculating three-dimensional compressible laminar and turbulent boundary layers on arbitrary wings

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Ramsey, J. A.

    1977-01-01

    The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary layer method is general, however, and can also be used for an external velocity distribution computed theoretically. Several test cases were computed by this method and the results were checked with other numerical calculations and with experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, which roughly consists of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.

  7. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. Massively Parallel Processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.

  8. Modified Method of Adaptive Artificial Viscosity for Solution of Gas Dynamics Problems on Parallel Computer Systems

    NASA Astrophysics Data System (ADS)

    Popov, Igor; Sukov, Sergey

    2018-02-01

    A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas dynamics problems on unstructured grids with an arbitrary type of grid elements. The proposed numerical method has simplified logic, better performance, and better parallel efficiency compared to the implementation of the original AAV method. Computer experiments demonstrate the robustness and convergence of the method to the difference solution.

  9. Computational methods for fracture analysis of heavy-section steel technology (HSST) pressure vessel experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, B.R.; Bryan, R.H.; Bryson, J.W.

    This paper summarizes the capabilities and applications of the general-purpose and special-purpose computer programs that have been developed for use in fracture mechanics analyses of HSST pressure vessel experiments. Emphasis is placed on the OCA/USA code, which is designed for analysis of pressurized-thermal-shock (PTS) conditions, and on the ORMGEN/ADINA/ORVIRT system which is used for more general analysis. Fundamental features of these programs are discussed, along with applications to pressure vessel experiments.

  10. DSMC Simulations of Hypersonic Flows and Comparison With Experiments

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.

    2004-01-01

    This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validating activities for a relatively new DSMC code called DS2V that Bird (second author) has developed. Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and to identify the range of measurement variability of the proposed validation data when benchmarked with respect to the current computations. For the test cases with significant vibrational nonequilibrium, the effect of the vibrational energy surface accommodation on heating and other quantities is demonstrated.

  11. Noise Computation of a Shock-Containing Supersonic Axisymmetric Jet by the CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Hultgren, Lennart S.; Chang, Sin-Chung; Jorgenson, Philip C. E.

    1999-01-01

    The space-time conservation element solution element (CE/SE) method is employed to numerically study the near-field of a typical under-expanded jet. For the computed case, a circular jet with Mach number M(j) = 1.19, the shock-cell structure is in good agreement with experimental results. The computed noise field is in general agreement with the experiment, although further work is needed to properly close the screech feedback loop.

  12. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1985-01-01

    Synopses are given for NASA supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.

  13. A comparison of methods for computing the sigma-coordinate pressure gradient force for flow over sloped terrain in a hybrid theta-sigma model

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Uccellini, L. W.

    1983-01-01

    In connection with the employment of the sigma coordinates introduced by Phillips (1957), problems can arise regarding an accurate finite-difference computation of the pressure gradient force. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves computing the difference between two large terms of opposite sign which results in large truncation error. To reduce the truncation error, several finite-difference methods have been designed and implemented. The present investigation has the objective to provide another method of computing the sigma-coordinate pressure gradient force. Phillips' method is applied for the elimination of a hydrostatic component to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.
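
    For orientation, the textbook form of the pressure gradient force in sigma = p/p_s coordinates (hydrostatic case) is shown below; the two right-hand terms can each be large and of opposite sign over steep terrain, which is the source of the truncation error discussed above. This is the standard form, not necessarily the exact flux formulation used in the paper.

```latex
% Pressure gradient force in sigma = p/p_s coordinates (standard hydrostatic form);
% over steep terrain the two terms are large, nearly cancelling quantities.
\mathrm{PGF} \;=\; -\,\nabla_{\sigma}\Phi \;-\; R\,T\,\nabla \ln p_s
```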

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Jeff Wu, C. F.

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is presented to estimate them using data from both physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
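
    A common way to state the underlying criterion (paraphrased here with assumed notation, not quoted from the paper) is that the calibration parameter is defined as the L2 projection of the true physical response onto the computer-model class, and an estimator is L2-consistent if it converges to that projection:

```latex
% Hedged paraphrase with assumed notation:
% zeta(.)  : true physical response;  f(., theta) : computer model output.
% "True" calibration parameter as the L2 projection:
\theta^{*} \;=\; \operatorname*{arg\,min}_{\theta \in \Theta}
  \bigl\lVert \zeta(\cdot) - f(\cdot,\theta) \bigr\rVert_{L_{2}(\Omega)}
% An estimator \hat{\theta}_n is L2-consistent if \hat{\theta}_n \to \theta^{*},
% so that f(., \hat{\theta}_n) attains the minimal L2 discrepancy asymptotically.
```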

  15. The use of conduction model in laser weld profile computation

    NASA Astrophysics Data System (ADS)

    Grabas, Bogusław

    2007-02-01

    Profiles of joints resulting from deep penetration laser beam welding of a flat workpiece of carbon steel were computed. A semi-analytical conduction model solved with the Green's function method was used in the computations. In the model, the moving heat source was attenuated exponentially in accordance with the Beer-Lambert law. The computational results were compared with those from the experiment.
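
    A conduction model of this type can be evaluated by superposing instantaneous point-source Green's functions over the beam path and over depth, with the depth distribution following the Beer-Lambert law. The sketch below is a simplified infinite-medium version with illustrative, roughly steel-like parameters; it shows the type of model named above, not the paper's semi-analytical formulation for a flat workpiece.

```python
# Hedged numerical sketch of a conduction model with a moving, Beer-Lambert
# attenuated heat source, evaluated by superposing instantaneous point-source
# Green's functions in an infinite medium.  Parameters are illustrative.
import numpy as np

# material (roughly steel-like, illustrative)
rho, c, k = 7800.0, 460.0, 30.0          # density, heat capacity, conductivity
alpha = k / (rho * c)                    # thermal diffusivity [m^2/s]

P = 2000.0                               # absorbed laser power [W]
v = 0.01                                 # welding speed [m/s]
mu = 5e3                                 # Beer-Lambert attenuation coeff. [1/m]

def delta_T(x, y, z, t, nt=400, nz=60, zmax=2e-3):
    """Temperature rise at (x, y, z) and time t by numerical superposition."""
    tp = np.linspace(1e-4, t - 1e-4, nt)          # emission times
    zp = np.linspace(0.0, zmax, nz)               # source depths
    dt_, dz_ = tp[1] - tp[0], zp[1] - zp[0]
    TP, ZP = np.meshgrid(tp, zp, indexing="ij")
    tau = t - TP                                   # elapsed time since emission
    q = P * mu * np.exp(-mu * ZP)                  # power density per unit depth
    r2 = (x - v * TP) ** 2 + y ** 2 + (z - ZP) ** 2
    g = np.exp(-r2 / (4.0 * alpha * tau)) / (rho * c * (4.0 * np.pi * alpha * tau) ** 1.5)
    return float(np.sum(q * g) * dt_ * dz_)

# temperature rise 2 mm behind the beam centre on the surface after 2 s
print(delta_T(x=v * 2.0 - 2e-3, y=0.0, z=0.0, t=2.0))
```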

  16. Differential equations as a tool for community identification.

    PubMed

    Krawczyk, Małgorzata J

    2008-06-01

    We consider the task of identifying a cluster structure in random networks. The results of two methods are presented: (i) the Newman algorithm [M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004)]; and (ii) our method based on differential equations. A series of computer experiments is performed to check whether, in applying these methods, we are able to determine the structure of the network. The trial networks initially consist of well-defined clusters and are disturbed by introducing noise into their connectivity matrices. Further, we show that an improvement over the previous version of our method is possible through an appropriate choice of the threshold parameter β. With this change, the results obtained by the two methods are similar, and our method works better in all the computer experiments we have performed.

  17. Comfort and experience with online learning: trends over nine years and associations with knowledge

    PubMed Central

    2014-01-01

    Background Some evidence suggests that attitude toward computer-based instruction is an important determinant of success in online learning. We sought to determine how comfort using computers and perceptions of prior online learning experiences have changed over the past decade, and how these associate with learning outcomes. Methods Each year from 2003–2011 we conducted a prospective trial of online learning. As part of each year’s study, we asked medicine residents about their comfort using computers and if their previous experiences with online learning were favorable. We assessed knowledge using a multiple-choice test. We used regression to analyze associations and changes over time. Results 371 internal medicine and family medicine residents participated. Neither comfort with computers nor perceptions of prior online learning experiences showed a significant change across years (p > 0.61), with mean comfort rating 3.96 (maximum 5 = very comfortable) and mean experience rating 4.42 (maximum 6 = strongly agree [favorable]). Comfort showed no significant association with knowledge scores (p = 0.39) but perceptions of prior experiences did, with a 1.56% rise in knowledge score for a 1-point rise in experience score (p = 0.02). Correlations among comfort, perceptions of prior experiences, and number of prior experiences were all small and not statistically significant. Conclusions Comfort with computers and perceptions of prior experience with online learning remained stable over nine years. Prior good experiences (but not comfort with computers) demonstrated a modest association with knowledge outcomes, suggesting that prior course satisfaction may influence subsequent learning. PMID:24985690

  18. A Comparison of Web-Based and Face-to-Face Functional Measurement Experiments

    ERIC Educational Resources Information Center

    Van Acker, Frederik; Theuns, Peter

    2010-01-01

    Information Integration Theory (IIT) is concerned with how people combine information into an overall judgment. A method is hereby presented to perform Functional Measurement (FM) experiments, the methodological counterpart of IIT, on the Web. In a comparison of Web-based FM experiments, face-to-face experiments, and computer-based experiments in…

  19. Application of CFD to a generic hypersonic flight research study

    NASA Technical Reports Server (NTRS)

    Green, Michael J.; Lawrence, Scott L.; Dilley, Arthur D.; Hawkins, Richard W.; Walker, Mary M.; Oberkampf, William L.

    1993-01-01

    Computational analyses have been performed for the initial assessment of flight research vehicle concepts that satisfy requirements for potential hypersonic experiments. Results were obtained from independent analyses at NASA Ames, NASA Langley, and Sandia National Labs, using sophisticated time-dependent Navier-Stokes and parabolized Navier-Stokes methods. Careful study of a common problem consisting of hypersonic flow past a slightly blunted conical forebody was undertaken to estimate the level of uncertainty in the computed results, and to assess the capabilities of current computational methods for predicting boundary-layer transition onset. Results of this study in terms of surface pressure and heat transfer comparisons, as well as comparisons of boundary-layer edge quantities and flow-field profiles are presented here. Sensitivities to grid and gas model are discussed. Finally, representative results are presented relating to the use of Computational Fluid Dynamics in the vehicle design and the integration/support of potential experiments.

  20. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is based on utilization of a central experiment computer with optional auxiliary equipment. Ground rules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The ground rules and assumptions are analyzed, and the options, along with their cost considerations, are discussed. It is concluded that the Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  1. Computer-Aided College Algebra: Learning Components that Students Find Beneficial

    ERIC Educational Resources Information Center

    Aichele, Douglas B.; Francisco, Cynthia; Utley, Juliana; Wescoatt, Benjamin

    2011-01-01

    A mixed-method study was conducted during the Fall 2008 semester to better understand the experiences of students participating in computer-aided instruction of College Algebra using the software MyMathLab. The learning environment included a computer learning system for the majority of the instruction, a support system via focus groups (weekly…

  2. A Membrane Gas Separation Experiment for the Undergraduate Laboratory.

    ERIC Educational Resources Information Center

    Davis, Richard A.; Sandall, Orville C.

    1991-01-01

    Described is a membrane experiment that provides students with experience in fundamental engineering skills such as mass balances, modeling, and using the computer as a research tool. Included are the experimental design, theory, method of solution, sample calculations, and conclusions. (KR)

  3. The Computer as a Tool for Learning

    PubMed Central

    Starkweather, John A.

    1986-01-01

    Experimenters from the beginning recognized the advantages computers might offer in medical education. Several medical schools have gained experience in such programs in automated instruction. Television images and graphic display combined with computer control and user interaction are effective for teaching problem solving. The National Board of Medical Examiners has developed patient-case simulation for examining clinical skills, and the National Library of Medicine has experimented with combining media. Advances from the field of artificial intelligence and the availability of increasingly powerful microcomputers at lower cost will aid further development. Computers will likely affect existing educational methods, adding new capabilities to laboratory exercises, to self-assessment and to continuing education. PMID:3544511

  4. Physician Utilization of a Hospital Information System: A Computer Simulation Model

    PubMed Central

    Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.

    1988-01-01

    The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.

  5. The Computer Bulletin Board.

    ERIC Educational Resources Information Center

    Collins, Michael J.; Vitz, Ed

    1988-01-01

    Examines two computer interfaced lab experiments: 1) discusses the automation of a Perkin Elmer 337 infrared spectrophotometer noting the mechanical and electronic changes needed; 2) uses the Gouy method and Lotus Measure software to automate magnetic susceptibility determinations. Methodology is described. (MVL)

  6. Development of user guidelines for ECAS display design, volume 1

    NASA Technical Reports Server (NTRS)

    Dodson, D. W.; Shields, N. L., Jr.

    1978-01-01

    Experiment computer application software (ECAS) display design and command usage guidelines were developed, which if followed by spacelab experiments, would standardize methods and techniques for data presentation and commanding via ECAS. These guidelines would provide some commonality among experiments which would enhance crew training and flight operations. The guidelines are applicable to all onboard experiment displays, whether allocated by ECAS or a dedicated experiment processor. A brief description of the spacelab data display system characteristics and of the services provided by the experiment computer operating system is included. Guidelines concerning data presentation and layout of alphanumeric and graphic information are presented along with guidelines concerning keyboard commanding and command feedback.

  7. Do Computers Improve the Drawing of a Geometrical Figure for 10 Year-Old Children?

    ERIC Educational Resources Information Center

    Martin, Perrine; Velay, Jean-Luc

    2012-01-01

    Nowadays, computer aided design (CAD) is widely used by designers. Would children learn to draw more easily and more efficiently if they were taught with computerised tools? To answer this question, we made an experiment designed to compare two methods for children to do the same drawing: the classical "pen and paper" method and a CAD…

  8. Measure the Earth's Radius and the Speed of Light with Simple and Inexpensive Computer-Based Experiments

    ERIC Educational Resources Information Center

    Martin, Michael J.

    2004-01-01

    With new and inexpensive computer-based methods, measuring the speed of light and the Earth's radius--historically difficult endeavors--can be simple enough to be tackled by high school and college students working in labs that have limited budgets. In this article, the author describes two methods of estimating the Earth's radius using two…

  9. Computer methods for sampling from the gamma distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, M.E.; Tadikamalla, P.R.

    1978-01-01

    Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
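
    As an illustration of the accept-reject flavour shared by such generators, the sketch below implements a later but representative rejection sampler (the Marsaglia-Tsang squeeze method) in Python; it is not one of the specific 1978 algorithms surveyed in the record above, and the boost for shape parameters below one is a standard add-on.

        import math
        import random

        def gamma_variate(alpha):
            """Gamma(alpha, 1) variate via the Marsaglia-Tsang squeeze/rejection method.
            Illustrative only; not one of the algorithms compared in the survey."""
            if alpha < 1.0:
                # standard boost: Gamma(a) = Gamma(a + 1) * U^(1/a)
                return gamma_variate(alpha + 1.0) * random.random() ** (1.0 / alpha)
            d = alpha - 1.0 / 3.0
            c = 1.0 / math.sqrt(9.0 * d)
            while True:
                x = random.gauss(0.0, 1.0)
                v = (1.0 + c * x) ** 3
                if v <= 0.0:
                    continue                      # outside the support, retry
                u = random.random()
                if u < 1.0 - 0.0331 * x ** 4:     # cheap squeeze acceptance test
                    return d * v
                if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
                    return d * v                  # full acceptance test

        sample = [gamma_variate(2.5) for _ in range(10000)]
        print(sum(sample) / len(sample))          # should be near alpha = 2.5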

  10. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience transonic buffet associated structural oscillations. Both aircraft may experience a dip in the flutter speed at the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high fidelity equations such as the Navier-Stokes for fluids and the finite elements for structures are needed. Computations using these high fidelity equations require large computational resources both in memory and speed. Current conventional supercomputers have reached their limitations both in memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on iPSC/860 and IBM SP2 computers by using the ENSAERO code that directly couples the Euler/Navier-Stokes flow equations with high resolution finite-element structural equations.

  11. Three-dimensional Diffusive Strip Method

    NASA Astrophysics Data System (ADS)

    Martinez-Ruiz, Daniel; Meunier, Patrice; Duchemin, Laurent; Villermaux, Emmanuel

    2016-11-01

    The Diffusive Strip Method (DSM) is a near-exact numerical method developed for mixing computations at large Péclet number in two dimensions. The method, which consists in following stretched material lines to compute the resulting scalar field a posteriori, is extended here to three-dimensional flows by following surfaces. We describe its 3D peculiarities, and show how it applies to a simple Taylor-Couette configuration with non-rotating boundary conditions at the top end, bottom and outer cylinder. This configuration produces an elaborate, although controlled, steady 3D flow which relies on the Ekman pumping arising from the rotation of the inner cylinder; it is both studied experimentally and modeled numerically. A recurrent two-cell structure appears, formed by stream tubes shaped as nested tori. A scalar blob in the flow experiences a Lagrangian oscillating dynamics with stretchings and compressions, driving the mixing process, and yielding both rapidly-mixed and nearly pure-diffusive regions. A triangulated-surface method is developed to calculate the blob elongation and scalar concentration PDFs through a single variable computation along the advected blob surface, capturing the rich evolution observed in the experiments.

  12. Galaxy morphology - An unsupervised machine learning approach

    NASA Astrophysics Data System (ADS)

    Schutter, A.; Shamir, L.

    2015-09-01

    Structural properties provide valuable information about the formation and evolution of galaxies, and are important for understanding the past, present, and future universe. Here we use unsupervised machine learning methodology to analyze a network of similarities between galaxy morphological types, and automatically deduce a morphological sequence of galaxies. Application of the method to the EFIGI catalog shows that the morphological scheme produced by the algorithm is largely in agreement with the De Vaucouleurs system, demonstrating the ability of computer vision and machine learning methods to automatically profile galaxy morphological sequences. The unsupervised analysis method is based on comprehensive computer vision techniques that compute the visual similarities between the different morphological types. Rather than relying on human cognition, the proposed system deduces the similarities between sets of galaxy images in an automatic manner, and is therefore not limited by the number of galaxies being analyzed. The source code of the method is publicly available, and the protocol of the experiment is included in the paper so that the experiment can be replicated, and the method can be used to analyze user-defined datasets of galaxy images.
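
    The abstract does not give the algorithmic details, but the general idea of deducing a sequence from pairwise visual similarities can be sketched as follows; the feature vectors, distance measure, and tree traversal here are illustrative assumptions, not the paper's actual pipeline.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        def morphology_sequence(type_features, type_names):
            """Order morphological types along a minimum spanning tree of their
            pairwise feature distances (an illustrative stand-in for the paper's
            similarity-network analysis)."""
            dist = squareform(pdist(type_features))          # pairwise distances
            mst = minimum_spanning_tree(dist).toarray()
            mst = mst + mst.T                                # symmetrize for traversal
            start = int(np.argmin((mst > 0).sum(axis=1)))    # begin at a low-degree node
            sequence, stack, seen = [], [start], set()
            while stack:                                     # depth-first walk of the tree
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                sequence.append(type_names[node])
                stack.extend(int(n) for n in np.nonzero(mst[node])[0] if n not in seen)
            return sequence

        # toy usage with made-up 3-dimensional "visual descriptors" per type
        names = ["E", "S0", "Sa", "Sb", "Sc"]
        features = np.array([[0.0, 1.0, 0.1], [0.2, 0.9, 0.2], [0.5, 0.6, 0.5],
                             [0.7, 0.4, 0.7], [0.9, 0.2, 0.9]])
        print(morphology_sequence(features, names))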

  13. Understanding the effects of diffusion and relaxation in magnetic resonance imaging using computational modeling

    NASA Astrophysics Data System (ADS)

    Russell, Greg

    The work described in this dissertation was motivated by a desire to better understand the cellular pathology of ischemic stroke. Two of the three bodies of research presented herein address an issue directly related to the investigation of ischemic stroke through the use of diffusion weighted magnetic resonance imaging (DWMRI) methods. The first topic concerns the development of a computationally efficient finite difference method, designed to evaluate the impact of microscopic tissue properties on the formation of DWMRI signal. For the second body of work, the effect of changing the intrinsic diffusion coefficient of a restricted sample on clinical DWMRI experiments is explored. The final body of work, while motivated by the desire to understand stroke, addresses the issue of acquiring large amounts of MRI data well suited for quantitative analysis in reduced scan time. In theory, the method could be used to generate quantitative parametric maps, including those depicting information gleaned through the use of DWMRI methods. Chapter 1 provides an introduction to several topics. A description of the use of DWMRI methods in the study of ischemic stroke is covered. An introduction to the fundamental physical principles at work in MRI is also provided. In this section the means by which magnetization is created in MRI experiments, how MRI signal is induced, as well as the influence of spin-spin and spin-lattice relaxation are discussed. Attention is also given to describing how MRI measurements can be sensitized to diffusion through the use of qualitative and quantitative descriptions of the process. Finally, the reader is given a brief introduction to the use of numerical methods for solving partial differential equations. In Chapters 2, 3 and 4, three related bodies of research are presented in terms of research papers. In Chapter 2, a novel computational method is described. The method reduces the computational resources required to simulate DWMRI experiments. In Chapter 3, a detailed study on how changes in the intrinsic intracellular diffusion coefficient may influence clinical DWMRI experiments is described. In Chapter 4, a novel, non-steady state quantitative MRI method is described.

  14. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic-looking objects without any noteworthy increase in computational cost.
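
    A minimal sketch of the Phong shading step described above, assuming each cloud point carries a surface normal: the per-point intensity computed here would then modulate the point's amplitude before the hologram is generated. Parameter names and coefficient values are illustrative, not taken from the paper.

        import numpy as np

        def phong_point_intensity(points, normals, light_pos, view_pos,
                                  ka=0.1, kd=0.7, ks=0.2, shininess=16.0):
            """Per-point Phong intensity (ambient + diffuse + specular) for a point cloud."""
            n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
            l = light_pos - points
            l /= np.linalg.norm(l, axis=1, keepdims=True)
            v = view_pos - points
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            ndotl = np.sum(n * l, axis=1)
            diffuse = np.clip(ndotl, 0.0, None)
            r = 2.0 * ndotl[:, None] * n - l                  # reflection of l about n
            specular = np.clip(np.sum(r * v, axis=1), 0.0, None) ** shininess
            specular = np.where(ndotl > 0.0, specular, 0.0)   # no specular on back faces
            return ka + kd * diffuse + ks * specular

        pts = np.array([[0.0, 0.0, 0.0], [1e-3, 0.0, 0.0]])
        nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
        print(phong_point_intensity(pts, nrm, np.array([0.0, 0.0, 0.1]),
                                    np.array([0.0, 0.0, 0.2])))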

  15. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphic-processing-unit parallel frame named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.

  16. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.

  17. Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases

    NASA Technical Reports Server (NTRS)

    Woodruff, Stephen

    2016-01-01

    NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.

  18. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial.

    PubMed

    Kron, Frederick W; Fetters, Michael D; Scerbo, Mark W; White, Casey B; Lypson, Monica L; Padilla, Miguel A; Gliva-McConvey, Gayle A; Belfore, Lee A; West, Temple; Wallace, Amelia M; Guetterman, Timothy C; Schleicher, Lauren S; Kennedy, Rebecca A; Mangrulkar, Rajesh S; Cleary, James F; Marsella, Stacy C; Becker, Daniel M

    2017-04-01

    To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group's experiences and learning preferences. A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR's intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. MPathic-VR's virtual human simulation offers an effective and engaging means of advanced communication training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

    Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a used in BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
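
    For context, the two-class CSP baseline that ACPC generalizes can be written as a generalized eigenvalue problem on class-averaged covariance matrices. The sketch below shows that baseline only; the ACPC subspace construction itself is not reproduced, and the data layout is an assumption.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=3):
            """Two-class CSP: each trial is a (channels x samples) array.
            Returns spatial filters (columns) maximizing the variance ratio between classes."""
            def mean_cov(trials):
                covs = []
                for x in trials:
                    x = x - x.mean(axis=1, keepdims=True)
                    c = x @ x.T
                    covs.append(c / np.trace(c))              # per-trial normalization
                return np.mean(covs, axis=0)
            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(ca, ca + cb)                    # Ca w = lambda (Ca + Cb) w
            order = np.argsort(vals)
            keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
            return vecs[:, keep]

        def log_variance_features(trial, filters):
            """Standard log-variance features fed to the downstream classifier (e.g. an SVM)."""
            z = filters.T @ (trial - trial.mean(axis=1, keepdims=True))
            var = z.var(axis=1)
            return np.log(var / var.sum())

        # toy usage with random data standing in for band-pass filtered EEG trials
        rng = np.random.default_rng(0)
        trials_a = [rng.standard_normal((22, 500)) for _ in range(20)]
        trials_b = [rng.standard_normal((22, 500)) for _ in range(20)]
        W = csp_filters(trials_a, trials_b)
        print(log_variance_features(trials_a[0], W))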

  20. Evaluation Methods for Assessing Users’ Psychological Experiences of Web-Based Psychosocial Interventions: A Systematic Review

    PubMed Central

    Howson, Moira; Ritchie, Linda; Carter, Philip D; Parry, David Tudor; Koziol-McLain, Jane

    2016-01-01

    Background The use of Web-based interventions to deliver mental health and behavior change programs is increasingly popular. They are cost-effective, accessible, and generally effective. Often these interventions concern psychologically sensitive and challenging issues, such as depression or anxiety. The process by which a person receives and experiences therapy is important to understanding therapeutic process and outcomes. While the experience of the patient or client in traditional face-to-face therapy has been evaluated in a number of ways, there appeared to be a gap in the evaluation of patient experiences of therapeutic interventions delivered online. Evaluation of Web-based artifacts has focused either on evaluation of experience from a computer Web-design perspective through usability testing or on evaluation of treatment effectiveness. Neither of these methods focuses on the psychological experience of the person while engaged in the therapeutic process. Objective This study aimed to investigate what methods, if any, have been used to evaluate the in situ psychological experience of users of Web-based self-help psychosocial interventions. Methods A systematic literature review was undertaken of interdisciplinary databases with a focus on health and computer sciences. Studies that met a predetermined search protocol were included. Results Among 21 studies identified that examined psychological experience of the user, only 1 study collected user experience in situ. The most common method of understanding users’ experience was through semistructured interviews conducted posttreatment or questionnaires administrated at the end of an intervention session. The questionnaires were usually based on standardized tools used to assess user experience with traditional face-to-face treatment. Conclusions There is a lack of methods specified in the literature to evaluate the interface between Web-based mental health or behavior change artifacts and users. Main limitations in the research were the nascency of the topic and cross-disciplinary nature of the field. There is a need to develop and deliver methods of understanding users’ psychological experiences while using an intervention. PMID:27363519

  1. Comparison of computer-assisted instruction (CAI) versus traditional textbook methods for training in abdominal examination (Japanese experience).

    PubMed

    Qayumi, A K; Kurihara, Y; Imai, M; Pachev, G; Seo, H; Hoshino, Y; Cheifetz, R; Matsuura, K; Momoi, M; Saleem, M; Lara-Guerra, H; Miki, Y; Kariya, Y

    2004-10-01

    This study aimed to compare the effects of computer-assisted, text-based and computer-and-text learning conditions on the performances of 3 groups of medical students in the pre-clinical years of their programme, taking into account their academic achievement to date. A fourth group of students served as a control (no-study) group. Participants were recruited from the pre-clinical years of the training programmes in 2 medical schools in Japan, Jichi Medical School near Tokyo and Kochi Medical School near Osaka. Participants were randomly assigned to 4 learning conditions and tested before and after the study on their knowledge of and skill in performing an abdominal examination, in a multiple-choice test and an objective structured clinical examination (OSCE), respectively. Information about performance in the programme was collected from school records and students were classified as average, good or excellent. Student and faculty evaluations of their experience in the study were explored by means of a short evaluation survey. Compared to the control group, all 3 study groups exhibited significant gains in performance on knowledge and performance measures. For the knowledge measure, the gains of the computer-assisted and computer-assisted plus text-based learning groups were significantly greater than the gains of the text-based learning group. The performances of the 3 groups did not differ on the OSCE measure. Analyses of gains by performance level revealed that high achieving students' learning was independent of study method. Lower achieving students performed better after using computer-based learning methods. The results suggest that computer-assisted learning methods will be of greater help to students who do not find the traditional methods effective. Explorations of the factors behind this are a matter for future research.

  2. Examining Functions in Mathematics and Science Using Computer Interfacing.

    ERIC Educational Resources Information Center

    Walton, Karen Doyle

    1988-01-01

    Introduces microcomputer interfacing as a method for explaining and demonstrating various aspects of the concept of function. Provides three experiments with illustrations and typical computer graphic displays: pendulum motion, pendulum study using two pendulums, and heat absorption and radiation. (YP)

  3. Computer-aided detection of initial polyp candidates with level set-based adaptive convolution

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong

    2009-02-01

    In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods were used to compute the first and second order spatial derivatives of computed tomographic colonography images, which is the beginning of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, which is called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect the initial polyp candidates, and experiments showed that it benefits the CAD scheme in both the detection sensitivity and specificity as compared to our previous work.

  4. Remote control system for high-perfomance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems and other often expensive and complex computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings of different computing clusters sometimes does not allow researchers to use unified program code; the code must be adapted to each configuration of the computing complex. The practical experience of the authors has shown that the creation of a special control system for computations, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the performance of scientific research. In the current paper we present the principal idea of such a system and justify its efficiency.

  5. Computer vision uncovers predictors of physical urban change.

    PubMed

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  6. Computer vision uncovers predictors of physical urban change

    PubMed Central

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L.; Hidalgo, César A.

    2017-01-01

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements—an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements—an observation that is consistent with “tipping” theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods—an observation that is consistent with the “invasion” theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities. PMID:28684401

  7. Progress and challenges in bioinformatics approaches for enhancer identification

    PubMed Central

    Kleftogiannis, Dimitrios; Kalnis, Panos

    2016-01-01

    Enhancers are cis-acting DNA elements that play critical roles in distal regulation of gene expression. Identifying enhancers is an important step for understanding distinct gene expression programs that may reflect normal and pathogenic cellular conditions. Experimental identification of enhancers is constrained by the set of conditions used in the experiment. This requires multiple experiments to identify enhancers, as they can be active under specific cellular conditions but not in different cell types/tissues or cellular states. This has opened prospects for computational prediction methods that can be used for high-throughput identification of putative enhancers to complement experimental approaches. Potential functions and properties of predicted enhancers have been catalogued and summarized in several enhancer-oriented databases. Because the current methods for the computational prediction of enhancers produce significantly different enhancer predictions, it will be beneficial for the research community to have an overview of the strategies and solutions developed in this field. In this review, we focus on the identification and analysis of enhancers by bioinformatics approaches. First, we describe a general framework for computational identification of enhancers, present relevant data types and discuss possible computational solutions. Next, we cover over 30 existing computational enhancer identification methods that were developed since 2000. Our review highlights advantages, limitations and potentials, while suggesting pragmatic guidelines for development of more efficient computational enhancer prediction methods. Finally, we discuss challenges and open problems of this topic, which require further consideration. PMID:26634919

  8. Approximate methods in gamma-ray skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faw, R.E.; Roseberry, M.L.; Shultis, J.K.

    1985-11-01

    Gamma-ray skyshine, an important component of the radiation field in the environment of a nuclear power plant, has recently been studied in relation to storage of spent fuel and nuclear waste. This paper reviews benchmark skyshine experiments and transport calculations against which computational procedures may be tested. The paper also addresses the applicability of simplified computational methods involving single-scattering approximations. One such method, suitable for microcomputer implementation, is described and results are compared with other work.

  9. Relating Engineering Technology Students' Experiences in Electromagnetics with Performance in Communications Coursework: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Richards, Grant P.

    2009-01-01

    This study presents the results of a multi-year mixed-methods study of students' performance (n = 94) and experiences (n = 28) with electromagnetics in an elective Electrical and Computer Engineering Technology RF communications course. Data sources used in this study include academic transcripts, course exams, interviews, a learning styles…

  10. Initial Experiences with Machine-Assisted Reconsiderative Test Scoring: A New Method for Partial Credit and Multiple Correct Responses.

    ERIC Educational Resources Information Center

    Anderson, Paul S.

    Initial experiences with computer-assisted reconsiderative scoring are described. Reconsiderative scoring occurs when student responses are received and reviewed by the teacher before points for correctness are assigned. Manually scored completion-style questions are reconsiderative. A new method of machine assistance produces an item analysis on…

  11. RNA Secondary Structure Prediction by Using Discrete Mathematics: An Interdisciplinary Research Experience for Undergraduate Students

    ERIC Educational Resources Information Center

    Ellington, Roni; Wachira, James; Nkwanta, Asamoah

    2010-01-01

    The focus of this Research Experience for Undergraduates (REU) project was on RNA secondary structure prediction by using a lattice walk approach. The lattice walk approach is a combinatorial and computational biology method used to enumerate possible secondary structures and predict RNA secondary structure from RNA sequences. The method uses…

  12. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
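
    A minimal sketch of the layer-wise diffraction step described above: each depth grid is propagated to the hologram plane with an FFT-based angular spectrum kernel and the contributions are summed. The propagation kernel, the random initial phase, and the pixel pitch and wavelength values are illustrative assumptions; the paper's gridding and diffraction formulation may differ in detail.

        import numpy as np

        def angular_spectrum_propagate(field, wavelength, dx, z):
            """Propagate a complex field by distance z using the angular spectrum method."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)       # evanescent components removed
            return np.fft.ifft2(np.fft.fft2(field) * H)

        def layered_cgh(layers, depths, wavelength=532e-9, dx=8e-6):
            """Sum the propagated fields of depth-gridded layers at the hologram plane.
            `layers` is a list of 2D amplitude grids, one per distance in `depths`."""
            holo = np.zeros_like(layers[0], dtype=complex)
            for amp, z in zip(layers, depths):
                # random initial phase per layer is a common choice for diffuse objects
                field = amp * np.exp(1j * 2 * np.pi * np.random.rand(*amp.shape))
                holo += angular_spectrum_propagate(field, wavelength, dx, z)
            return holo

        layers = [np.random.rand(256, 256) for _ in range(3)]
        holo = layered_cgh(layers, depths=[0.10, 0.11, 0.12])
        print(holo.shape, holo.dtype)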

  13. A method of neighbor classes based SVM classification for optical printed Chinese character recognition.

    PubMed

    Zhang, Jie; Wu, Xiaohong; Yu, Yanmei; Luo, Daisheng

    2013-01-01

    In optical printed Chinese character recognition (OPCCR), many classifiers have been proposed for recognition. Among these classifiers, the support vector machine (SVM) might be the best. However, SVM is a classifier for two classes; when it is used for the many classes of OPCCR, its computation is time-consuming. Thus, we propose a neighbor classes based SVM (NC-SVM) to reduce the computational cost of SVM. Experiments on NC-SVM classification for OPCCR have been done. The results of these experiments show that the proposed NC-SVM can effectively reduce the computation time in OPCCR.

  14. Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector

    NASA Astrophysics Data System (ADS)

    Wang, Hongbin; Feng, Yinhan; Cheng, Liang

    2018-03-01

    Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition and question answering systems. Thai is a resource-scarce language; unlike Chinese, it has no resources comparable to HowNet or CiLin, so research on Thai sentence similarity faces particular challenges. To address this problem, this paper proposes a novel method to compute the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependencies to calculate the syntactic structure similarity of two sentences, and then uses word vectors to calculate their semantic similarity. Finally, the two scores are combined to compute the overall similarity of the two Thai sentences. The proposed method considers not only semantics but also sentence syntactic structure. The experimental results show that this method is feasible for Thai sentence similarity computation.
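
    The abstract gives only the outline of the scoring, so the following sketch fills in one plausible realization: a Jaccard overlap of POS-dependency pairs for the syntactic part, cosine similarity of averaged word vectors for the semantic part, and a weighted sum to combine them. The weighting and the exact structural score are assumptions, not the paper's formulas.

        import numpy as np

        def semantic_similarity(sent_a, sent_b, word_vectors):
            """Cosine similarity between averaged word vectors (out-of-vocabulary words skipped)."""
            def embed(tokens):
                vecs = [word_vectors[t] for t in tokens if t in word_vectors]
                return np.mean(vecs, axis=0) if vecs else None
            va, vb = embed(sent_a), embed(sent_b)
            if va is None or vb is None:
                return 0.0
            return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

        def syntactic_similarity(pos_a, pos_b):
            """Jaccard overlap of POS-dependency pairs, a stand-in for the paper's structural score."""
            a, b = set(pos_a), set(pos_b)
            return len(a & b) / len(a | b) if a | b else 0.0

        def sentence_similarity(sent_a, sent_b, pos_a, pos_b, word_vectors, alpha=0.5):
            """Weighted combination of syntactic and semantic similarity (alpha is a tunable weight)."""
            return (alpha * syntactic_similarity(pos_a, pos_b)
                    + (1.0 - alpha) * semantic_similarity(sent_a, sent_b, word_vectors))

        # toy usage with two-dimensional placeholder vectors for Thai tokens
        vectors = {"แมว": np.array([1.0, 0.0]), "วิ่ง": np.array([0.0, 1.0]),
                   "สุนัข": np.array([0.9, 0.1])}
        print(sentence_similarity(["แมว", "วิ่ง"], ["สุนัข", "วิ่ง"],
                                  [("N", "V")], [("N", "V")], vectors))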

  15. An Overview of Recent Developments in Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Edwards, John W.

    2004-01-01

    The motivation for Computational Aeroelasticity (CA) and the elements of one type of the analysis or simulation process are briefly reviewed. The need for streamlining and improving the overall process to reduce elapsed time and improve overall accuracy is discussed. Further effort is needed to establish the credibility of the methodology, to obtain experience, and to incorporate the experience base to simplify the method for future use. Experience with the application of a variety of Computational Aeroelasticity programs is summarized for the transonic flutter of two wings, the AGARD 445.6 wing and a typical business jet wing. There is a compelling need for a broad range of additional flutter test cases for further comparisons. Some existing data sets that may offer CA challenges are presented.

  16. Educating Laboratory Science Learners at a Distance Using Interactive Television

    ERIC Educational Resources Information Center

    Reddy, Christopher

    2014-01-01

    Laboratory science classes offered to students learning at a distance require a methodology that allows for the completion of tactile activities. Literature describes three different methods of solving the distance laboratory dilemma: kit-based laboratory experience, computer-based laboratory experience, and campus-based laboratory experience,…

  17. Computational predictions of stereochemistry in asymmetric thiazolium- and triazolium-catalyzed benzoin condensations.

    PubMed

    Dudding, Travis; Houk, Kendall N

    2004-04-20

    The catalytic asymmetric thiazolium- and triazolium-catalyzed benzoin condensations of aldehydes and ketones were studied with computational methods. Transition-state geometries were optimized by using Morokuma's IMOMO [integrated MO (molecular orbital) + MO method] variation of ONIOM (n-layered integrated molecular orbital method) with a combination of B3LYP/6-31G(d) and AM1 levels of theory, and final transition-state energies were computed with single-point B3LYP/6-31G(d) calculations. Correlations between experiment and theory were found, and the origins of stereoselection were identified. Thiazolium catalysts were predicted to be less selective than triazolium catalysts, a trend also found experimentally.

  18. Integrating electronic conferencing to enhance problem solving in nursing.

    PubMed

    Witucki, J M; Hodson, K E; Malm, L D

    1996-01-01

    The authors describe how a computer-mediated conference was integrated into a baccalaureate nursing program clinical course. They discuss methods used in implementing the conference, including a technical review of the software and hardware, and methods of implementing and monitoring the conference with students. Examples of discussion items, student and faculty responses to posted items, and responses to use of the computer-mediated conference are included. Results and recommendations from this experience will be useful to other schools integrating computer-mediated conference technology into the nursing school curriculum.

  19. QSAR Methods.

    PubMed

    Gini, Giuseppina

    2016-01-01

    In this chapter, we introduce the basics of computational chemistry and discuss how computational methods have been extended to some biological properties and toxicology, in particular. For about 20 years, chemical experimentation has increasingly been replaced by modeling and virtual experimentation, using a large core of mathematics, chemistry, physics, and algorithms. We then see how animal experiments, aimed at providing a standardized result about a biological property, can be mimicked by new in silico methods. Our emphasis here is on toxicology and on predicting properties through chemical structures. Two main streams of such models are available: models that consider the whole molecular structure to predict a value, namely QSAR (Quantitative Structure Activity Relationships), and models that find relevant substructures to predict a class, namely SAR. The term in silico discovery is applied to chemical design, to computational toxicology, and to drug discovery. We discuss how the experimental practice in biological science is moving more and more toward modeling and simulation. Such virtual experiments confirm hypotheses, provide data for regulation, and help in designing new chemicals.

  20. Usage of CT data in biomechanical research

    NASA Astrophysics Data System (ADS)

    Safonov, Roman A.; Golyadkina, Anastasiya A.; Kirillova, Irina V.; Kossovich, Leonid Y.

    2017-02-01

    Object of study: The investigation is focused on the development of personalized medicine, in particular the determination of mechanical properties of bone tissues based on in vivo data. Methods: CT, MRI, natural experiments on the versatile test machine Instron 5944, and numerical experiments using Python programs. Results: Medical diagnostic methods that allow determination of mechanical properties of bone tissues based on in vivo data, together with a series of experiments to define the values of mechanical parameters of bone tissues. For one and the same sample, computed tomography (CT), magnetic resonance imaging (MRI), ultrasonic investigations and mechanical experiments on the single-column test machine Instron 5944 were carried out. A computer program for comparison of CT and MRI images was created. The grayscale values at the same points of the samples were determined on both CT and MRI images. The Hounsfield grayscale values were used to determine the rigidity (Young's modulus) and tensile strength of the samples. The obtained data were compared to the natural experiment results for verification.

  1. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
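
    For orientation, the sketch below shows the unaccelerated power iteration for the k-eigenvalue problem in a toy matrix form, L phi = (1/k) F phi; this is the slowly converging baseline that the JFNK/NKA and moment-based (HOLO) schemes discussed above are designed to accelerate. The dense linear solve stands in for a transport sweep, and the matrices are made-up placeholders.

        import numpy as np

        def power_iteration_k(L, F, tol=1e-8, max_iter=1000):
            """Unaccelerated power iteration for L phi = (1/k) F phi with matrices L, F."""
            phi = np.ones(L.shape[0])
            k = 1.0
            for _ in range(max_iter):
                source = F @ phi
                phi_new = np.linalg.solve(L, source / k)      # stand-in for a transport sweep
                k_new = k * np.sum(F @ phi_new) / np.sum(source)
                if abs(k_new - k) < tol * abs(k_new):
                    break
                phi, k = phi_new, k_new
            return k_new, phi_new / np.linalg.norm(phi_new)

        # toy two-group-like problem with made-up loss and fission matrices
        L = np.array([[1.0, -0.2], [-0.3, 1.5]])
        F = np.array([[0.6, 0.4], [0.1, 0.2]])
        print(power_iteration_k(L, F)[0])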

  2. Computational Study of Environmental Effects on Torsional Free Energy Surface of N-Acetyl-N'-methyl-L-alanylamide Dipeptide

    ERIC Educational Resources Information Center

    Carlotto, Silvia; Zerbetto, Mirco

    2014-01-01

    We propose an articulated computational experiment in which both quantum mechanics (QM) and molecular mechanics (MM) methods are employed to investigate environment effects on the free energy surface for the backbone dihedral angles rotation of the small dipeptide N-Acetyl-N'-methyl-L-alanylamide. This computation exercise is appropriate for an…

  3. Forging Paths through Hostile Territory: Intersections of Women's Identities Pursuing Post-Secondary Computing Education

    ERIC Educational Resources Information Center

    Ratnabalasuriar, Sheruni

    2012-01-01

    This study explores experiences of women as they pursue post-secondary computing education in various contexts. Using in-depth interviews, the current study employs qualitative methods and draws from an intersectional approach to focus on how the various barriers emerge for women in different types of computing cultures. In-depth interviews with…

  4. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.

  5. Prevalence and Correlates of Problematic Internet Experiences and Computer-Using Time: A Two-Year Longitudinal Study in Korean School Children

    PubMed Central

    Stewart, Robert; Lee, Ju-Yeon; Kim, Jae-Min; Kim, Sung-Wan; Shin, Il-Seon; Yoon, Jin-Sang

    2014-01-01

    Objective To measure the prevalence of and factors associated with online inappropriate sexual exposure, cyber-bullying victimisation, and computer-using time in early adolescence. Methods A two-year, prospective school survey was performed with 1,173 children aged 13 at baseline. Data collected included demographic factors, bullying experience, depression, anxiety, coping strategies, self-esteem, psychopathology, attention-deficit hyperactivity disorder symptoms, and school performance. These factors were investigated in relation to problematic Internet experiences and computer-using time at age 15. Results The prevalence of online inappropriate sexual exposure, cyber-bullying victimisation, academic-purpose computer overuse, and game-purpose computer overuse was 31.6%, 19.2%, 8.5%, and 21.8%, respectively, at age 15. Having older siblings, more weekly pocket money, depressive symptoms, anxiety symptoms, and passive coping strategy were associated with reported online sexual harassment. Male gender, depressive symptoms, and anxiety symptoms were associated with reported cyber-bullying victimisation. Female gender was associated with academic-purpose computer overuse, while male gender, lower academic level, increased height, and having older siblings were associated with game-purpose computer-overuse. Conclusion Different environmental and psychological factors predicted different aspects of problematic Internet experiences and computer-using time. This knowledge is important for framing public health interventions to educate adolescents about, and prevent, internet-derived problems. PMID:24605120

  6. The Power of Computer-aided Tomography to Investigate Marine Benthic Communities

    EPA Science Inventory

    Utilization of Computer-aided-Tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...

  7. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
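
    A minimal sketch of cyclic subgradient projections for a convex feasibility problem with sets of the form {x : f_i(x) <= 0}; the fixed relaxation parameter here replaces the paper's self-adapting control strategy, which is not reproduced.

        import numpy as np

        def subgradient_projection_step(x, f, grad, relaxation=1.0):
            """One relaxed subgradient projection toward the set {x : f(x) <= 0}."""
            fx = f(x)
            if fx <= 0.0:
                return x
            g = grad(x)
            return x - relaxation * fx / np.dot(g, g) * g

        def cyclic_subgradient_projections(x0, constraints, relaxation=1.0, n_sweeps=200):
            """Cyclic sweeps over (f, grad) pairs describing convex sets {f_i <= 0}."""
            x = np.asarray(x0, dtype=float)
            for _ in range(n_sweeps):
                for f, grad in constraints:
                    x = subgradient_projection_step(x, f, grad, relaxation)
            return x

        # toy usage: intersect two half-planes and a disc in R^2
        constraints = [
            (lambda x: x[0] + x[1] - 2.0,  lambda x: np.array([1.0, 1.0])),
            (lambda x: -x[0],              lambda x: np.array([-1.0, 0.0])),
            (lambda x: np.dot(x, x) - 4.0, lambda x: 2.0 * x),
        ]
        print(cyclic_subgradient_projections(np.array([5.0, -3.0]), constraints))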

  8. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces.

    PubMed

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2008-07-03

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.

  9. Cost-Benefit Analysis for ECIA Chapter 1 and State DPPF Programs Comparing Groups Receiving Regular Program Instruction and Groups Receiving Computer Assisted Instruction/Computer Management System (CAI/CMS). 1986-87.

    ERIC Educational Resources Information Center

    Chamberlain, Ed

    A cost benefit study was conducted to determine the effectiveness of a computer assisted instruction/computer management system (CAI/CMS) as an alternative to conventional methods of teaching reading within Chapter 1 and DPPF funded programs of the Columbus (Ohio) Public Schools. The Chapter 1 funded Compensatory Language Experiences and Reading…

  10. Development and applications of two computational procedures for determining the vibration modes of structural systems. [aircraft structures - aerospaceplanes

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.

    1975-01-01

    Two computational procedures for analyzing complex structural systems for their natural modes and frequencies of vibration are presented. Both procedures are based on a substructures methodology and both employ the finite-element stiffness method to model the constituent substructures. The first procedure is a direct method based on solving the eigenvalue problem associated with a finite-element representation of the complete structure. The second procedure is a component-mode synthesis scheme in which the vibration modes of the complete structure are synthesized from modes of substructures into which the structure is divided. The analytical basis of the methods contains a combination of features which enhance the generality of the procedures. The computational procedures exhibit a unique utilitarian character with respect to the versatility, computational convenience, and ease of computer implementation. The computational procedures were implemented in two special-purpose computer programs. The results of the application of these programs to several structural configurations are shown and comparisons are made with experiment.

  11. A Computer Simulation of Community Pharmacy Practice for Educational Use.

    PubMed

    Bindoff, Ivan; Ling, Tristan; Bereznicki, Luke; Westbury, Juanita; Chalmers, Leanne; Peterson, Gregory; Ollington, Robert

    2014-11-15

    To provide a computer-based learning method for pharmacy practice that is as effective as paper-based scenarios, but more engaging and less labor-intensive. We developed a flexible and customizable computer simulation of community pharmacy. Using it, the students would be able to work through scenarios which encapsulate the entirety of a patient presentation. We compared the traditional paper-based teaching method to our computer-based approach using equivalent scenarios. The paper-based group had 2 tutors while the computer group had none. Both groups were given a prescenario and postscenario clinical knowledge quiz and survey. Students in the computer-based group had generally greater improvements in their clinical knowledge score, and third-year students using the computer-based method also showed more improvements in history taking and counseling competencies. Third-year students also found the simulation fun and engaging. Our simulation of community pharmacy provided an educational experience as effective as the paper-based alternative, despite the lack of a human tutor.

  12. Data handling and analysis for the 1971 corn blight watch experiment.

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.; Phillips, T. L.; Landgrebe, D. A.

    1972-01-01

    Review of the data handling and analysis methods used in the near-operational test of remote sensing systems provided by the 1971 corn blight watch experiment. The general data analysis techniques and, particularly, the statistical multispectral pattern recognition methods for automatic computer analysis of aircraft scanner data are described. Some of the results obtained are examined, and the implications of the experiment for future data communication requirements of earth resource survey systems are discussed.

  13. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  14. Beyond Fourier

    NASA Astrophysics Data System (ADS)

    Hoch, Jeffrey C.

    2017-10-01

    Non-Fourier methods of spectrum analysis are gaining traction in NMR spectroscopy, driven by their utility for processing nonuniformly sampled data. These methods afford new opportunities for optimizing experiment time, resolution, and sensitivity of multidimensional NMR experiments, but they also pose significant challenges not encountered with the discrete Fourier transform. A brief history of non-Fourier methods in NMR serves to place different approaches in context. Non-Fourier methods reflect broader trends in the growing importance of computation in NMR, and offer insights for future software development.

  15. Computational gestalts and perception thresholds.

    PubMed

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalt" from the atomic retina input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. In continuation, we explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we shall show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, where we compared the gestalt detection performance of several subjects with the predictable detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.
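
    The quantitative grouping laws referred to above are usually phrased, in this line of work, through an a-contrario "number of false alarms" (NFA): a candidate gestalt is accepted when the expected number of equally strong events arising by chance under a noise model falls below a threshold. The sketch below computes that quantity for the classic example of aligned gradient orientations along a segment; the particular numbers are illustrative.

        from math import comb

        def binomial_tail(l, k, p):
            """P[S >= k] for S ~ Binomial(l, p)."""
            return sum(comb(l, i) * p**i * (1.0 - p)**(l - i) for i in range(k, l + 1))

        def nfa(n_tests, l, k, p):
            """Expected number of chance events at least this strong among n_tests candidates.
            An event is deemed epsilon-meaningful (a detectable gestalt) when nfa < epsilon."""
            return n_tests * binomial_tail(l, k, p)

        # e.g. a candidate segment of l = 100 pixels, k = 30 of them with gradient
        # orientation aligned with the segment up to precision p = 1/16, tested
        # among roughly N^4 possible segments of an N x N image
        N = 256
        print(nfa(N**4, 100, 30, 1.0 / 16.0))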

  16. Jennifer van Rij | NREL

    Science.gov Websites

    Jennifer.Vanrij@nrel.gov | 303-384-7180. Jennifer's expertise is in developing computational modeling methods, collaboratively developing numerical modeling methods to simulate the hydrodynamic, structural dynamic, and power-elastic interactions. Her other diverse work experiences include developing numerical modeling methods for

  17. A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation

    NASA Astrophysics Data System (ADS)

    Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao

    2015-03-01

    Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on the CUDA technology provided by NVIDIA Corporation. Instead of building a complex model, we aimed at optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain fast computation speed. Besides making use of the texture fetching operation, which helps gain faster interpolation speed, we fixed the sampling numbers in the computation of the projection to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computation complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
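
    A CPU sketch of the projection operator being parallelized: a simplified parallel-beam analogue in which every ray uses a fixed number of samples and linear interpolation, which is the role CUDA texture fetching plays in the paper. The geometry here is 2D and illustrative; the actual work targets cone-beam projection and back-projection on the GPU.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def forward_project(image, angles, n_detectors, n_samples=None):
            """Parallel-beam forward projection with a fixed number of samples per ray
            and linear interpolation (the CPU analogue of texture fetching)."""
            ny, nx = image.shape
            n_samples = n_samples or max(ny, nx)
            cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
            half = min(cx, cy)
            dets = np.linspace(-half, half, n_detectors)
            ts = np.linspace(-half, half, n_samples)
            sino = np.zeros((len(angles), n_detectors))
            for ia, a in enumerate(angles):
                ca, sa = np.cos(a), np.sin(a)
                for idet, s in enumerate(dets):
                    # ray: point = s * (cos a, sin a) + t * (-sin a, cos a)
                    xs = cx + s * ca - ts * sa
                    ys = cy + s * sa + ts * ca
                    vals = map_coordinates(image, [ys, xs], order=1, mode='constant')
                    sino[ia, idet] = vals.sum() * (ts[1] - ts[0])
            return sino

        img = np.zeros((64, 64))
        img[24:40, 24:40] = 1.0
        sino = forward_project(img, np.linspace(0, np.pi, 90, endpoint=False), 64)
        print(sino.shape)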

  18. Realistic inversion of diffraction data for an amorphous solid: The case of amorphous silicon

    NASA Astrophysics Data System (ADS)

    Pandey, Anup; Biswas, Parthapratim; Bhattarai, Bishal; Drabold, D. A.

    2016-12-01

    We apply a method called "force-enhanced atomic refinement" (FEAR) to create a computer model of amorphous silicon (a-Si) based upon the highly precise x-ray diffraction experiments of Laaziri et al. [Phys. Rev. Lett. 82, 3460 (1999), 10.1103/PhysRevLett.82.3460]. The logic underlying our calculation is to estimate the structure of a real sample of a-Si using experimental data and chemical information included in an unbiased way, starting from random coordinates. The model is in close agreement with experiment and also sits at a suitable energy minimum according to density-functional calculations. In agreement with experiments, we find a small concentration of coordination defects that we discuss, including their electronic consequences. The gap states in the FEAR model are delocalized compared to a continuous random network model. The method is more efficient and accurate, in the sense of fitting the diffraction data, than conventional melt-quench methods. We compute the vibrational density of states and the specific heat, and we find that both compare favorably to experiments.

  19. Enabling Grid Computing resources within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different tiers, with several computing centres providing a specific set of services for the different steps of data processing, such as detector calibration, simulation, data filtering, reconstruction, and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  20. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions

    PubMed Central

    Box, Simon

    2014-01-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance of delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570
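
    The learning step described above can be sketched in a few lines; the synthetic "human log" below (a player who tends to serve the approach with the longest queue), the feature set, and the MLP settings are illustrative assumptions, not the HuTMaC implementation or its data.

```python
# Sketch: supervised learning of a signal-control policy from logged human play.
# A synthetic log stands in for the game data; all names and settings are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
queues = rng.integers(0, 20, size=(5000, 4)).astype(float)   # queue length per approach
labels = queues.argmax(axis=1)                               # phase the "human" picks
deviate = rng.random(5000) < 0.1                             # occasional human deviations
labels[deviate] = rng.integers(0, 4, deviate.sum())

X_tr, X_te, y_tr, y_te = train_test_split(queues, labels, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out agreement with the human policy:",
      round(accuracy_score(y_te, clf.predict(X_te)), 3))

def choose_phase(queue_lengths):
    """Run-time control: return the phase the learned policy selects."""
    return int(clf.predict(np.asarray(queue_lengths, dtype=float).reshape(1, -1))[0])

print("example decision for queues [3, 12, 5, 1]:", choose_phase([3, 12, 5, 1]))
```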

  1. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    PubMed

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance of delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.

  2. A Computer Program for the Calculation of Three-Dimensional Transonic Nacelle/Inlet Flowfields

    NASA Technical Reports Server (NTRS)

    Vadyak, J.; Atta, E. H.

    1983-01-01

    A highly efficient computer analysis was developed for predicting transonic nacelle/inlet flowfields. This algorithm can compute the three dimensional transonic flowfield about axisymmetric (or asymmetric) nacelle/inlet configurations at zero or nonzero incidence. The flowfield is determined by solving the full-potential equation in conservative form on a body-fitted curvilinear computational mesh. The difference equations are solved using the AF2 approximate factorization scheme. This report presents a discussion of the computational methods used to both generate the body-fitted curvilinear mesh and to obtain the inviscid flow solution. Computed results and correlations with existing methods and experiment are presented. Also presented are discussions on the organization of the grid generation (NGRIDA) computer program and the flow solution (NACELLE) computer program, descriptions of the respective subroutines, definitions of the required input parameters for both algorithms, a brief discussion on interpretation of the output, and sample cases to illustrate application of the analysis.

  3. Pharmacist Computer Skills and Needs Assessment Survey

    PubMed Central

    Jewesson, Peter J

    2004-01-01

    Background To use technology effectively for the advancement of patient care, pharmacists must possess a variety of computer skills. We recently introduced a novel applied informatics program in this Canadian hospital clinical service unit to enhance the informatics skills of our members. Objective This study was conducted to gain a better understanding of the baseline computer skills and needs of our hospital pharmacists immediately prior to the implementation of an applied informatics program. Methods In May 2001, an 84-question written survey was distributed by mail to 106 practicing hospital pharmacists in our multi-site, 1500-bed, acute-adult-tertiary care Canadian teaching hospital in Vancouver, British Columbia. Results Fifty-eight surveys (55% of total) were returned within the two-week study period. The survey responses reflected the opinions of licensed BSc and PharmD hospital pharmacists with a broad range of pharmacy practice experience. Most respondents had home access to personal computers, and regularly used computers in the work environment for drug distribution, information management, and communication purposes. Few respondents reported experience with handheld computers. Software use experience varied according to application. Although patient-care information software and e-mail were commonly used, experience with spreadsheet, statistical, and presentation software was negligible. The respondents were familiar with Internet search engines, and these were reported to be the most common method of seeking clinical information online. Although many respondents rated themselves as being generally computer literate and not particularly anxious about using computers, the majority believed they required more training to reach their desired level of computer literacy. Lack of familiarity with computer-related terms was prevalent. Self-reported basic computer skill was typically at a moderate level, and varied depending on the task. Specifically, respondents rated their ability to manipulate files, use software help features, and install software as low, but rated their ability to access and navigate the Internet as high. Respondents were generally aware of what online resources were available to them and Clinical Pharmacology was the most commonly employed reference. In terms of anticipated needs, most pharmacists believed they needed to upgrade their computer skills. Medical database and Internet searching skills were identified as those in greatest need of improvement. Conclusions Most pharmacists believed they needed to upgrade their computer skills. Medical database and Internet searching skills were identified as those in greatest need of improvement for the purposes of improving practice effectiveness. PMID:15111277

  4. Intravenous catheter training system: computer-based education versus traditional learning methods.

    PubMed

    Engum, Scott A; Jeffries, Pamela; Fisher, Lisa

    2003-07-01

    Virtual reality simulators allow trainees to practice techniques without consequences, reduce potential risk associated with training, minimize animal use, and help to develop standards and optimize procedures. Current intravenous (IV) catheter placement training methods utilize plastic arms; however, the lack of variability can diminish the educational stimulus for the student. This study compares the effectiveness of an interactive, multimedia, virtual reality computer IV catheter simulator with a traditional laboratory experience in teaching IV venipuncture skills to both nursing and medical students. A randomized, pretest-posttest experimental design was employed. A total of 163 participants, 70 baccalaureate nursing students and 93 third-year medical students beginning their fundamental skills training, were recruited. The students ranged in age from 20 to 55 years (mean 25). Fifty-eight percent were female, and 68% perceived themselves as having average computer skills (25% declaring excellence). The methods of IV catheter education compared included a traditional method of instruction involving a scripted self-study module with a 10-minute videotape, instructor demonstration, and hands-on experience using plastic mannequin arms. The second method involved an interactive, multimedia, commercially made computer catheter simulator program utilizing virtual reality (CathSim). The pretest scores were similar between the computer and the traditional laboratory groups. There was a significant improvement in cognitive gains, student satisfaction, and documentation of the procedure with the traditional laboratory group compared with the computer catheter simulator group. Both groups were similar in their ability to demonstrate the skill correctly. Conclusions: This evaluation and assessment was an initial effort to assess new teaching methodologies related to intravenous catheter placement and their effects on student learning outcomes and behaviors. Technology alone is not a solution for stand-alone IV catheter placement education. A traditional learning method was preferred by students. The combination of these two methods of education may further enhance the trainee's satisfaction and skill acquisition level.

  5. Methods for modeling cytoskeletal and DNA filaments

    NASA Astrophysics Data System (ADS)

    Andrews, Steven S.

    2014-02-01

    This review summarizes the models that researchers use to represent the conformations and dynamics of cytoskeletal and DNA filaments. It focuses on models that address individual filaments in continuous space. Conformation models include the freely jointed, Gaussian, angle-biased chain (ABC), and wormlike chain (WLC) models, of which the first three bend at discrete joints and the last bends continuously. Predictions from the WLC model generally agree well with experiment. Dynamics models include the Rouse, Zimm, stiff rod, dynamic WLC, and reptation models, of which the first four apply to isolated filaments and the last to entangled filaments. Experiments show that the dynamic WLC and reptation models are most accurate. They also show that biological filaments typically experience strong hydrodynamic coupling and/or constrained motion. Computer simulation methods that address filament dynamics typically compute filament segment velocities from local forces using the Langevin equation and then integrate these velocities with explicit or implicit methods; the former are more versatile and the latter are more efficient. Much remains to be discovered in biological filament modeling. In particular, filament dynamics in living cells are not well understood, and current computational methods are too slow and not sufficiently versatile. Although primarily a review, this paper also presents new statistical calculations for the ABC and WLC models. Additionally, it corrects several discrepancies in the literature about bending and torsional persistence length definitions, and their relations to flexural and torsional rigidities.
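
    The simulation practice summarized above (bead velocities computed from local forces via the Langevin equation and integrated explicitly) can be illustrated with a minimal bead-spring filament; the discretization, parameter values, and the neglect of hydrodynamic coupling are illustrative assumptions rather than any specific model from the review.

```python
# Sketch: explicit (Euler-Maruyama) integration of an overdamped Langevin
# bead-spring filament with harmonic stretching and discrete bending forces.
import numpy as np

rng = np.random.default_rng(0)
N, steps = 50, 2000
b = 10e-9          # bead spacing (m), illustrative
k_s = 1e-3         # stretching stiffness (N/m)
kappa = 4e-26      # bending rigidity (J*m), roughly actin-like
gamma = 1e-8       # drag coefficient per bead (kg/s)
kBT = 4.1e-21      # thermal energy (J)
dt = 1e-9          # time step (s)

r = np.zeros((N, 3))
r[:, 0] = b * np.arange(N)                        # straight initial filament

def forces(r):
    f = np.zeros_like(r)
    bond = r[1:] - r[:-1]
    length = np.linalg.norm(bond, axis=1, keepdims=True)
    f_pair = k_s * (1.0 - b / length) * bond      # harmonic stretching of each bond
    f[:-1] += f_pair
    f[1:] -= f_pair
    d = np.zeros_like(r)
    d[1:-1] = r[2:] - 2.0 * r[1:-1] + r[:-2]      # discrete curvature vectors
    # bending force = -(kappa/b^3) * second difference of the curvature vectors
    f -= (kappa / b**3) * (np.roll(d, 1, axis=0) - 2.0 * d + np.roll(d, -1, axis=0))
    return f

for _ in range(steps):
    noise = np.sqrt(2.0 * kBT * dt / gamma) * rng.standard_normal(r.shape)
    r += dt * forces(r) / gamma + noise           # overdamped Langevin update

print("end-to-end distance after", steps, "steps:", np.linalg.norm(r[-1] - r[0]))
```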

  6. Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.

    ERIC Educational Resources Information Center

    Raymond, Margaret; And Others

    1983-01-01

    Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry, modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
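
    The linear-algebra core of a standard-addition calculation of this kind can be sketched as below; the two-analyte, two-wavelength numbers are made up for illustration, and the sketch is not the program described in the ERIC record.

```python
# Sketch of a generalized-standard-addition-style calculation: responses are
# modeled as R = C K (concentrations times sensitivities).  Known standard
# additions let us estimate K, then solve for the unknown initial concentrations.
import numpy as np

r0 = np.array([0.40, 0.55])                # response of the unknown sample at 2 wavelengths
delta_C = np.array([[1.0, 0.0],            # standard additions of analytes 1 and 2 (e.g., mM)
                    [0.0, 1.0],
                    [1.0, 1.0]])
R_added = np.array([[0.65, 0.70],          # responses measured after each addition
                    [0.52, 0.83],
                    [0.77, 0.98]])
delta_R = R_added - r0                     # response changes caused by the additions

K, *_ = np.linalg.lstsq(delta_C, delta_R, rcond=None)   # sensitivity matrix (analytes x wavelengths)
c0, *_ = np.linalg.lstsq(K.T, r0, rcond=None)           # initial concentrations solving r0 = c0 @ K
print("estimated initial concentrations:", np.round(c0, 3))
```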

  7. Computational Design of Functional Ca-S-H and Oxide-Doped Alloy Systems

    NASA Astrophysics Data System (ADS)

    Yang, Shizhong; Chilla, Lokeshwar; Yang, Yan; Li, Kuo; Wicker, Scott; Zhao, Guang-Lin; Khosravi, Ebrahim; Bai, Shuju; Zhang, Boliang; Guo, Shengmin

    Computer-aided functional materials design accelerates the discovery of novel materials. This presentation will cover our recent research advances on property prediction for the Ca-S-H system and on property simulation and experimental validation for oxide-doped high entropy alloys. Several recently developed computational materials design methods were applied to predict the physical and chemical properties of the two systems. A comparison of simulation results with the corresponding experimental data will be presented. This research is partially supported by NSF CIMM project (OIA-15410795 and the Louisiana BoR), NSF HBCU Supplement climate change and ecosystem sustainability subproject 3, and LONI high performance computing time allocation loni mat bio7.

  8. Large Scale Flutter Data for Design of Rotating Blades Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2012-01-01

    A procedure to compute flutter boundaries of rotating blades is presented, based on (a) the Navier-Stokes equations and (b) a frequency-domain method compatible with industry practice. The procedure is first validated against (a) unsteady loads from a flapping-wing experiment and (b) the flutter boundary of a fixed-wing experiment. A large-scale flutter computation is then demonstrated for a rotating blade: (a) a single job-submission script; (b) a flutter boundary obtained in 24 hours of wall-clock time on 100 cores; (c) linear scalability with the number of cores, tested with 1000 cores, which produced data for 10 flutter boundaries in 25 hours. Further wall-clock speed-up is possible by performing parallel computations within each case.

  9. A Method of Neighbor Classes Based SVM Classification for Optical Printed Chinese Character Recognition

    PubMed Central

    Zhang, Jie; Wu, Xiaohong; Yu, Yanmei; Luo, Daisheng

    2013-01-01

    In optical printed Chinese character recognition (OPCCR), many classifiers have been proposed for the recognition. Among the classifiers, support vector machine (SVM) might be the best classifier. However, SVM is a classifier for two classes. When it is used for multi-classes in OPCCR, its computation is time-consuming. Thus, we propose a neighbor classes based SVM (NC-SVM) to reduce the computation consumption of SVM. Experiments of NC-SVM classification for OPCCR have been done. The results of the experiments have shown that the NC-SVM we proposed can effectively reduce the computation time in OPCCR. PMID:23536777

  10. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.

  11. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. These results represent experiments on a real-world problem represented by the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
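
    To make the mapping to an annealer concrete, one standard QUBO encoding of weighted k-clique is sketched below; the penalty constants, the toy graph, and the brute-force check (standing in for the annealer) are illustrative and are not the formulation used in the paper.

```python
# Sketch: a common QUBO encoding of the weighted k-clique problem.
# x_v = 1 means vertex v is selected.  Minimize
#   -sum_v w_v x_v + A*(sum_v x_v - k)^2 + B*sum over non-edges of x_u x_v
import itertools
import numpy as np

weights = {0: 3.0, 1: 2.0, 2: 4.0, 3: 1.0}
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
k, A, B = 3, 10.0, 10.0
n = len(weights)

Q = np.zeros((n, n))
for v, w in weights.items():
    Q[v, v] += -w + A * (1 - 2 * k)              # linear terms (constant A*k^2 dropped)
for u, v in itertools.combinations(range(n), 2):
    Q[u, v] += 2 * A                             # quadratic part of the cardinality penalty
    if (u, v) not in edges and (v, u) not in edges:
        Q[u, v] += B                             # forbid picking non-adjacent pairs

# Brute-force minimization of x^T Q x, a stand-in for the annealer at toy size:
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("selected vertices:", [v for v in range(n) if best[v]])   # expected: [0, 1, 2]
```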

  12. AMPS data management concepts. [Atmospheric, Magnetospheric and Plasma in Space experiment

    NASA Technical Reports Server (NTRS)

    Metzelaar, P. N.

    1975-01-01

    Five typical AMPS experiments were formulated to allow simulation studies to verify data management concepts. Design studies were conducted to analyze these experiments in terms of the applicable procedures, data processing and displaying functions. Design concepts for AMPS data management system are presented which permit both automatic repetitive measurement sequences and experimenter-controlled step-by-step procedures. Extensive use is made of a cathode ray tube display, the experimenters' alphanumeric keyboard, and the computer. The types of computer software required by the system and the possible choices of control and display procedures available to the experimenter are described for several examples. An electromagnetic wave transmission experiment illustrates the methods used to analyze data processing requirements.

  13. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous techniques, this method obtains partial 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, which greatly reduces the computation and allows us to demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to establish correspondence between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, much larger than in common computer vision experiments. As a result, our method can be applied to many scenarios such as live sports broadcasting and video conferencing.

  14. An immersed boundary method for modeling a dirty geometry data

    NASA Astrophysics Data System (ADS)

    Onishi, Keiji; Tsubokura, Makoto

    2017-11-01

    We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum through an axial linear projection and by an approximate-domain assumption that satisfies mass conservation in the cells containing the wall. The methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. The methodology offers a path to quick solutions on a next-generation large-scale supercomputer. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.

  15. Computational Nanotechnology Program

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1997-01-01

    The objectives are: (1) development of methodological and computational tools for the quantum chemistry study of carbon nanostructures and (2) development of a fundamental understanding of the bonding, reactivity, and electronic structure of carbon nanostructures. Our calculations have continued to play a central role in understanding the outcome of the carbon nanotube macroscopic production experiment. The calculations on buckyonions offer a resolution of a long-standing controversy between experiment and theory. Our new tight-binding method offers increased speed for realistic simulations of large carbon nanostructures.

  16. A Procedure for Measuring Latencies in Brain-Computer Interfaces

    PubMed Central

    Wilson, J. Adam; Mellinger, Jürgen; Schalk, Gerwin; Williams, Justin

    2011-01-01

    Brain-computer interface (BCI) systems must process neural signals with consistent timing in order to support adequate system performance. Thus, it is important to have the capability to determine whether a particular BCI configuration (i.e., hardware, software) provides adequate timing performance for a particular experiment. This report presents a method of measuring and quantifying different aspects of system timing in several typical BCI experiments across a range of settings, and presents comprehensive measures of expected overall system latency for each experimental configuration. PMID:20403781

  17. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
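
    For readers unfamiliar with the construction referenced above, the maximum-entropy biased ensemble has the standard exponential form below (a textbook statement of Jaynes' result, not a formula quoted from the paper): p0 is the unbiased distribution, the f_i are the experimentally measured observables, and the multipliers λ_i are adjusted so that the ensemble averages match the experimental values F_i.

```latex
p(x) \;=\; \frac{1}{Z}\, p_0(x)\, \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big),
\qquad
Z \;=\; \int p_0(x)\, e^{-\sum_i \lambda_i f_i(x)}\, dx,
\qquad
\langle f_i \rangle_{p} \;=\; F_i .
```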

  18. Designing for Learner Engagement with Computer Based Testing

    ERIC Educational Resources Information Center

    Walker, Richard; Handley, Zoe

    2016-01-01

    The issues influencing student engagement with high-stakes computer-based exams were investigated, drawing on feedback from two cohorts of international MA Education students encountering this assessment method for the first time. Qualitative data from surveys and focus groups on the students' examination experience were analysed, leading to the…

  19. The Project Method as Practice of Study Activation

    ERIC Educational Resources Information Center

    Fazlyeva, Zulfiya Kh.; Sheinina, Dina P.; Deputatova, Natalia A.

    2016-01-01

    Relevance of the problem stated in the article is determined by new teaching approach uniting the traditional teaching experience with that of the modern information technologies, all being merged into a new course of the computer lingua-didactics (the international term of which is "Computer Assisted Language Learning" (CALL) or…

  20. Training Programs in Applications Software.

    ERIC Educational Resources Information Center

    Modianos, Doan T.; Cornwell, Larry W.

    1988-01-01

    Description of training programs for using business applications software highlights implementing programs for Lotus 1-2-3 and dBASE III Plus. The amount of computer experience of the users and the difference in training methods needed are discussed, and the use of a Macintosh computer for producing notes is explained. (LRW)

  1. An Integrated Evaluation Method for E-Learning: A Case Study

    ERIC Educational Resources Information Center

    Rentroia-Bonito, M. A.; Figueiredo, F.; Martins, A.; Jorge, J. A.; Ghaoui, C.

    2006-01-01

    Technological improvements in broadband and distributed computing are making it possible to distribute live media content cost-effectively. Because of this, organizations are looking into cost-effective approaches to implement e-Learning initiatives. Indeed, computing resources are not enough by themselves to promote better e-Learning experiences.…

  2. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
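
    A miniature version of the kind of designed simulation described above is sketched here, assuming scikit-learn; the single data-generating setting (n = 100, p = 2000, ten informative features) and the single train/test split are placeholders for the paper's full factorial design over sample size, sparsity, signal-to-noise ratio, and feature correlation.

```python
# Sketch: compare ridge, lasso and elastic net on a synthetic "large p, small n"
# regression problem; all settings are illustrative, not the study's design.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p, n_informative = 100, 2000, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:n_informative] = rng.normal(0.0, 2.0, n_informative)   # sparse true model
y = X @ beta + rng.normal(0.0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {"ridge": Ridge(alpha=1.0),
          "lasso": Lasso(alpha=0.1),
          "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:12s} test MSE = {mean_squared_error(y_te, model.predict(X_te)):.2f}")
```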

  3. The Effect of the Computer Assisted Teaching and 7e Model of the Constructivist Learning Methods on the Achievements and Attitudes of High School Students

    ERIC Educational Resources Information Center

    Gönen, Selahattin; Kocakaya, Serhat; Inan, Cemil

    2006-01-01

    This study provides a comparative effect study of the Computer Assisted Teaching and the 7E model of the Constructivist Learning methods on attitudes and achievements of the students in physics classes. The experiments have been carried out in a private high school in Diyarbakir/Turkey on groups of first year students whose pre-test scores of…

  4. Computational predictions of stereochemistry in asymmetric thiazolium- and triazolium-catalyzed benzoin condensations

    PubMed Central

    Dudding, Travis; Houk, Kendall N.

    2004-01-01

    The catalytic asymmetric thiazolium- and triazolium-catalyzed benzoin condensations of aldehydes and ketones were studied with computational methods. Transition-state geometries were optimized by using Morokuma's IMOMO [integrated MO (molecular orbital) + MO method] variation of ONIOM (n-layered integrated molecular orbital method) with a combination of B3LYP/6–31G(d) and AM1 levels of theory, and final transition-state energies were computed with single-point B3LYP/6–31G(d) calculations. Correlations between experiment and theory were found, and the origins of stereoselection were identified. Thiazolium catalysts were predicted to be less selective than triazolium catalysts, a trend also found experimentally. PMID:15079058

  5. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    PubMed

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. First, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Second, we design a simple yet effective method to transfer features learned from CDBNs on generic source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  6. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    PubMed Central

    Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. First, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Second, we design a simple yet effective method to transfer features learned from CDBNs on generic source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method. PMID:27847827

  7. Computational model for noncontact atomic force microscopy: energy dissipation of cantilever.

    PubMed

    Senda, Yasuhiro; Blomqvist, Janne; Nieminen, Risto M

    2016-09-21

    We propose a computational model for noncontact atomic force microscopy (AFM) in which the atomic force between the cantilever tip and the surface is calculated using a molecular dynamics method, and the macroscopic motion of the cantilever is modeled by an oscillating spring. The movement of atoms in the tip and surface is connected with the oscillating spring using a recently developed coupling method. In this computational model, the oscillation energy is dissipated, as observed in AFM experiments. We attribute this dissipation to the hysteresis and nonconservative properties of the interatomic force that acts between the atoms in the tip and sample surface. The dissipation rate strongly depends on the parameters used in the computational model.

  8. A Fast Approach to Automatic Detection of Brain Lesions

    PubMed Central

    Koley, Subhranil; Chakraborty, Chandan; Mainero, Caterina; Fischl, Bruce; Aganj, Iman

    2017-01-01

    Template matching is a popular approach to computer-aided detection of brain lesions from magnetic resonance (MR) images. The outcomes are often sufficient for localizing lesions and assisting clinicians in diagnosis. However, processing large MR volumes with three-dimensional (3D) templates is demanding in terms of computational resources, hence the importance of the reduction of computational complexity of template matching, particularly in situations in which time is crucial (e.g., emergent stroke). In view of this, we make use of 3D Gaussian templates with varying radii and propose a new method to compute the normalized cross-correlation coefficient as a similarity metric between the MR volume and the template to detect brain lesions. Contrary to the conventional fast Fourier transform (FFT) based approach, whose runtime grows as O(N log N) with the number of voxels, the proposed method computes the cross-correlation in O(N). We show through our experiments that the proposed method outperforms the FFT approach in terms of computational time, and retains comparable accuracy. PMID:29082383
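
    One common way to avoid the FFT when correlating against a smooth template is to use separable filters and running local statistics, sketched below with SciPy; this is only an illustration of that general idea (the constant template-norm factor is dropped), not the specific O(N) algorithm proposed in the paper.

```python
# Sketch: FFT-free, roughly linear-cost "normalized" response to a Gaussian
# template, using separable Gaussian and uniform filters for local statistics.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def local_gaussian_response(volume, sigma, window):
    vol = volume.astype(float)
    mean = uniform_filter(vol, size=window)                    # local mean
    sq_mean = uniform_filter(vol ** 2, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-12))      # local standard deviation
    num = gaussian_filter(vol, sigma) - mean                   # zero-mean Gaussian correlation
    return num / std                                           # template-norm constant omitted

vol = np.random.default_rng(0).random((64, 64, 32))
vol[30:36, 30:36, 14:18] += 2.0                                # synthetic bright "lesion"
response = local_gaussian_response(vol, sigma=2.0, window=9)
print("peak response at voxel:", np.unravel_index(np.argmax(response), vol.shape))
```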

  9. Chandrasekhar-type algorithms for fast recursive estimation in linear systems with constant parameters

    NASA Technical Reports Server (NTRS)

    Choudhury, A. K.; Djalali, M.

    1975-01-01

    In the recursive method proposed here, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation but from certain other differential equations of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Initial experience showed that the method offers some computational savings and is less vulnerable to loss of positive definiteness of the covariance matrix.

  10. Viscosity Measurement of Highly Viscous Liquids Using Drop Coalescence in Low Gravity

    NASA Technical Reports Server (NTRS)

    Antar, Basil N.; Ethridge, Edwin; Maxwell, Daniel

    1999-01-01

    The method of drop coalescence is being investigated for use as a method for determining the viscosity of highly viscous undercooled liquids. A low-gravity environment is necessary in this case to minimize the undesirable effects of body forces and liquid motion in levitated drops. The low-gravity environment also allows investigation of large liquid volumes, which can lead to much higher accuracy in the viscosity calculations than is possible under 1-g conditions. The drop coalescence method is preferred over the drop oscillation technique since the latter can only be applied to liquids with vanishingly small viscosities. The technique developed relies both on a highly accurate solution of the Navier-Stokes equations and on data from experiments conducted in a near-zero-gravity environment. In the analytical part of the method, two liquid volumes are brought into contact and coalesce under the action of surface tension alone. The development of the free-surface geometry and its velocity during coalescence, obtained from numerical computations, are compared with an analogous experimental model. The viscosity in the numerical computations is then adjusted to bring the calculations into agreement with the experimental results; the true liquid viscosity is the one that brings the experiment closest to the calculations. Results are presented for method-validation experiments performed recently on board the NASA KC-135 aircraft, with the numerical solution for this validation case produced using the Boundary Element Method. In these tests the viscosity of a highly viscous liquid, in this case glycerine at room temperature, was determined to a high degree of accuracy using the liquid coalescence method. These experiments gave very encouraging results, which are discussed together with plans for implementing the method in a shuttle flight experiment.

  11. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  12. Classical boson sampling algorithms with superior performance to near-term experiments

    NASA Astrophysics Data System (ADS)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.

  13. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results of calculations on examples with exact solutions, as well as with the additional condition specified with random errors, are presented. The results show the high efficiency of the conjugate gradient iterative method for the numerical solution of this inverse problem.
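
    For reference, the conjugate gradient iteration that the algorithm builds on is sketched below for a generic symmetric positive-definite system; the random test matrix is only a stand-in for the operator that arises when the missing boundary condition is determined iteratively.

```python
# Sketch: textbook conjugate gradient method for A x = b, A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)          # symmetric positive-definite test matrix
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```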

  14. Using Qualitative Research Methods in Higher Education

    ERIC Educational Resources Information Center

    Savenye, Wilhelmina C.; Robinson, Rhonda S.

    2005-01-01

    Researchers investigating issues related to computing in higher education are increasingly using qualitative research methods to conduct their investigations. However, they may have little training or experience in qualitative research. The purpose of this paper is to introduce researchers to the appropriate use of qualitative methods. It begins…

  15. Numerical simulations of the flow with the prescribed displacement of the airfoil and comparison with experiment

    NASA Astrophysics Data System (ADS)

    Řidký, V.; Šidlof, P.; Vlček, V.

    2013-04-01

    The work is devoted to comparing measured data with the results of numerical simulations. The mathematical model used was a model without turbulence (laminar) for incompressible flow. In the experiment, the behavior of the designed NACA0015 airfoil in an airflow was observed. The numerical solution used the OpenFOAM computational package, an open-source finite volume solver. In the numerical solution, the displacement of the airfoil is prescribed to correspond to the experiment. The velocity at a point close to the airfoil surface is compared with experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about 1 million orthogonal hexahedral elements, with the time step limited by the Courant number. Parallel computations are run on supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). The run time is fixed at five periods; the results from the fifth period, together with average values over all periods, are then compared with the experiment.

  16. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
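
    The expected-improvement criterion at the core of EGO can be sketched with a Gaussian-process surrogate as below; the one-dimensional toy objective, kernel choice, and candidate grid are illustrative stand-ins, not the SAGE III thermal model or the multi-start search used in the modified method.

```python
# Sketch: one EGO-style step -- fit a Gaussian-process surrogate to a few design
# points, then pick the candidate that maximizes expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                                 # stand-in for an expensive simulation
    return np.sin(3.0 * x) + 0.5 * x

X_train = np.array([[0.1], [0.4], [0.9], [1.5], [2.0]])
y_train = objective(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

def expected_improvement(X_cand, gp, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma                     # improvement over the best observed minimum
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X_cand = np.linspace(0.0, 2.5, 200).reshape(-1, 1)
ei = expected_improvement(X_cand, gp, y_train.min())
print("next point to evaluate:", X_cand[np.argmax(ei)][0])
```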

  17. Human sense utilization method on real-time computer graphics

    NASA Astrophysics Data System (ADS)

    Maehara, Hideaki; Ohgashi, Hitoshi; Hirata, Takao

    1997-06-01

    We are developing an adjustment method for real-time computer graphics that produces effective graphics conveying the impressions intended by the producer, making technological use of human sensibility. In general, producing real-time computer graphics requires extensive adjustment of parameters such as the 3D object models, their motions and attributes, the view angle, and the parallax, so that the graphics give the audience strong effects such as a sense of material reality or a sense of immersive experience; it is also known that adjusting such parameters by trial and error is costly. A graphics producer often evaluates his graphics in order to improve them; for example, a scene may lack a 'sense of speed' or may need a stronger 'sense of calm'. On the other hand, we can learn how the parameters of computer graphics affect such impressions by statistically analyzing several computer graphics samples that convey different impressions. Paying attention to these two facts, we designed a method of adjusting the parameters by entering impression ratings into a computer. Using this method, it becomes possible to adjust real-time computer graphics more effectively than by the conventional trial-and-error approach.

  18. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1994-01-01

    The research conducted supported two facilities at NASA Ames Research Center: the Hypervelocity Free-Flight Aerodynamic Facility and the 16-Inch Shock Tunnel. During the grant period, a computerized film-reading system was developed, and five- and six-degree-of-freedom parameter-identification routines were written and successfully implemented. Studies of flow separation were conducted, and methods to extract phase shift information from finite-fringe interferograms were developed. Methods for constructing optical images from Computational Fluid Dynamics solutions were also developed, and these methods were used for one-to-one comparisons of experiment and computations.

  19. Adaptive control for eye-gaze input system

    NASA Astrophysics Data System (ADS)

    Zhao, Qijie; Tu, Dawei; Yin, Hairong

    2004-01-01

    The characteristics of vision-based human-computer interaction systems are analyzed, and current practical applications and their limiting factors are discussed. Information processing methods are then put forward. In order to make the communication flexible and spontaneous, algorithms for adaptive control of the user's head movement are designed, and event-based methods with an object-oriented computer language are used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms can meet the needs of the HCI.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
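
    The two operations being compared can be sketched on a synthetic spectrum as below, assuming SciPy and PyWavelets; the wavelet family, decomposition level, filter scales, and test signal are arbitrary choices for illustration, not the settings used in the study.

```python
# Sketch: first-derivative-of-Gaussian filtering vs. a discrete wavelet
# decomposition of a 1-D spectrum.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter1d

wavelengths = np.linspace(400.0, 2500.0, 1024)
spectrum = (np.exp(-0.5 * ((wavelengths - 900.0) / 60.0) ** 2)          # synthetic features
            + 0.6 * np.exp(-0.5 * ((wavelengths - 1700.0) / 120.0) ** 2)
            + 0.01 * np.random.default_rng(0).standard_normal(wavelengths.size))

# Current approach: convolution with first-derivative Gaussian filters at several scales
derivative_fingerprint = [gaussian_filter1d(spectrum, sigma=s, order=1) for s in (4, 8, 16)]

# Wavelet-based alternative: multiresolution decomposition coefficients
wavelet_fingerprint = pywt.wavedec(spectrum, "db4", level=4)

print("derivative scales :", [f.shape for f in derivative_fingerprint])
print("wavelet levels    :", [c.shape for c in wavelet_fingerprint])
```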

  1. Manufacturing Methods and Technology Project Summary Reports

    DTIC Science & Technology

    1985-06-01

    Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) Process for the Production of Cold Forged Gears; Project 483 6121 - Robotic Welding and...Caliber Projectile Bodies; Project 682 8370 - Automatic Inspection and Process Control of Weapons Parts Manufacturing; METALS Project 181 7285 - Cast...designed for use on each project. Experience suggested that a general-purpose computer interface might be designed that could be used on any project.

  2. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry, therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  3. A multilevel finite element method for Fredholm integral eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Xie, Hehu; Zhou, Tao

    2015-12-01

    In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is the computation of Karhunen-Loève expansions of random fields, which play an important role in applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted into a series of integral iterations combined with an eigenvalue solve on the coarsest mesh; any existing efficient integration scheme can then be used for the associated integration process. Error estimates are provided, and the computational complexity is analyzed. Notably, the total computational work of our method is comparable with a single integration step on the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
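
    For orientation, the target problem is the Fredholm eigenvalue problem ∫ C(x,y) φ(y) dy = λ φ(x) arising in Karhunen-Loève expansions; below is a simple Nyström (quadrature) discretization of it, offered only as an illustration of the problem being solved, not of the multilevel finite element scheme itself. The exponential covariance kernel and uniform grid are assumptions.

```python
# Sketch: Nystrom discretization of  int C(x,y) phi(y) dy = lambda * phi(x)
# for a Karhunen-Loeve expansion with an exponential covariance kernel.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                                   # quadrature weights
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)        # exponential covariance kernel

# Symmetrize the weighted kernel so a standard symmetric eigensolver applies
A = np.sqrt(w)[:, None] * C * np.sqrt(w)[None, :]
eigvals, eigvecs = np.linalg.eigh(A)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]        # sort in descending order
modes = eigvecs / np.sqrt(w)[:, None]                     # eigenfunctions on the grid

print("largest KL eigenvalues:", np.round(eigvals[:5], 4))
```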

  4. Computer-aided head film analysis: the University of California San Francisco method.

    PubMed

    Baumrind, S; Miller, D M

    1980-07-01

    Computer technology is already assuming an important role in the management of orthodontic practices. The next 10 years are likely to see expansion in computer usage into the areas of diagnosis, treatment planning, and treatment-record keeping. In the areas of diagnosis and treatment planning, one of the first problems to be attacked will be the automation of head film analysis. The problems of constructing computer-aided systems for this purpose are considered herein in the light of the authors' 10 years of experience in developing a similar system for research purposes. The need for building in methods for automatic detection and correction of gross errors is discussed and the authors' method for doing so is presented. The construction of a rudimentary machine-readable data base for research and clinical purposes is described.

  5. Implementation of Steiner point of fuzzy set.

    PubMed

    Liang, Jiuzhen; Wang, Dejiang

    2014-01-01

    This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One takes a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method that seeks the optimal α-cut set approximating the fuzzy set. The stability of the Steiner point of a fuzzy set is also analyzed. Experiments on image processing are given, in which the two methods are applied to compute the Steiner point of a fuzzy image; both strategies show their own advantages.
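
    A numerical sketch of the α-cut strategy is given below, using the classical support-function formula s(K) = (1/π) ∮ h_K(u) u dθ for the Steiner point of a planar convex set and averaging over α-cuts; the fuzzy "image" (a fuzzy disc on scattered sample points) and all discretizations are illustrative assumptions, not the paper's construction.

```python
# Sketch: Steiner point of a planar convex set via the support function, and a
# simple alpha-cut average as one way to extend it to a fuzzy set.
import numpy as np

def steiner_point(points, n_dirs=720):
    """Steiner point of the convex hull of `points` (shape (m, 2))."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    u = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # unit directions
    h = (points @ u.T).max(axis=0)                            # support function h_K(u)
    return 2.0 * (h[:, None] * u).mean(axis=0)                # equals (1/pi) * discretized integral

def fuzzy_steiner_point(membership, points, alphas=np.linspace(0.05, 0.9, 18)):
    """Uniform average of the Steiner points of the alpha-cut sets."""
    return np.mean([steiner_point(points[membership >= a]) for a in alphas], axis=0)

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(4000, 2))
centre = np.array([0.2, -0.1])
mu = np.clip(1.0 - np.linalg.norm(pts - centre, axis=1), 0.0, 1.0)   # fuzzy disc
print("fuzzy Steiner point ~", np.round(fuzzy_steiner_point(mu, pts), 3))  # near (0.2, -0.1)
```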

  6. Beyond Fourier.

    PubMed

    Hoch, Jeffrey C

    2017-10-01

    Non-Fourier methods of spectrum analysis are gaining traction in NMR spectroscopy, driven by their utility for processing nonuniformly sampled data. These methods afford new opportunities for optimizing experiment time, resolution, and sensitivity of multidimensional NMR experiments, but they also pose significant challenges not encountered with the discrete Fourier transform. A brief history of non-Fourier methods in NMR serves to place different approaches in context. Non-Fourier methods reflect broader trends in the growing importance of computation in NMR, and offer insights for future software development. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Viscous-inviscid interaction method including wake effects for three-dimensional wing-body configurations

    NASA Technical Reports Server (NTRS)

    Streett, C. L.

    1981-01-01

    A viscous-inviscid interaction method has been developed by using a three-dimensional integral boundary-layer method which produces results in good agreement with a finite-difference method in a fraction of the computer time. The integral method is stable and robust and incorporates a model for computation in a small region of streamwise separation. A locally two-dimensional wake model, accounting for thickness and curvature effects, is also included in the interaction procedure. Computation time spent in converging an interacted result is, many times, only slightly greater than that required to converge an inviscid calculation. Results are shown from the interaction method, run at experimental angle of attack, Reynolds number, and Mach number, on a wing-body test case for which viscous effects are large. Agreement with experiment is good; in particular, the present wake model improves prediction of the spanwise lift distribution and lower surface cove pressure.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c

    This study is mainly focused on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to the problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods relative to other classic and popular iterative methods. The experimental results also indicate that application-specific preconditioners may be required for accelerating convergence.
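
    The Lanczos biconjugate A-orthonormalization solvers studied in this record are not part of standard libraries, so the following sketch only illustrates the general setting: a complex-valued nonsymmetric sparse system solved with classic Krylov methods from SciPy under a simple diagonal (Jacobi) preconditioner. The matrix is a synthetic stand-in, not the chemistry problem itself.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      rng = np.random.default_rng(0)
      n = 500
      # Illustrative complex-valued nonsymmetric sparse system (not the chemistry problem).
      A = sp.random(n, n, density=0.01, random_state=0) + 1j * sp.random(n, n, density=0.01, random_state=1)
      A = (A + sp.diags(8.0 + 2j + rng.standard_normal(n))).tocsr()   # strengthen the diagonal
      b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

      # Simple diagonal (Jacobi) preconditioner: approximate A^{-1} by 1/diag(A).
      M = spla.LinearOperator((n, n), matvec=lambda x: x / A.diagonal(), dtype=complex)

      for name, solver in [("GMRES", spla.gmres), ("BiCGStab", spla.bicgstab)]:
          x, info = solver(A, b, M=M)
          res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
          print(f"{name}: info={info}, relative residual={res:.2e}")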

  9. A new computational strategy for identifying essential proteins based on network topological properties and biological information.

    PubMed

    Qin, Chao; Sun, Yongqi; Dong, Yadong

    2017-01-01

    Essential proteins are the proteins that are indispensable to the survival and development of an organism. Deleting a single essential protein will cause lethality or infertility. Identifying and analysing essential proteins are key to understanding the molecular mechanisms of living cells. There are two types of methods for predicting essential proteins: experimental methods, which require considerable time and resources, and computational methods, which overcome the shortcomings of experimental methods. However, the prediction accuracy of computational methods for essential proteins requires further improvement. In this paper, we propose a new computational strategy named CoTB for identifying essential proteins based on a combination of topological properties, subcellular localization information and orthologous protein information. First, we introduce several topological properties of the protein-protein interaction (PPI) network. Second, we propose new methods for measuring orthologous information and subcellular localization and a new computational strategy that uses a random forest prediction model to obtain a probability score for the proteins being essential. Finally, we conduct experiments on four different Saccharomyces cerevisiae datasets. The experimental results demonstrate that our strategy for identifying essential proteins outperforms traditional computational methods and the most recently developed method, SON. In particular, our strategy improves the prediction accuracy to 89, 78, 79, and 85 percent on the YDIP, YMIPS, YMBD and YHQ datasets at the top 100 level, respectively.

  10. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    The Poloidal Diverter Experiment (PDX) facility at Princeton University is the first operating tokamak to require substantial radiation shielding. A calculational model has been developed to estimate the radiation dose in the PDX control room and at the site boundary due to the skyshine effect. An efficient one-dimensional method is used to compute the neutron and capture gamma leakage currents at the top surface of the PDX roof shield. This method employs an S_n calculation in slab geometry and, for the PDX, is superior to spherical models found in the literature. If certain conditions are met, the slab model provides the exact probability of leakage out the top surface of the roof for fusion source neutrons and for capture gamma rays produced in the PDX floor and roof shield. The model also provides the correct neutron and capture gamma leakage current spectra and angular distributions, averaged over the top roof shield surface. For the PDX, this method is nearly as accurate as multidimensional techniques for computing the roof leakage and is much less costly. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab S_n calculation. The capture gamma dose is computed using a simple point-kernel single-scatter method.

  11. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speed up, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  12. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
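
    A serial toy version of the bisection/inverse-iteration combination discussed above can be written in a few lines of NumPy; it omits the distributed-memory parallelism that is the point of the thesis, and the matrix and tolerances are illustrative.

      import numpy as np

      def eig_count_below(d, e, x):
          # Sturm-sequence count of eigenvalues of tridiag(d, e) strictly less than x.
          count, q = 0, 1.0
          for i in range(len(d)):
              q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
              if q == 0.0:
                  q = 1e-300
              if q < 0.0:
                  count += 1
          return count

      def kth_eigenvalue(d, e, k, tol=1e-12):
          # Bisection for the k-th smallest eigenvalue (k = 1..n), starting from Gershgorin bounds.
          r = np.abs(np.concatenate(([0.0], e))) + np.abs(np.concatenate((e, [0.0])))
          lo, hi = np.min(d - r), np.max(d + r)
          while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
              mid = 0.5 * (lo + hi)
              if eig_count_below(d, e, mid) < k:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      def inverse_iteration(d, e, lam, iters=4, seed=0):
          # Eigenvector for eigenvalue estimate lam via shifted inverse iteration.
          n = len(d)
          T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
          shifted = T - (lam + 1e-10) * np.eye(n)       # tiny shift keeps the solve nonsingular
          v = np.random.default_rng(seed).standard_normal(n)
          for _ in range(iters):
              v = np.linalg.solve(shifted, v)
              v /= np.linalg.norm(v)
          return v

      # Example: 100x100 tridiagonal matrix, checked against NumPy's dense solver.
      n = 100
      d, e = np.full(n, 2.0), np.full(n - 1, -1.0)
      lam3 = kth_eigenvalue(d, e, 3)
      v3 = inverse_iteration(d, e, lam3)
      ref = np.sort(np.linalg.eigvalsh(np.diag(d) + np.diag(e, 1) + np.diag(e, -1)))[2]
      print(lam3, ref)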

  13. RPM-WEBBSYS: A web-based computer system to apply the rational polynomial method for estimating static formation temperatures of petroleum and geothermal wells

    NASA Astrophysics Data System (ADS)

    Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.

    2015-12-01

    A Web-Based Computer System (RPM-WEBBSYS) has been developed for the application of the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery processes that occur during well completion. RPM-WEBBSYS has been programmed using advances in information technology to perform SFT computations more efficiently. RPM-WEBBSYS can be readily executed from any computing device (e.g., personal computers and portable computing devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where good agreement between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected from well drilling and shut-in operations, where the typical problems of under- and over-estimation of the SFT (exhibited by most of the existing analytical methods) were effectively corrected.
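
    The exact RPM formulation is given in the cited paper; the sketch below only illustrates the general idea of fitting a rational polynomial to bottomhole temperature build-up data and reading the static formation temperature off the long-time limit. The (1,1) model orders, initial guesses, and data values are hypothetical.

      import numpy as np
      from scipy.optimize import curve_fit

      def rational_T(t, a0, a1, b1):
          # Rational polynomial temperature build-up model (illustrative (1,1) orders).
          return (a0 + a1 * t) / (1.0 + b1 * t)

      # Hypothetical shut-in times (h) and bottomhole temperatures (deg C).
      t = np.array([2.0, 4.0, 6.0, 9.0, 12.0, 18.0, 24.0, 36.0])
      T = np.array([98.0, 110.0, 118.0, 126.0, 131.0, 137.0, 140.0, 144.0])

      popt, _ = curve_fit(rational_T, t, T, p0=[90.0, 10.0, 0.05], maxfev=20000)
      a0, a1, b1 = popt
      sft_estimate = a1 / b1            # limit of the rational polynomial as t -> infinity
      print(f"Estimated static formation temperature: {sft_estimate:.1f} deg C")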

  14. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    PubMed

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. According to the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of this proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision by using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
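
    A minimal sketch of the general idea follows: matched-filter the pressure and particle-velocity channels of an acoustic vector sensor against the transmitted replica, then estimate the azimuth from the averaged acoustic intensity components. The signal model, noise level, and geometry are illustrative; this is not the authors' exact algorithm.

      import numpy as np
      from scipy.signal import correlate

      rng = np.random.default_rng(1)
      fs, f0, dur = 50_000, 5_000, 0.01                # sample rate, pulse frequency, pulse length (s)
      t = np.arange(int(fs * dur)) / fs
      pulse = np.sin(2 * np.pi * f0 * t)               # transmitted replica

      # Simulated echo arriving from 35 degrees, embedded in noise on the p, vx, vy channels.
      theta_true = np.deg2rad(35.0)
      n = 4000
      echo = np.zeros(n); echo[1500:1500 + len(pulse)] = pulse
      p  = echo + 0.5 * rng.standard_normal(n)
      vx = np.cos(theta_true) * echo + 0.5 * rng.standard_normal(n)
      vy = np.sin(theta_true) * echo + 0.5 * rng.standard_normal(n)

      # Matched filtering of each channel with the replica suppresses out-of-band noise.
      mf = lambda x: correlate(x, pulse, mode="same")
      pm, vxm, vym = mf(p), mf(vx), mf(vy)

      # Averaged acoustic intensity components give the azimuth estimate.
      Ix, Iy = np.mean(pm * vxm), np.mean(pm * vym)
      theta_hat = np.arctan2(Iy, Ix)
      print(f"true = {np.rad2deg(theta_true):.1f} deg, estimated = {np.rad2deg(theta_hat):.1f} deg")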

  15. "Sure, I Would Like to Continue": A Method for Mapping the Experience of Engagement in Video Games

    ERIC Educational Resources Information Center

    Schonau-Fog, Henrik; Bjorner, Thomas

    2012-01-01

    In order to explore one aspect of the engaging nature of computer games, this study will propose a method that aims at classifying the experience of engagement in video games. Inspired by a literature review, we will focus on the fundamental causes of engagement that motivate a player so much that he or she wants to continue playing. By organizing…

  16. The Improvement and Individualization of Computer-Assisted Instruction

    DTIC Science & Technology

    1975-09-15

    Spanish experiments had studied at least one Romance language and consequently were able to learn some of the Spanish words by using cognates...Involved the acquisition of foreign-language vocabulary items. The first (using German vocabulary) concerned itself with optimizing the selection of...method. Experiments with Spanish and Russian items showed that the method could be a powerful aid in building and retaining a large vocabulary of

  17. Inventing Motivates and Prepares Student Teachers for Computer-Based Learning

    ERIC Educational Resources Information Center

    Glogger-Frey, I.; Kappich, J.; Schwonke, R.; Holzäpfel, L.; Nückles, M.; Renkl, A.

    2015-01-01

    A brief, problem-oriented phase such as an inventing activity is one potential instructional method for preparing learners not only cognitively but also motivationally for learning. Student teachers often need to overcome motivational barriers in order to use computer-based learning opportunities. In a preliminary experiment, we found that student…

  18. Critical Emergency Medicine Procedural Skills: A Comparative Study of Methods for Teaching and Assessment.

    ERIC Educational Resources Information Center

    Chapman, Dane M.; And Others

    Three critical procedural skills in emergency medicine were evaluated using three assessment modalities--written, computer, and animal model. The effects of computer practice and previous procedure experience on skill competence were also examined in an experimental sequential assessment design. Subjects were six medical students, six residents,…

  19. Computer-Based Learning of Neuroanatomy: A Longitudinal Study of Learning, Transfer, and Retention

    ERIC Educational Resources Information Center

    Chariker, Julia H.; Naaz, Farah; Pani, John R.

    2011-01-01

    A longitudinal experiment was conducted to evaluate the effectiveness of new methods for learning neuroanatomy with computer-based instruction. Using a three-dimensional graphical model of the human brain and sections derived from the model, tools for exploring neuroanatomy were developed to encourage "adaptive exploration". This is an…

  20. Quantification of DNA cleavage specificity in Hi-C experiments.

    PubMed

    Meluzzi, Dario; Arya, Gaurav

    2016-01-08

    Hi-C experiments produce large numbers of DNA sequence read pairs that are typically analyzed to deduce genomewide interactions between arbitrary loci. A key step in these experiments is the cleavage of cross-linked chromatin with a restriction endonuclease. Although this cleavage should happen specifically at the enzyme's recognition sequence, an unknown proportion of cleavage events may involve other sequences, owing to the enzyme's star activity or to random DNA breakage. A quantitative estimation of these non-specific cleavages may enable simulating realistic Hi-C read pairs for validation of downstream analyses, monitoring the reproducibility of experimental conditions and investigating biophysical properties that correlate with DNA cleavage patterns. Here we describe a computational method for analyzing Hi-C read pairs to estimate the fractions of cleavages at different possible targets. The method relies on expressing an observed local target distribution downstream of aligned reads as a linear combination of known conditional local target distributions. We validated this method using Hi-C read pairs obtained by computer simulation. Application of the method to experimental Hi-C datasets from murine cells revealed interesting similarities and differences in patterns of cleavage across the various experiments considered. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
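
    The estimation step described above, expressing an observed local target distribution as a linear combination of known conditional distributions, can be illustrated with non-negative least squares; the distributions below are toy numbers, and NNLS is one plausible solver rather than necessarily the one used in the paper.

      import numpy as np
      from scipy.optimize import nnls

      # Columns: conditional local target distributions for each cleavage class
      # (e.g. canonical recognition site, star-activity site, random breakage); toy values.
      conditional = np.array([
          [0.70, 0.20, 0.10],
          [0.20, 0.50, 0.30],
          [0.05, 0.20, 0.30],
          [0.05, 0.10, 0.30],
      ])

      # Observed local target distribution downstream of aligned reads (toy values).
      observed = np.array([0.52, 0.29, 0.10, 0.09])

      fractions, resid = nnls(conditional, observed)
      fractions /= fractions.sum()                     # normalize to obtain cleavage fractions
      print("estimated cleavage fractions:", np.round(fractions, 3))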

  1. Viscosity Measurement using Drop Coalescence in Microgravity

    NASA Technical Reports Server (NTRS)

    Antar, Basil N.; Ethridge, Edwin; Maxwell, Daniel

    1999-01-01

    We present here details of a new method, based on drop coalescence, for determining the viscosity of highly viscous undercooled liquids in a microgravity environment. The method has the advantage of eliminating heterogeneous nucleation at container walls caused by crystallization of undercooled liquids during processing. Also, due to the rapidity of the measurement, homogeneous nucleation would be avoided. The technique relies both on a highly accurate solution to the Navier-Stokes equations and on data gathered from experiments conducted in a near zero gravity environment. The liquid viscosity is determined by allowing the computed free surface shape relaxation time to be adjusted in response to the measured free surface velocity of two coalescing drops. Results are presented from two validation experiments of the method which were conducted recently on board the NASA KC-135 aircraft. In these tests the viscosity of a highly viscous liquid, such as glycerine at different temperatures, was determined to reasonable accuracy using the liquid coalescence method. The experiments measured the free surface velocity of two glycerine drops coalescing under the action of surface tension alone in a low gravity environment using high speed photography. The free surface velocity was then compared with the computed values obtained from different viscosity values. The results of these experiments were found to agree reasonably well with the calculated values.

  2. A variational multiscale method for particle-cloud tracking in turbomachinery flows

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Sheard, A. G.; Takizawa, K.; Tezduyar, T. E.; Venturini, P.

    2014-11-01

    We present a computational method for simulation of particle-laden flows in turbomachinery. The method is based on a stabilized finite element fluid mechanics formulation and a finite element particle-cloud tracking method. We focus on induced-draft fans used in process industries to extract exhaust gases in the form of a two-phase fluid with a dispersed solid phase. The particle-laden flow causes material wear on the fan blades, degrading their aerodynamic performance, and therefore accurate simulation of the flow would be essential in reliable computational turbomachinery analysis and design. The turbulent-flow nature of the problem is dealt with a Reynolds-Averaged Navier-Stokes model and Streamline-Upwind/Petrov-Galerkin/Pressure-Stabilizing/Petrov-Galerkin stabilization, the particle-cloud trajectories are calculated based on the flow field and closure models for the turbulence-particle interaction, and one-way dependence is assumed between the flow field and particle dynamics. We propose a closure model utilizing the scale separation feature of the variational multiscale method, and compare that to the closure utilizing the eddy viscosity model. We present computations for axial- and centrifugal-fan configurations, and compare the computed data to those obtained from experiments, analytical approaches, and other computational methods.

  3. Unsteady Aerodynamic Validation Experiences From the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chawlowski, Pawel

    2014-01-01

    The AIAA Aeroelastic Prediction Workshop (AePW) was held in April 2012, bringing together communities of aeroelasticians, computational fluid dynamicists and experimentalists. The extended objective was to assess the state of the art in computational aeroelastic methods as practical tools for the prediction of static and dynamic aeroelastic phenomena. As a step in this process, workshop participants analyzed unsteady aerodynamic and weakly-coupled aeroelastic cases. Forced oscillation and unforced system experiments and computations have been compared for three configurations. This paper emphasizes interpretation of the experimental data, computational results and their comparisons from the perspective of validation of unsteady system predictions. The issues examined in detail are variability introduced by input choices for the computations, post-processing, and static aeroelastic modeling. The final issue addressed is interpreting unsteady information that is present in experimental data that is assumed to be steady, and the resulting consequences on the comparison data sets.

  4. Studying the Elusive Experience in Pervasive Games

    ERIC Educational Resources Information Center

    Stenros, Jaakko; Waern, Annika; Montola, Markus

    2012-01-01

    Studying pervasive games is inherently difficult and different from studying computer or board games. This article builds upon the experiences of staging and studying several playful pervasive technology prototypes. It discusses the challenges and pitfalls of evaluating pervasive game prototypes and charts methods that have proven useful in…

  5. Viscosity Measurement Using Drop Coalescence in Microgravity

    NASA Technical Reports Server (NTRS)

    Antar, Basil N.; Ethridge, Edwin C.; Maxwell, Daniel; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    We present here validation studies of a new method, applicable in a microgravity environment, which measures the viscosity of highly viscous undercooled liquids using drop coalescence. The method has the advantage of avoiding heterogeneous nucleation at container walls caused by crystallization of undercooled liquids during processing. Homogeneous nucleation can also be avoided due to the rapidity of the measurement using this method. The technique relies on measurements from experiments conducted in a near zero gravity environment as well as a highly accurate analytical formulation of the coalescence process. The viscosity of the liquid is determined by allowing the computed free surface shape relaxation time to be adjusted in response to the measured free surface velocity for two coalescing drops. Results are presented from two sets of validation experiments for the method which were conducted on board aircraft flying parabolic trajectories. In these tests the viscosity of a highly viscous liquid, namely glycerin, was determined at different temperatures using the drop coalescence method described here. The experiments measured the free surface velocity of two glycerin drops coalescing under the action of surface tension alone in a low gravity environment using high speed photography. The liquid viscosity was determined by adjusting the computed free surface velocity values to the measured experimental data. The results of these experiments were found to agree reasonably well with the known viscosity of the test liquid used.

  6. Command and data handling of science signals on Spacelab

    NASA Technical Reports Server (NTRS)

    Mccain, H. G.

    1975-01-01

    The Orbiter Avionics and the Spacelab Command and Data Management System (CDMS) combine to provide a relatively complete command, control, and data handling service to the instrument complement during a Shuttle Sortie Mission. The Spacelab CDMS services the instruments and the Orbiter in turn services the Spacelab. The CDMS computer system includes three computers, two I/O units, a mass memory, and a variable number of remote acquisition units. Attention is given to the CDMS high rate multiplexer, CDMS tape recorders, closed circuit television for the visual monitoring of payload bay and cabin area activities, methods of science data acquisition, questions of transmission and recording, CDMS experiment computer usage, and experiment electronics.

  7. Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation

    DOE PAGES

    Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.; ...

    2016-11-24

    Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.

  8. Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.

    Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.

  9. A GPU-based mipmapping method for water surface visualization

    NASA Astrophysics Data System (ADS)

    Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan

    2018-03-01

    Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, which adjusts the resolution with respect to the distance from the viewpoint and reduces the computing cost. The lighting effect is computed based on shadow mapping, Snell's law and the Fresnel term. The render pipeline utilizes a CPU-GPU shared memory structure, which improves rendering efficiency. Experiment results show that our approach visualizes water surfaces with good image quality at real-time frame rates.

  10. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total computational cost increases severely with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that of the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  11. Computer literacy for life sciences: helping the digital-era biology undergraduates face today's research.

    PubMed

    Smolinski, Tomasz G

    2010-01-01

    Computer literacy plays a critical role in today's life sciences research. Without the ability to use computers to efficiently manipulate and analyze large amounts of data resulting from biological experiments and simulations, many of the pressing questions in the life sciences could not be answered. Today's undergraduates, despite the ubiquity of computers in their lives, seem to be largely unfamiliar with how computers are being used to pursue and answer such questions. This article describes an innovative undergraduate-level course, titled Computer Literacy for Life Sciences, that aims to teach students the basics of a computerized scientific research pursuit. The purpose of the course is for students to develop a hands-on working experience in using standard computer software tools as well as computer techniques and methodologies used in life sciences research. This paper provides a detailed description of the didactical tools and assessment methods used in and outside of the classroom as well as a discussion of the lessons learned during the first installment of the course taught at Emory University in fall semester 2009.

  12. Experiments in Computing: A Survey

    PubMed Central

    Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general. PMID:24688404

  13. Experiments in computing: a survey.

    PubMed

    Tedre, Matti; Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general.

  14. OPTHYLIC: An Optimised Tool for Hybrid Limits Computation

    NASA Astrophysics Data System (ADS)

    Busato, Emmanuel; Calvet, David; Theveneaux-Pelzer, Timothée

    2018-05-01

    A software tool, computing observed and expected upper limits on Poissonian process rates using a hybrid frequentist-Bayesian CLs method, is presented. This tool can be used for simple counting experiments where only signal, background and observed yields are provided or for multi-bin experiments where binned distributions of discriminating variables are provided. It allows the combination of several channels and takes into account statistical and systematic uncertainties, as well as correlations of systematic uncertainties between channels. It has been validated against other software tools and analytical calculations, for several realistic cases.
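
    OPTHYLIC itself is a standalone tool; the sketch below only illustrates the CLs construction for the simplest case it supports, a single counting channel with known signal and background yields and no systematic uncertainties, using the observed event count as the test statistic. The yields are illustrative.

      import numpy as np
      from scipy.stats import poisson

      def cls_counting(n_obs, s, b):
          # CLs = CL_{s+b} / CL_b for a single counting channel,
          # with the observed event count as the test statistic.
          cl_sb = poisson.cdf(n_obs, s + b)    # p-value under signal + background
          cl_b = poisson.cdf(n_obs, b)         # p-value under background only
          return cl_sb / cl_b

      def upper_limit(n_obs, b, cl=0.95):
          # Scan the signal yield until CLs drops below 1 - cl (0.05 for a 95% limit).
          s = 0.0
          while cls_counting(n_obs, s, b) > 1.0 - cl:
              s += 0.01
          return s

      print("95% CL upper limit on s for n_obs=5, b=3.2:", round(upper_limit(5, 3.2), 2))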

  15. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in the simulations of three-dimensional metal matrix composite microvolume deformation, tens to hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the operator spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.

  16. Digital Libraries--Methods and Applications

    ERIC Educational Resources Information Center

    Huang, Kuo Hung, Ed.

    2011-01-01

    Digital library is commonly seen as a type of information retrieval system which stores and accesses digital content remotely via computer networks. However, the vision of digital libraries is not limited to technology or management, but user experience. This book is an attempt to share the practical experiences of solutions to the operation of…

  17. Fast algorithms for computing phylogenetic divergence time.

    PubMed

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.

  18. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm obviously improves the efficiency of the traditional method, especially for large-scale data.
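
    One vectorized way to compute dominance classes (the sets of objects that are at least as good as a given object on every criterion) is NumPy broadcasting, sketched below; this illustrates the computation being accelerated, not the specific algorithm proposed in the paper. The data are toy values.

      import numpy as np

      def dominating_sets(X):
          # X: (n_objects, n_criteria) matrix of preference-ordered attribute values.
          # dominates[i, j] is True when object i is at least as good as object j on every criterion.
          dominates = np.all(X[:, None, :] >= X[None, :, :], axis=2)
          # D+(j): indices of objects dominating object j (column j of the relation).
          return [np.nonzero(dominates[:, j])[0] for j in range(X.shape[0])]

      X = np.array([[3, 2, 5],
                    [4, 2, 5],
                    [1, 1, 2],
                    [4, 3, 5]])
      for j, dom in enumerate(dominating_sets(X)):
          print(f"D+({j}) = {dom.tolist()}")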

  19. An Overview of Computational Aeroacoustic Modeling at NASA Langley

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2001-01-01

    The use of computational techniques in the area of acoustics is known as computational aeroacoustics and has shown great promise in recent years. Although an ultimate goal is to use computational simulations as a virtual wind tunnel, the problem is so complex that blind applications of traditional algorithms are typically unable to produce acceptable results. The phenomena of interest are inherently unsteady and cover a wide range of frequencies and amplitudes. Nonetheless, with appropriate simplifications and special care to resolve specific phenomena, currently available methods can be used to solve important acoustic problems. These simulations can be used to complement experiments, and often give much more detailed information than can be obtained in a wind tunnel. The use of acoustic analogy methods to inexpensively determine far-field acoustics from near-field unsteadiness has greatly reduced the computational requirements. A few examples of current applications of computational aeroacoustics at NASA Langley are given. There remains a large class of problems that require more accurate and efficient methods. Research to develop more advanced methods that are able to handle the geometric complexity of realistic problems using block-structured and unstructured grids is highlighted.

  20. Segmentation and detection of breast cancer in mammograms combining wavelet analysis and genetic algorithm.

    PubMed

    Pereira, Danilo Cesar; Ramos, Rodrigo Pereira; do Nascimento, Marcelo Zanchetta

    2014-04-01

    In Brazil, the National Cancer Institute (INCA) reports more than 50,000 new cases of the disease, with a risk of 51 cases per 100,000 women. Radiographic images obtained from mammography equipment are one of the most frequently used techniques for helping in early diagnosis. Due to factors related to cost and professional experience, in the last two decades computer systems to support detection (Computer-Aided Detection - CADe) and diagnosis (Computer-Aided Diagnosis - CADx) have been developed in order to assist experts in detection of abnormalities in their initial stages. Despite the large number of researches on CADe and CADx systems, there is still a need for improved computerized methods. Nowadays, there is a growing concern with the sensitivity and reliability of abnormalities diagnosis in both views of breast mammographic images, namely cranio-caudal (CC) and medio-lateral oblique (MLO). This paper presents a set of computational tools to aid segmentation and detection of mammograms that contained mass or masses in CC and MLO views. An artifact removal algorithm is first implemented followed by an image denoising and gray-level enhancement method based on wavelet transform and Wiener filter. Finally, a method for detection and segmentation of masses using multiple thresholding, wavelet transform and genetic algorithm is employed in mammograms which were randomly selected from the Digital Database for Screening Mammography (DDSM). The developed computer method was quantitatively evaluated using the area overlap metric (AOM). The mean ± standard deviation value of AOM for the proposed method was 79.2 ± 8%. The experiments demonstrate that the proposed method has a strong potential to be used as the basis for mammogram mass segmentation in CC and MLO views. Another important aspect is that the method overcomes the limitation of analyzing only CC and MLO views. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou Yu, E-mail: yzou@Princeton.ED; Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED; Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED

    2010-07-20

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.

  2. An evaluation of four single element airfoil analytic methods

    NASA Technical Reports Server (NTRS)

    Freuler, R. J.; Gregorek, G. M.

    1979-01-01

    A comparison of four computer codes for the analysis of two-dimensional single element airfoil sections is presented for three classes of section geometries. Two of the computer codes utilize vortex singularities methods to obtain the potential flow solution. The other two codes solve the full inviscid potential flow equation using finite differencing techniques, allowing results to be obtained for transonic flow about an airfoil including weak shocks. Each program incorporates boundary layer routines for computing the boundary layer displacement thickness and boundary layer effects on aerodynamic coefficients. Computational results are given for a symmetrical section represented by an NACA 0012 profile, a conventional section illustrated by an NACA 65A413 profile, and a supercritical type section for general aviation applications typified by a NASA LS(1)-0413 section. The four codes are compared and contrasted in the areas of method of approach, range of applicability, agreement among each other and with experiment, individual advantages and disadvantages, computer run times and memory requirements, and operational idiosyncrasies.

  3. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable. PMID:24883353

  4. A novel resource management method of providing operating system as a service for mobile transparent computing.

    PubMed

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  5. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  6. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

    The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to formulate the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated by the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expenses has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  7. Computing the stability of steady-state solutions of mathematical models of the electrical activity in the heart.

    PubMed

    Tveito, Aslak; Skavhaug, Ola; Lines, Glenn T; Artebrant, Robert

    2011-08-01

    Instabilities in the electro-chemical resting state of the heart can generate ectopic waves that in turn can initiate arrhythmias. We derive methods for computing the resting state for mathematical models of the electro-chemical process underpinning a heartbeat, and we estimate the stability of the resting state by invoking the largest real part of the eigenvalues of a linearized model. The implementation of the methods is described and a number of numerical experiments illustrate the feasibility of the methods. In particular, we test the methods for problems where we can compare the solutions with analytical results, and problems where we have solutions computed by independent software. The software is also tested for a fairly realistic 3D model. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Reliability history of the Apollo guidance computer

    NASA Technical Reports Server (NTRS)

    Hall, E. C.

    1972-01-01

    The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.

  9. Multichannel loudness compensation method based on segmented sound pressure level for digital hearing aids

    NASA Astrophysics Data System (ADS)

    Liang, Ruiyu; Xi, Ji; Bao, Yongqiang

    2017-07-01

    To improve the performance of gain compensation based on a three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL is proposed. Firstly, a uniform cosine modulated filter bank was designed. Then, adjacent channels with low or gradual slopes were adaptively merged to obtain the corresponding non-uniform cosine modulated filter according to the audiogram of hearing impaired persons. Secondly, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed. Meanwhile, the audible SPL range from 0 dB SPL to 120 dB SPL was equally divided into eight segments. Based on these segments, a different prescription formula was designed for each segment to compute a more detailed compensation gain according to the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed the proposed algorithm can effectively improve the speech recognition of six hearing impaired persons.
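
    A simplified sketch of the per-band gain step is given below: the SPL of a sub-band signal is computed, mapped onto one of eight equal 15-dB segments of the 0-120 dB range, and a gain is looked up from a per-segment prescription table. The gain table and calibration assumptions are hypothetical, not the paper's prescription formula.

      import numpy as np

      P_REF = 20e-6                                    # reference pressure, 20 micropascals

      def band_spl(x):
          # Sound pressure level of one sub-band signal (x assumed calibrated to pascals).
          rms = np.sqrt(np.mean(x ** 2)) + 1e-20
          return 20.0 * np.log10(rms / P_REF)

      def segment_gain(spl, gains_per_segment):
          # Map the SPL onto one of eight equal 15-dB segments of the 0..120 dB range
          # and return the prescribed gain (dB) for that segment.
          seg = int(np.clip(spl // 15, 0, 7))
          return gains_per_segment[seg]

      # Hypothetical per-segment gain prescription (dB) for one channel of one audiogram.
      gains = [35, 30, 25, 20, 15, 10, 5, 0]

      subband = 0.02 * np.sin(2 * np.pi * 1000 * np.arange(0, 0.02, 1 / 16000))
      spl = band_spl(subband)
      g_db = segment_gain(spl, gains)
      compensated = subband * 10 ** (g_db / 20.0)      # apply the linear gain to the sub-band signal
      print(f"band SPL = {spl:.1f} dB, applied gain = {g_db} dB")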

  10. An experiment on the use of disposable plastics as a reinforcement in concrete beams

    NASA Technical Reports Server (NTRS)

    Chowdhury, Mostafiz R.

    1992-01-01

    Illustrated here is the concept of reinforced concrete structures through the use of computer simulation and an inexpensive hands-on design experiment. The students in our construction management program use disposable plastics as reinforcement to demonstrate their understanding of reinforced concrete and prestressed concrete beams. The plastics used for such an experiment vary from plastic bottles to steel-reinforced auto tires. This experiment shows the extent to which plastic reinforcement increases the strength of a concrete beam. The procedure of using such throw-away plastics in an experiment to explain the interaction between the reinforcement material and the concrete, and a comparison of the test results for different types of waste plastics, are discussed. A computer analysis to simulate the structural response is used to compare with the test results and to understand the analytical background of reinforced concrete design. This interaction of using computers to analyze structures and relating the output results to real experimentation is found to be a very useful method for teaching a math-based analytical subject to our non-engineering students.

  11. How Archimedes Helped Students to Unravel the Mystery of the Magical Number Pi

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis

    2014-01-01

    This paper describes a classroom experiment where students use techniques found in the history of mathematics to learn about an important mathematical idea. More precisely, sixth graders in a primary school follow Archimedes's method of exhaustion in order to compute the number π. Working in a computer environment, students inscribe and…
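
    The polygon-doubling recurrence behind Archimedes' method of exhaustion is short enough to state directly: starting from the semi-perimeters of the circumscribed and inscribed hexagons of a unit circle, each step doubles the number of sides and tightens the bracket around π. The sketch below is a straightforward rendering of that recurrence, not the classroom activity itself.

      import math

      # Semi-perimeters of the circumscribed (a) and inscribed (b) regular hexagons of a unit circle.
      a, b = 2 * math.sqrt(3), 3.0
      sides = 6
      for _ in range(10):                    # each step doubles the number of polygon sides
          a = 2 * a * b / (a + b)            # circumscribed semi-perimeter for 2n sides
          b = math.sqrt(a * b)               # inscribed semi-perimeter for 2n sides
          sides *= 2
          print(f"{sides:5d} sides:  {b:.10f} < pi < {a:.10f}")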

  12. Solution of the Schrodinger Equation for One-Dimensional Anharmonic Potentials: An Undergraduate Computational Experiment

    ERIC Educational Resources Information Center

    Beddard, Godfrey S.

    2011-01-01

    A method of solving the Schrodinger equation using a basis set expansion is described and used to calculate energy levels and wavefunctions of the hindered rotation of ethane and the ring puckering of cyclopentene. The calculations were performed using a computer algebra package and the calculations are straightforward enough for undergraduates to…
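
    A compact sketch of the basis-set-expansion idea follows for a generic one-dimensional quartic anharmonic oscillator (not the ethane or cyclopentene potentials of the article): the Hamiltonian H = p²/2 + x²/2 + λx⁴ is built in a truncated harmonic oscillator basis with ħ = m = ω = 1 and diagonalized numerically.

      import numpy as np

      def anharmonic_levels(lam=0.1, nbasis=60, nlevels=6):
          # Position operator in the harmonic oscillator basis: x = (a + a^dagger)/sqrt(2),
          # so <n|x|n+1> = sqrt((n+1)/2); build it, then H = H0 + lam * x^4.
          n = np.arange(nbasis - 1)
          X = np.zeros((nbasis, nbasis))
          off = np.sqrt((n + 1) / 2.0)
          X[n, n + 1] = off
          X[n + 1, n] = off
          H0 = np.diag(np.arange(nbasis) + 0.5)        # harmonic part: E_n = n + 1/2
          H = H0 + lam * np.linalg.matrix_power(X, 4)
          return np.linalg.eigvalsh(H)[:nlevels]

      # Lowest levels of V(x) = x^2/2 + 0.1 x^4; converged digits grow with the basis size.
      print(np.round(anharmonic_levels(), 6))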

  13. Effectiveness of Using a Video Game to Teach a Course in Mechanical Engineering

    ERIC Educational Resources Information Center

    Coller, B. D.; Scott, M. J.

    2009-01-01

    One of the core courses in the undergraduate mechanical engineering curriculum has been completely redesigned. In the new numerical methods course, all assignments and learning experiences are built around a video/computer game. Students are given the task of writing computer programs to race a simulated car around a track. In doing so, students…

  14. Preservice Teacher Sense-Making as They Learn to Teach Reading as Seen through Computer-Mediated Discourse

    ERIC Educational Resources Information Center

    Stefanski, Angela J.; Leitze, Amy; Fife-Demski, Veronica M.

    2018-01-01

    This collective case study used methods of discourse analysis to consider what computer-mediated collaboration might reveal about preservice teachers' sense-making in a field-based practicum as they learn to teach reading to children identified as struggling readers. Researchers agree that field-based experiences coupled with time for reflection…

  15. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with measurements from an experimental setup built in accordance with the AMCA fan performance testing standard.

  16. Investigation of methods to search for the boundaries on the image and their use on lung hardware of methods finding saliency map

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.

    2015-05-01

    This work aimed to study a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the use of complex techniques in portable devices. A saliency map makes it possible to increase the speed of many subsequent algorithms and to reduce their computational complexity. The proposed method of saliency map detection is based on analysis in both the image and frequency domains. Several examples on test images of different levels of detail from the Kodak dataset are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
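
    A minimal frequency-space saliency sketch in the spirit of the spectral residual approach of Hou and Zhang is shown below; it is offered as an illustration of combined image- and frequency-domain analysis and is not necessarily the exact method proposed in the paper.

      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter

      def spectral_residual_saliency(gray):
          # gray: 2-D float array (grayscale image), typically downscaled to roughly 64x64 first.
          spectrum = np.fft.fft2(gray)
          log_amp = np.log(np.abs(spectrum) + 1e-8)
          phase = np.angle(spectrum)
          # Spectral residual: log-amplitude minus its local average.
          residual = log_amp - uniform_filter(log_amp, size=3)
          sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          sal = gaussian_filter(sal, sigma=2.5)
          return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

      # Example on a synthetic image with a single bright blob.
      img = np.zeros((64, 64)); img[40:48, 20:28] = 1.0
      sal_map = spectral_residual_saliency(img)
      print("most salient pixel:", np.unravel_index(np.argmax(sal_map), sal_map.shape))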

  17. Doubly stochastic radial basis function methods

    NASA Astrophysics Data System (ADS)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant-shape-parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
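
    To make the flavour of the approach concrete, the following minimal Python sketch (not the authors' implementation) builds a 1-D Gaussian RBF interpolant with per-centre stochastic shape parameters and scores each random draw by a brute-force leave-one-out cross validation error; the lognormal sampling distribution and all parameter values are illustrative assumptions only.

        import numpy as np

        def rbf_matrix(x, centres, eps):
            # Gaussian kernel; eps[j] is the shape parameter attached to centre j
            return np.exp(-(eps[None, :] * (x[:, None] - centres[None, :])) ** 2)

        def loocv_error(x, y, eps):
            # Leave one node out, interpolate with the rest, record the prediction error
            errs = []
            for k in range(len(x)):
                mask = np.arange(len(x)) != k
                A = rbf_matrix(x[mask], x[mask], eps[mask])
                coef = np.linalg.lstsq(A, y[mask], rcond=None)[0]
                pred = rbf_matrix(x[k:k + 1], x[mask], eps[mask]) @ coef
                errs.append((pred[0] - y[k]) ** 2)
            return np.mean(errs)

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 25)
        y = np.sin(2 * np.pi * x)

        # Draw several stochastic shape-parameter vectors and keep the one with the
        # smallest LOOCV error (a stand-in for the paper's stochastic LOOCV estimation).
        best_eps, best_err = None, np.inf
        for _ in range(50):
            eps = rng.lognormal(mean=1.0, sigma=0.5, size=x.size)
            err = loocv_error(x, y, eps)
            if err < best_err:
                best_eps, best_err = eps, err
        print("best LOOCV error:", best_err)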

  18. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  19. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    PubMed Central

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-01-01

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision by using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity. PMID:28230763
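
    For orientation, the conventional passive estimate referred to above is based on the time-averaged acoustic intensity components; a standard textbook form (not transcribed from this paper) is

        \hat{\theta} = \arctan\!\left(\overline{I_y}/\overline{I_x}\right), \qquad
        \overline{I_x} = \frac{1}{T}\int_0^T p(t)\, v_x(t)\, dt, \qquad
        \overline{I_y} = \frac{1}{T}\int_0^T p(t)\, v_y(t)\, dt,

    where p is the acoustic pressure and v_x, v_y are the particle-velocity components measured by the vector sensor; the active method described in the abstract refines this kind of estimate with matched filtering.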

  20. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

    In this paper, an efficient global optimization algorithm from the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by American social psychologist J. Kennedy and electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining the approximate values of exterior orientation elements using PSO is as follows: in terms of the image coordinate observations and the space coordinates of a few control points, the equations for the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equations, and the sum of the absolute values of these residuals is minimized as the objective function. First a gross region for the exterior orientation elements is given, and the other parameters are then adjusted so that the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided. This method can therefore greatly improve surveying efficiency and at the same time decrease surveying cost. During the process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of the control points. To verify the effectiveness of this algorithm, two experiments were carried out. In the first experiment, images of a standard grid board were taken with a digital camera according to multi-intersection photography. Three points or six points located on the lower-left corner of the standard grid were regarded as control points, respectively, the exterior orientation elements of each image were computed through PSO, and they were compared with the elements computed through bundle adjustment. In the second experiment, the exterior orientation elements obtained from the first experiment were used as approximate values in bundle adjustment, and the space coordinates of the other grid points on the board were then computed. The differences between these computed space coordinates and the known coordinates of the grid points were used to compute the accuracy. The point accuracies obtained in the above experiments are ±0.76 mm and ±0.43 mm, respectively. These experiments prove the effectiveness of PSO for computing approximate values of exterior orientation elements in close range photogrammetry, and the algorithm can meet higher accuracy requirements. In short, PSO can produce good results in a faster, cheaper way than other surveying methods in close range photogrammetry.
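
    The PSO update itself is compact; the following Python sketch (illustrative only, with a hypothetical toy objective standing in for the sum of absolute image-coordinate residuals) shows the canonical velocity and position updates:

        import numpy as np

        def pso_minimize(objective, lower, upper, n_particles=30, n_iter=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            # Canonical particle swarm optimizer over a box [lower, upper].
            rng = np.random.default_rng(seed)
            dim = lower.size
            x = rng.uniform(lower, upper, size=(n_particles, dim))   # positions
            v = np.zeros_like(x)                                      # velocities
            pbest = x.copy()
            pbest_val = np.array([objective(p) for p in x])
            g = pbest[np.argmin(pbest_val)].copy()                    # global best
            for _ in range(n_iter):
                r1 = rng.random((n_particles, dim))
                r2 = rng.random((n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lower, upper)
                vals = np.array([objective(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        # Hypothetical stand-in objective: sum of absolute residuals of a toy model.
        target = np.array([1.0, -2.0, 0.5])
        objective = lambda p: np.abs(p - target).sum()
        best, best_val = pso_minimize(objective, lower=np.full(3, -10.0), upper=np.full(3, 10.0))
        print(best, best_val)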

  1. Review of Research into the Concept of the Microblowing Technique for Turbulent Skin Friction Reduction

    NASA Technical Reports Server (NTRS)

    2004-01-01

    A new technology for reducing turbulent skin friction, called the Microblowing Technique (MBT), is presented. Results from proof-of-concept experiments show that this technology could potentially reduce turbulent skin friction by more than 50% of the skin friction of a solid flat plate for subsonic and supersonic flow conditions. The primary purpose of this review paper is to provide readers with information on the turbulent skin friction reduction obtained from many experiments using the MBT. Although the MBT has a penalty for obtaining the microblowing air associated with it, some combinations of the MBT with suction boundary layer control methods are an attractive alternative for a real application. Several computational simulations to understand the flow physics of the MBT are also included. More experiments and computational fluid dynamics (CFD) computations are needed for the understanding of the unsteady flow nature of the MBT and the optimization of this new technology.

  2. The silicon synapse or, neural net computing.

    PubMed

    Frenger, P

    1989-01-01

    Recent developments have rekindled interest in the electronic neural network, a form of parallel computer architecture loosely based on the nervous system of living creatures. This paper describes the elements of neural net computers, reviews the historical milestones in their development, and lists the advantages and disadvantages of their use. Methods for software simulation of neural network systems on existing computers, as well as creation of hardware analogues, are given. The most successful applications of these techniques, involving emulation of biological system responses, are presented. The author's experiences with neural net systems are discussed.

  3. Art for the Ages.

    ERIC Educational Resources Information Center

    Casazza, Ornella; Franchi, Paolo

    1985-01-01

    Description of encoding of art works and digitization of paintings to preserve and restore them reviews experiments which used chromatic selection and abstraction as a painting restoration method. This method utilizes the numeric processing resulting from digitization to restore a painting and computer simulation to shorten the restoration…

  4. Known plaintext attack on double random phase encoding using fingerprint as key and a method for avoiding the attack.

    PubMed

    Tashima, Hideaki; Takeda, Masafumi; Suzuki, Hiroyuki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2010-06-21

    We have shown that the application of double random phase encoding (DRPE) to biometrics enables the use of biometrics as cipher keys for binary data encryption. However, DRPE is reported to be vulnerable to known-plaintext attacks (KPAs) using a phase recovery algorithm. In this study, we investigated the vulnerability of DRPE using fingerprints as cipher keys to the KPAs. By means of computational experiments, we estimated the encryption key and restored the fingerprint image using the estimated key. Further, we propose a method for avoiding the KPA on the DRPE that employs the phase retrieval algorithm. The proposed method makes the amplitude component of the encrypted image constant in order to prevent the amplitude component of the encrypted image from being used as a clue for phase retrieval. Computational experiments showed that the proposed method not only avoids revealing the cipher key and the fingerprint but also serves as a sufficiently accurate verification system.

  5. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of FXCT imaging is that of first-generation computed tomography. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to projections obtained from Monte Carlo simulation and from experiment to confirm its effectiveness.

  6. Computer Simulation and Field Experiment for Downlink Multiuser MIMO in Mobile WiMAX System.

    PubMed

    Yamaguchi, Kazuhiro; Nagahashi, Takaharu; Akiyama, Takuya; Matsue, Hideaki; Uekado, Kunio; Namera, Takakazu; Fukui, Hiroshi; Nanamatsu, Satoshi

    2015-01-01

    The transmission performance for a downlink mobile WiMAX system with multiuser multiple-input multiple-output (MU-MIMO) systems in a computer simulation and field experiment is described. In computer simulation, a MU-MIMO transmission system can be realized by using the block diagonalization (BD) algorithm, and each user can receive signals without any signal interference from other users. The bit error rate (BER) performance and channel capacity in accordance with modulation schemes and the number of streams were simulated in a spatially correlated multipath fading environment. Furthermore, we propose a method for evaluating the transmission performance for this downlink mobile WiMAX system in this environment by using the computer simulation. In the field experiment, the received power and downlink throughput in the UDP layer were measured on an experimental mobile WiMAX system developed in Azumino City in Japan. Comparing the simulated and experimental results, the measured maximum downlink throughput was almost the same as the simulated throughput. It was confirmed that the experimental mobile WiMAX system for MU-MIMO transmission successfully increased the total channel capacity of the system.

  7. Computer Simulation and Field Experiment for Downlink Multiuser MIMO in Mobile WiMAX System

    PubMed Central

    Yamaguchi, Kazuhiro; Nagahashi, Takaharu; Akiyama, Takuya; Matsue, Hideaki; Uekado, Kunio; Namera, Takakazu; Fukui, Hiroshi; Nanamatsu, Satoshi

    2015-01-01

    The transmission performance for a downlink mobile WiMAX system with multiuser multiple-input multiple-output (MU-MIMO) systems in a computer simulation and field experiment is described. In computer simulation, a MU-MIMO transmission system can be realized by using the block diagonalization (BD) algorithm, and each user can receive signals without any signal interference from other users. The bit error rate (BER) performance and channel capacity in accordance with modulation schemes and the number of streams were simulated in a spatially correlated multipath fading environment. Furthermore, we propose a method for evaluating the transmission performance for this downlink mobile WiMAX system in this environment by using the computer simulation. In the field experiment, the received power and downlink throughput in the UDP layer were measured on an experimental mobile WiMAX system developed in Azumino City in Japan. Comparing the simulated and experimental results, the measured maximum downlink throughput was almost the same as the simulated throughput. It was confirmed that the experimental mobile WiMAX system for MU-MIMO transmission successfully increased the total channel capacity of the system. PMID:26421311

  8. Robust flow stability: Theory, computations and experiments in near wall turbulence

    NASA Astrophysics Data System (ADS)

    Bobba, Kumar Manoj

    Helmholtz established the field of hydrodynamic stability with his pioneering work in 1868. From then on, hydrodynamic stability became an important tool in understanding various fundamental fluid flow phenomena in engineering (mechanical, aeronautics, chemical, materials, civil, etc.) and science (astrophysics, geophysics, biophysics, etc.), and turbulence in particular. However, there are many discrepancies between classical hydrodynamic stability theory and experiments. In this thesis, the limitations of traditional hydrodynamic stability theory are shown and a framework for robust flow stability theory is formulated. A host of new techniques like gramians, singular values, operator norms, etc. are introduced to understand the role of various kinds of uncertainty. An interesting feature of this framework is the close interplay between theory and computations. It is shown that a subset of the Navier-Stokes equations is globally, nonlinearly stable for all Reynolds numbers. Yet, invoking this new theory, it is shown that these equations produce structures (vortices and streaks) as seen in the experiments. The experiments are done in a zero-pressure-gradient transitional boundary layer on a flat plate in a free-surface tunnel. Digital particle image velocimetry and MEMS-based laser Doppler velocimeters and shear stress sensors have been used to make quantitative measurements of the flow. Various theoretical and computational predictions are in excellent agreement with the experimental data. A closely related topic of modeling, simulation and complexity reduction of large mechanics problems with multiple spatial and temporal scales is also studied. A method that rigorously quantifies the important scales and automatically produces models of the problem at various levels of accuracy is introduced. Computations done using spectral methods are presented.

  9. Can computed crystal energy landscapes help understand pharmaceutical solids?

    PubMed Central

    Price, Sarah L.; Braun, Doris E.; Reutzel-Edens, Susan M.

    2017-01-01

    Computational crystal structure prediction (CSP) methods can now be applied to the smaller pharmaceutical molecules currently in drug development. We review the recent uses of computed crystal energy landscapes for pharmaceuticals, concentrating on examples where they have been used in collaboration with industrial-style experimental solid form screening. There is a strong complementarity between aiding experiment to find and characterise practically important solid forms and understanding the nature of the solid form landscape. PMID:27067116

  10. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars significantly. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. Analysis of the computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to improving the precision of LFMCW radars effectively.
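
    As a point of reference for this family of estimators (not the authors' algorithm), the frequency of a noisy complex sinusoid can be estimated from the phase of its lag-1 autocorrelation; a minimal Python sketch with illustrative parameter values:

        import numpy as np

        def freq_from_lag1_phase(s, fs):
            # Estimate the frequency of a complex sinusoid from the phase of the
            # lag-1 autocorrelation: f ~ angle(R1) * fs / (2*pi).
            r1 = np.sum(s[1:] * np.conj(s[:-1]))
            return np.angle(r1) * fs / (2.0 * np.pi)

        fs = 1000.0                       # sampling rate in Hz (illustrative)
        n = np.arange(2048)
        f_true = 123.4
        rng = np.random.default_rng(1)
        s = np.exp(2j * np.pi * f_true * n / fs) + 0.1 * (rng.standard_normal(n.size)
                                                          + 1j * rng.standard_normal(n.size))
        print(freq_from_lag1_phase(s, fs))   # close to 123.4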

  11. Tapping into Graduate Students' Collaborative Technology Experience in a Research Methods Class: Insights on Teaching Research Methods in a Malaysian and American Setting

    ERIC Educational Resources Information Center

    Vasquez-Colina, Maria D.; Maslin-Ostrowski, Pat; Baba, Suria

    2017-01-01

    This case study used qualitative and quantitative methods to investigate challenges of learning and teaching research methods by examining graduate students' use of collaborative technology (i.e., digital tools that enable collaboration and information seeking such as software and social media) and students' computer self-efficacy. We conducted…

  12. A method of semi-quantifying β-AP in brain PET-CT 11C-PiB images.

    PubMed

    Jiang, Jiehui; Lin, Xiaoman; Wen, Junlin; Huang, Zhemin; Yan, Zhuangzhi

    2014-01-01

    Alzheimer's disease (AD) is a common health problem for elderly populations. Positron emission tomography-computed tomography (PET-CT) 11C-PiB imaging of amyloid-β peptide (β-AP) is an advanced method for diagnosing AD at an early stage. However, in practice radiologists lack a standardized value to semi-quantify β-AP. This paper proposes such a standardized value: SVβ-AP. This standardized value measures the mean ratio between the dimensions of β-AP areas in PET and CT images. A computer aided diagnosis (CAD) approach is also proposed to compute SVβ-AP. A simulation experiment was carried out to pre-test the technical feasibility of the CAD approach and SVβ-AP. The experimental results showed that it is technically feasible.

  13. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
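
    As a small illustration of the serial building block singled out above (not the original Encore Multimax code), the following Python sketch solves a sparse linear system, standing in for one Newton-step Jacobian solve, with restarted GMRES and an incomplete-LU preconditioner using SciPy:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Hypothetical tridiagonal system standing in for a Newton-step Jacobian.
        n = 1000
        main = 4.0 * np.ones(n)
        off = -1.0 * np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
        b = np.ones(n)

        # Incomplete LU factorization wrapped as a preconditioner for GMRES.
        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)

        x, info = spla.gmres(A, b, M=M, restart=30)
        print("converged" if info == 0 else f"gmres info={info}",
              "residual:", np.linalg.norm(b - A @ x))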

  14. A New Runge-Kutta Discontinuous Galerkin Method with Conservation Constraint to Improve CFL Condition for Solving Conservation Laws

    PubMed Central

    Xu, Zhiliang; Chen, Xu-Yan; Liu, Yingjie

    2014-01-01

    We present a new formulation of the Runge-Kutta discontinuous Galerkin (RKDG) method [9, 8, 7, 6] for solving conservation laws with increased CFL numbers. The new formulation requires the computed RKDG solution in a cell to satisfy an additional conservation constraint in adjacent cells and does not increase the complexity or change the compactness of the RKDG method. Numerical computations for solving one-dimensional and two-dimensional scalar and systems of nonlinear hyperbolic conservation laws are performed with approximate solutions represented by piecewise quadratic and cubic polynomials, respectively. The hierarchical reconstruction [17, 33] is applied as a limiter to eliminate spurious oscillations in discontinuous solutions. From both numerical experiments and the analytic estimate of the CFL number of the newly formulated method, we find that: 1) this new formulation improves the CFL number over the original RKDG formulation by a factor of three or more and thus reduces the overall computational cost; and 2) the new formulation essentially does not compromise the resolution of the numerical solutions of shock wave problems compared with those computed by the RKDG method. PMID:25414520

  15. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, the precision of the traditional calibration was increased to 10^-5 rad root mean square, and the precision of the RHT was improved by approximately 100 nm.

  16. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
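
    A bare-bones version of the kriging predictor described here (constant underlying global model plus a Gaussian correlation function) can be written in a few lines of Python; the correlation parameter theta is simply fixed rather than fitted by maximum likelihood, so this is an illustrative sketch, not the authors' implementation:

        import numpy as np

        def kriging_fit(X, y, theta=10.0, nugget=1e-10):
            # Gaussian correlation matrix R_ij = exp(-theta * ||x_i - x_j||^2).
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            R = np.exp(-theta * d2) + nugget * np.eye(len(X))
            Rinv = np.linalg.inv(R)
            ones = np.ones(len(X))
            mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)   # constant global model
            return X, Rinv @ (y - mu), mu, theta

        def kriging_predict(model, x):
            X, w, mu, theta = model
            r = np.exp(-theta * ((X - x) ** 2).sum(-1))     # correlations to sample sites
            return mu + r @ w

        # Illustrative 1-D use on a toy function.
        X = np.linspace(0, 1, 8)[:, None]
        y = np.sin(2 * np.pi * X[:, 0])
        model = kriging_fit(X, y)
        print(kriging_predict(model, np.array([0.33])))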

  17. Applications of a General Finite-Difference Method for Calculating Bending Deformations of Solid Plates

    NASA Technical Reports Server (NTRS)

    Walton, William C., Jr.

    1960-01-01

    This paper reports the findings of an investigation of a finite-difference method directly applicable to calculating static or simple harmonic flexures of solid plates and potentially useful in other problems of structural analysis. The method, which was proposed in a doctoral thesis by John C. Houbolt, is based on linear theory and incorporates the principle of minimum potential energy. Full realization of its advantages requires use of high-speed computing equipment. After a review of Houbolt's method, results of some applications are presented and discussed. The applications consisted of calculations of the natural modes and frequencies of several uniform-thickness cantilever plates and, as a special case of interest, calculations of the modes and frequencies of the uniform free-free beam. Computed frequencies and nodal patterns for the first five or six modes of each plate are compared with existing experiments, and those for one plate are compared with another approximate theory. Beam computations are compared with exact theory. On the basis of the comparisons it is concluded that the method is accurate and general in predicting plate flexures, and additional applications are suggested. An appendix is devoted to computing procedures which evolved during the applications and which facilitate use of the method in conjunction with high-speed computing equipment.

  18. Computer-Assisted Experiments with a Laser Diode

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2011-01-01

    A laser diode from an inexpensive laser pen (laser pointer) is used in simple experiments. The radiant output power and efficiency of the laser are measured, and polarization of the light beam is shown. The "h/e" ratio is available from the threshold of spontaneous emission. The lasing threshold is found using several methods. With a…

  19. Practical Tools for Content Development: Pre-Service Teachers' Experiences and Perceptions

    ERIC Educational Resources Information Center

    Yurtseven Avci, Zeynep; Eren, Esra; Seckin Kapucu, Munise

    2016-01-01

    This study adopts phenomenology approach as the research design method to investigate pre-service teachers' experiences and perceptions on using practical tools for content development. The participants are twenty-four pre-service teachers who were taking Computer II course during 2013-2014 spring semester at a public university in Turkey. During…

  20. Student Experiments and Teacher Tests Using EDAQ530

    ERIC Educational Resources Information Center

    Kopasz, Katalin; Makra, Péter; Gingl, Zoltán

    2013-01-01

    Experiments, as we all know, are especially important in science education. However, their impact on improving thinking could be even greater when applied together with the methods of inquiry-based learning (IBL). In this paper we present our observations of a high-school laboratory class where students used computers to carry out and analyse real…

  1. Validity of Adult Retrospective Reports of Adverse Childhood Experiences: Review of the Evidence

    ERIC Educational Resources Information Center

    Hardt, Jochen; Rutter, Michael

    2004-01-01

    Background: Influential studies have cast doubt on the validity of retrospective reports by adults of their own adverse experiences in childhood. Accordingly, many researchers view retrospective reports with scepticism. Method: A computer-based search, supplemented by hand searches, was used to identify studies reported between 1980 and 2001 in…

  2. Change Detection of Mobile LIDAR Data Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Boehm, Jan; Alis, Christian

    2016-06-01

    Change detection has long been a challenging problem, although a lot of research has been conducted in different fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we combine voxel grids and Apache Spark to propose an efficient method to address the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform which allows fault tolerance and memory caching. These features significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
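
    The voxel-grid idea is easy to prototype on a single machine; the following Python sketch (Spark distribution omitted, voxel size and data purely illustrative) marks voxels that are occupied in one point-cloud epoch but not the other:

        import numpy as np

        def occupied_voxels(points, voxel_size=0.5):
            # Map each 3-D point to the integer index of the voxel containing it.
            return set(map(tuple, np.floor(points / voxel_size).astype(int)))

        def voxel_changes(epoch_a, epoch_b, voxel_size=0.5):
            va = occupied_voxels(epoch_a, voxel_size)
            vb = occupied_voxels(epoch_b, voxel_size)
            return va - vb, vb - va   # voxels removed, voxels added

        rng = np.random.default_rng(0)
        cloud_a = rng.uniform(0, 10, size=(10000, 3))
        cloud_b = np.vstack([cloud_a[:9000], rng.uniform(20, 22, size=(500, 3))])
        removed, added = voxel_changes(cloud_a, cloud_b)
        print(len(removed), "voxels removed,", len(added), "voxels added")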

  3. A Vision-Based Motion Sensor for Undergraduate Laboratories.

    ERIC Educational Resources Information Center

    Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees

    2002-01-01

    Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)

  4. Use of Tablet Computers to Promote Physical Therapy Students' Engagement in Knowledge Translation During Clinical Experiences

    PubMed Central

    Loeb, Kathryn; Barbosa, Sabrina; Jiang, Fei; Lee, Karin T.

    2016-01-01

    Background and Purpose: Physical therapists strive to integrate research into daily practice. The tablet computer is a potentially transformational tool for accessing information within the clinical practice environment. The purpose of this study was to measure and describe patterns of tablet computer use among physical therapy students during clinical rotation experiences. Methods: Doctor of physical therapy students (n = 13 users) tracked their use of tablet computers (iPad), loaded with commercially available apps, during 16 clinical experiences (6-16 weeks in duration). Results: The tablets were used on 70% of 691 clinic days, averaging 1.3 uses per day. Information seeking represented 48% of uses; 33% of those were foreground searches for research articles and syntheses and 66% were for background medical information. Other common uses included patient education (19%), medical record documentation (13%), and professional communication (9%). The most frequently used app was Safari, the preloaded web browser (representing 281 [36.5%] incidents of use). Users accessed 56 total apps to support clinical practice. Discussion and Conclusions: Physical therapy students successfully integrated use of a tablet computer into their clinical experiences including regular activities of information seeking. Our findings suggest that the tablet computer represents a potentially transformational tool for promoting knowledge translation in the clinical practice environment. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A127). PMID:26945431

  5. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602

  6. Experimental comparison between performance of the PM and LPM methods in computed radiography

    NASA Astrophysics Data System (ADS)

    Kermani, Aboutaleb; Feghhi, Seyed Amir Hossein; Rokrok, Behrouz

    2018-07-01

    Scatter degrades image quality and reduces information efficiency in quantitative measurements when projections are created with ionizing radiation. Therefore, a variety of methods have been applied for scatter reduction and for correction of its undesirable effects. As new approaches, the ordinary and localized primary modulation methods have already been used individually, through experiments and simulations, in medical and industrial computed tomography, respectively. The aim of this study is to evaluate the capabilities and limitations of these methods relative to each other. To this end, ordinary primary modulation has been implemented in computed radiography for the first time, and the potential of both methods has been assessed for thickness measurement as well as for scatter-to-primary signal ratio determination. The comparison results, based on experimental outputs obtained using aluminum specimens and continuous X-ray spectra, favor the localized primary modulation method because of its improved accuracy and higher performance, especially at the edges.

  7. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.

  8. Explicit Building Block Multiobjective Evolutionary Computation: Methods and Applications

    DTIC Science & Technology

    2005-06-16

    [Abstract not available: the indexed text consists of fragments from the report's glossary and appendices, mentioning the Pareto Envelope-based Selection Algorithm, the Intelligent Gene Collector (IGC), Orthogonal Experimental Design (OED), and related genetic-algorithm terminology.]

  9. THE PRODUCTION AND EVALUATION OF THREE COMPUTER-BASED ECONOMICS GAMES FOR THE SIXTH GRADE. FINAL REPORT.

    ERIC Educational Resources Information Center

    WING, RICHARD L.; AND OTHERS

    THE PURPOSE OF THE EXPERIMENT WAS TO PRODUCE AND EVALUATE 3 COMPUTER-BASED ECONOMICS GAMES AS A METHOD OF INDIVIDUALIZING INSTRUCTION FOR GRADE 6 STUDENTS. 26 EXPERIMENTAL SUBJECTS PLAYED 2 ECONOMICS GAMES, WHILE A CONTROL GROUP RECEIVED CONVENTIONAL INSTRUCTION ON SIMILAR MATERIAL. IN THE SUMERIAN GAME, STUDENTS SEATED AT THE TYPEWRITER TERMINALS…

  10. A Didactic Experience of Statistical Analysis for the Determination of Glycine in a Nonaqueous Medium Using ANOVA and a Computer Program

    ERIC Educational Resources Information Center

    Santos-Delgado, M. J.; Larrea-Tarruella, L.

    2004-01-01

    Back-titration methods for determining glycine in a nonaqueous medium of acetic acid are compared statistically. Important variations in the mean values of glycine are observed, and the interaction effects are examined using the analysis of variance (ANOVA) technique and a statistical study carried out with computer software.
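
    For readers unfamiliar with the statistical tool involved, a one-way ANOVA comparing the glycine results of several titration variants can be run in a few lines of Python; the data below are hypothetical, not the authors':

        import numpy as np
        from scipy import stats

        # Hypothetical glycine recoveries (%) from three back-titration variants.
        method_a = np.array([99.1, 98.7, 99.4, 99.0])
        method_b = np.array([97.8, 98.2, 97.5, 98.0])
        method_c = np.array([99.0, 98.9, 99.2, 98.8])

        f_stat, p_value = stats.f_oneway(method_a, method_b, method_c)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")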

  11. Walk a Mile in My Shoes: Stakeholder Accounts of Testing Experience with a Computer-Administered Test

    ERIC Educational Resources Information Center

    Fox, Janna; Cheng, Liying

    2015-01-01

    In keeping with the trend to elicit multiple stakeholder responses to operational tests as part of test validation, this exploratory mixed methods study examines test-taker accounts of an Internet-based (i.e., computer-administered) test in the high-stakes context of proficiency testing for university admission. In 2013, as language testing…

  12. Time-resolved absorption and hemoglobin concentration difference maps: a method to retrieve depth-related information on cerebral hemodynamics.

    NASA Astrophysics Data System (ADS)

    Montcel, Bruno; Chabrier, Renée; Poulet, Patrick

    2006-12-01

    Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a reconstruction-free near infrared spectroscopy (NIRS) method which allows depth-related information on absorption variations to be retrieved. Variations in the absorption coefficient of tissues have been computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law. Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibited completely different behaviours, depending on whether these variations occurred deeply or superficially. The hemodynamic response to a short finger tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger tapping movements.
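
    For reference, the microscopic Beer-Lambert relation used in this kind of time-resolved analysis can be written (in a generic form, not transcribed from the paper) as

        \Delta A(t) = -\ln\frac{I(t)}{I_0(t)} \approx \Delta\mu_a \,\frac{c}{n}\, t, \qquad
        \Delta\mu_a(\lambda) = \varepsilon_{\mathrm{HbO_2}}(\lambda)\,\Delta[\mathrm{HbO_2}]
                             + \varepsilon_{\mathrm{Hb}}(\lambda)\,\Delta[\mathrm{Hb}],

    where t is the photon time of flight, c/n the speed of light in tissue, and ε the molar extinction coefficients, so that multi-wavelength measurements of the attenuation change yield the oxy- and deoxyhemoglobin concentration differences.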

  13. Time-resolved absorption and hemoglobin concentration difference maps: a method to retrieve depth-related information on cerebral hemodynamics.

    PubMed

    Montcel, Bruno; Chabrier, Renée; Poulet, Patrick

    2006-12-11

    Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a reconstruction-free near infrared spectroscopy (NIRS) method which allows depth-related information on absorption variations to be retrieved. Variations in the absorption coefficient of tissues have been computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law. Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibited completely different behaviours, depending on whether these variations occurred deeply or superficially. The hemodynamic response to a short finger tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger tapping movements.

  14. Self-Organized Service Negotiation for Collaborative Decision Making

    PubMed Central

    Zhang, Bo; Zheng, Ziming

    2014-01-01

    This paper proposes a self-organized service negotiation method for CDM in intelligent and automatic manners. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. And in the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228

  15. Self-organized service negotiation for collaborative decision making.

    PubMed

    Zhang, Bo; Huang, Zhenhua; Zheng, Ziming

    2014-01-01

    This paper proposes a self-organized service negotiation method for CDM in intelligent and automatic manners. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. And in the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM.

  16. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) has been developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, and hence incur a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently at a lower computational cost, and the IMF evaluation index can select the meaningful IMFs automatically.

  17. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    Considering the computational precision and efficiency of robust optimization for complex mechanical assembly relationships such as the turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly system-level approximation model of the overall parameters and the blade-tip clearance, and a set of samples of design parameters and objective response mean and/or standard deviation is then generated using the system approximation model and a design-of-experiments method. Finally, a new response surface approximation model is constructed using those samples, and this approximation model is used for the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way for the robust optimization design of the turbine blade-tip radial running clearance.

  18. Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Crier, Tomyka

    2003-01-01

    With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the demand on the flight instructor, who must otherwise watch every flight. In an experiment, university students conducted six different flights, each consisting of two level turns. The flights were three minutes in duration. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. These level turns were also evaluated using two other computer-based grading methods. One method determined automated grades based on prescribed tolerances in bank angle, airspeed and altitude. The other method used deviations in altitude and bank angle to form a performance index and performance grades.
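
    A toy version of the tolerance-based grading idea might look like the Python sketch below; the tolerance bands, grade cutoffs, and flight data are hypothetical and not those used in the study:

        import numpy as np

        def grade_turn(bank_deg, airspeed_kt, altitude_ft,
                       bank_target=30.0, speed_target=100.0, alt_target=3000.0):
            # Fraction of samples inside fixed tolerance bands maps to a letter grade;
            # the bands and cutoffs here are illustrative only.
            within = lambda x, target, tol: np.mean(np.abs(x - target) <= tol)
            score = (within(bank_deg, bank_target, 5.0)
                     + within(airspeed_kt, speed_target, 10.0)
                     + within(altitude_ft, alt_target, 100.0)) / 3.0
            for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
                if score >= cutoff:
                    return letter
            return "F"

        # Example with synthetic flight data for one level turn.
        t = np.linspace(0, 60, 600)
        print(grade_turn(30 + 3 * np.sin(t), 100 + 5 * np.cos(t / 3), 3000 + 40 * np.sin(t / 5)))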

  19. Water Level Prediction of Lake Cascade Mahakam Using Adaptive Neural Network Backpropagation (ANNBP)

    NASA Astrophysics Data System (ADS)

    Mislan; Gaffar, A. F. O.; Haviluddin; Puspitasari, N.

    2018-04-01

    Information on natural hazards and flood events is indispensable for prevention and improvement. One such hazard is flooding in the areas around a lake. Therefore, forecasting the lake water level to anticipate flooding is required. The purpose of this paper is to implement a computational intelligence method, namely Adaptive Neural Network Backpropagation (ANNBP), to forecast the water level of the Lake Cascade Mahakam. Based on the experiments, the performance of ANNBP indicated that the lake water level predictions were accurate, as measured by mean square error (MSE) and mean absolute percentage error (MAPE). In other words, the computational intelligence method can produce good accuracy. Hybrid and optimized computational intelligence methods are the focus of future work.
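
    The two reported error measures are simple to state; a short Python sketch (with made-up water levels, not the study's data) computes both:

        import numpy as np

        def mse(actual, predicted):
            return np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2)

        def mape(actual, predicted):
            actual, predicted = np.asarray(actual), np.asarray(predicted)
            return 100.0 * np.mean(np.abs((actual - predicted) / actual))

        # Hypothetical daily lake water levels (m) and network predictions.
        observed  = [12.1, 12.3, 12.8, 13.0, 12.6]
        predicted = [12.0, 12.4, 12.7, 13.2, 12.5]
        print(f"MSE = {mse(observed, predicted):.4f}, MAPE = {mape(observed, predicted):.2f}%")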

  20. AN EFFICIENT HIGHER-ORDER FAST MULTIPOLE BOUNDARY ELEMENT SOLUTION FOR POISSON-BOLTZMANN BASED MOLECULAR ELECTROSTATICS

    PubMed Central

    Bajaj, Chandrajit; Chen, Shun-Chuan; Rand, Alexander

    2011-01-01

    In order to compute polarization energy of biomolecules, we describe a boundary element approach to solving the linearized Poisson-Boltzmann equation. Our approach combines several important features including the derivative boundary formulation of the problem and a smooth approximation of the molecular surface based on the algebraic spline molecular surface. State of the art software for numerical linear algebra and the kernel independent fast multipole method is used for both simplicity and efficiency of our implementation. We perform a variety of computational experiments, testing our method on a number of actual proteins involved in molecular docking and demonstrating the effectiveness of our solver for computing molecular polarization energy. PMID:21660123

  1. Concurrent performance in a three-alternative choice situation: response allocation in a Rock/Paper/Scissors game.

    PubMed

    Kangas, Brian D; Berry, Meredith S; Cassidy, Rachel N; Dallery, Jesse; Vaidya, Manish; Hackenberg, Timothy D

    2009-10-01

    Adult human subjects engaged in a simulated Rock/Paper/Scissors game against a computer opponent. The computer opponent's responses were determined by programmed probabilities that differed across 10 blocks of 100 trials each. Response allocation in Experiment 1 was well described by a modified version of the generalized matching equation, with undermatching observed in all subjects. To assess the effects of instructions on response allocation, accurate probability-related information on how the computer was programmed to respond was provided to subjects in Experiment 2. Five of 6 subjects played the counter response of the computer's dominant programmed response near-exclusively (e.g., subjects played paper almost exclusively if the probability of rock was high), resulting in minor overmatching, and higher reinforcement rates relative to Experiment 1. On the whole, the study shows that the generalized matching law provides a good description of complex human choice in a gaming context, and illustrates a promising set of laboratory methods and analytic techniques that capture important features of human choice outside the laboratory.
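
    For context, the generalized matching equation referred to here is commonly written as

        \log\frac{B_1}{B_2} = a \,\log\frac{R_1}{R_2} + \log b,

    where B1 and B2 are the response allocations to the two alternatives, R1 and R2 the reinforcers obtained from them, a the sensitivity parameter (a < 1 indicates undermatching, a > 1 overmatching), and b the bias parameter.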

  2. Cone beam x-ray luminescence computed tomography: a feasibility study.

    PubMed

    Chen, Dongmei; Zhu, Shouping; Yi, Huangjian; Zhang, Xianghan; Chen, Duofang; Liang, Jimin; Tian, Jie

    2013-03-01

    The appearance of x-ray luminescence computed tomography (XLCT) opens new possibilities to perform molecular imaging by x ray. In the previous XLCT system, the sample was irradiated by a sequence of narrow x-ray beams and the x-ray luminescence was measured by a highly sensitive charge coupled device (CCD) camera. This resulted in a relatively long sampling time and relatively low utilization of the x-ray beam. In this paper, a novel cone beam x-ray luminescence computed tomography strategy is proposed, which can fully utilize the x-ray dose and shorten the scanning time. The imaging model and reconstruction method are described, and the validity of the imaging strategy is studied. In the cone beam XLCT system, a cone beam x ray was adopted to illuminate the sample and a highly sensitive CCD camera was utilized to acquire luminescent photons emitted from the sample. Photon scattering in biological tissues makes the reconstruction of the 3D distribution of the x-ray luminescent sample in cone beam XLCT an ill-posed problem. In order to overcome this issue, the authors used the diffusion approximation model to describe photon propagation in tissues and employed a sparse regularization method for reconstruction. An incomplete variables truncated conjugate gradient method and a permissible region strategy were used for reconstruction. Meanwhile, traditional x-ray CT imaging could also be performed in this system. The x-ray attenuation effect has been considered in their imaging model, which helps improve the reconstruction accuracy. First, simulation experiments with cylinder phantoms were carried out to illustrate the validity of the proposed compensated method. The experimental results showed that the location error of the compensated algorithm was smaller than that of the uncompensated method. The permissible region strategy was applied and reduced the reconstruction error to less than 2 mm. The robustness and stability were then evaluated with respect to different view numbers, different regularization parameters, different measurement noise levels, and optical parameter mismatch. The reconstruction results showed that these settings had a small effect on the reconstruction. A nonhomogeneous phantom simulation was also carried out to simulate a more complex experimental situation and to evaluate the proposed method. Second, physical cylinder phantom experiments further showed similar results in their prototype XLCT system. Taken together, the above experiments indicated that the proposed method is applicable to the general case and to actual experiments. Utilizing numerical simulation and physical experiments, the authors demonstrated the validity of the new cone beam XLCT method. Furthermore, compared with the previous narrow beam XLCT, the cone beam XLCT can more fully utilize the x-ray dose, and the scanning time is shortened greatly. The study of both simulation experiments and physical phantom experiments indicated that the proposed method is applicable to the general case and to actual experiments.

  3. Some Applications Of Semigroups And Computer Algebra In Discrete Structures

    NASA Astrophysics Data System (ADS)

    Bijev, G.

    2009-11-01

    An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and of the matrices corresponding to them are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the maps defined and use computer algebra methods, have been carried out. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal to, or close to, the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra, and for research. Some examples of computations with binary relations using Maple are given.
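
    As a concrete illustration of the quantities involved (not the authors' Maple code), a Boolean matrix-vector product and the Hamming distance to the right-hand side can be computed in Python as follows; the random instance and brute-force search are illustrative only:

        import numpy as np

        def bool_matvec(A, x):
            # Boolean product: (Ax)_i = OR_j (A_ij AND x_j)
            return np.any(np.logical_and(A, x), axis=1)

        def hamming(u, v):
            return int(np.sum(u != v))

        rng = np.random.default_rng(0)
        A = rng.integers(0, 2, size=(6, 4)).astype(bool)
        b = rng.integers(0, 2, size=6).astype(bool)

        # Brute-force search over all Boolean vectors x for the smallest Hamming
        # distance between Ax and b (feasible here because the dimension is tiny).
        best = min(((hamming(bool_matvec(A, np.array(bits, dtype=bool)), b), bits)
                    for bits in np.ndindex(*(2,) * A.shape[1])), key=lambda t: t[0])
        print("minimum Hamming distance:", best[0], "attained at x =", best[1])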

  4. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  5. Navier-Stokes computations for circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Pulliam, Thomas H.; Jespersen, Dennis C.; Barth, Timothy J.

    1987-01-01

    Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.

  6. Navier-Stokes computations for circulation controlled airfoils

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Jespersen, D. C.; Barth, T. J.

    1986-01-01

    Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.

  7. A set of devices for Mechanics Laboratory assisted by a Computer

    NASA Astrophysics Data System (ADS)

    Rusu, Alexandru; Pirtac, Constantin

    2015-12-01

    The booklet gives a description of a set of devices designed for carrying out a number of laboratory works in Mechanics in a unified way for students at technical universities. It consists of a clock, connected to a computer, which allows times to be measured with an error not greater than 0.0001 s. It also allows the calculation of the physical quantities measured in the experiment and the compilation of the final report. The least squares method is used throughout the workshop.

  8. Computations of Flow over a Hump Model Using Higher Order Method with Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2005-01-01

    Turbulent separated flow over a two-dimensional hump is computed by solving the RANS equations with the k-omega (SST) turbulence model for the baseline, steady suction, and oscillatory blowing/suction flow control cases. The flow equations and the turbulence model equations are solved using a fifth-order accurate weighted essentially nonoscillatory (WENO) scheme for space discretization and a third-order total variation diminishing (TVD) Runge-Kutta scheme for time integration. Qualitatively, the computed pressure distributions exhibit the same behavior as those observed in the experiments. The computed separation regions are much longer than those observed experimentally. However, the percentage reduction of the separation region in the steady suction case is close to what was measured in the experiment. The computations did not predict the expected reduction in the separation length in the oscillatory case. The predicted turbulent quantities are two to three times smaller than the measured values, pointing towards deficiencies in existing turbulence models when they are applied to strongly separated steady/unsteady flows.

  9. First-principles anharmonic quantum calculations for peptide spectroscopy: VSCF calculations and comparison with experiments.

    PubMed

    Roy, Tapta Kanchan; Sharma, Rahul; Gerber, R Benny

    2016-01-21

    First-principles quantum calculations for anharmonic vibrational spectroscopy of three protected dipeptides are carried out and compared with experimental data. Using hybrid HF/MP2 potentials, the Vibrational Self-Consistent Field with Second-Order Perturbation Correction (VSCF-PT2) algorithm is used to compute the spectra without any ad hoc scaling or fitting. All of the vibrational modes (135 for the largest system) are treated quantum mechanically and anharmonically, using full pair-wise coupling potentials to represent the interaction between different modes. In the hybrid potential scheme the MP2 method is used for the harmonic part of the potential and a modified HF method is used for the anharmonic part. The overall agreement between computed spectra and experiment is very good and reveals different signatures for different conformers. This study shows that first-principles spectroscopic calculations of good accuracy are possible for dipeptides; hence, it opens possibilities for the determination of dipeptide conformer structures by comparison of spectroscopic calculations with experiment.

  10. Design of a specialized computer for on-line monitoring of cardiac stroke volume

    NASA Technical Reports Server (NTRS)

    Webb, J. A., Jr.; Gebben, V. D.

    1972-01-01

    The design of a specialized analog computer for on-line determination of cardiac stroke volume by means of a modified version of the pressure pulse contour method is presented. The design consists of an analog circuit for computation and a timing circuit for detecting necessary events on the pressure waveform. Readouts of arterial pressures, systolic duration, heart rate, percent change in stroke volume, and percent change in cardiac output are provided for monitoring cardiac patients. Laboratory results showed that computational accuracy was within 3 percent, while animal experiments verified the operational capability of the computer. Patient safety considerations are also discussed.

  11. Computing organic stereoselectivity - from concepts to quantitative calculations and predictions.

    PubMed

    Peng, Qian; Duarte, Fernanda; Paton, Robert S

    2016-11-07

    Advances in theory and processing power have established computation as a valuable interpretative and predictive tool in the discovery of new asymmetric catalysts. This tutorial review outlines the theory and practice of modeling stereoselective reactions. Recent examples illustrate how an understanding of the fundamental principles and the application of state-of-the-art computational methods may be used to gain mechanistic insight into organic and organometallic reactions. We highlight the emerging potential of this computational tool-box in providing meaningful predictions for the rational design of asymmetric catalysts. We present an accessible account of the field to encourage future synergy between computation and experiment.

  12. Using the Game Paddle in the Laboratory and Classroom.

    ERIC Educational Resources Information Center

    De Gilio, John F.

    1983-01-01

    Offers a rationale and method for using the hand controllers (game paddles) in the design of computer programs for student use. Methods for their use in entering data as well as in conducting pendulum and acceleration experiments are provided. Complete program listings (for Apple) are included. (JN)

  13. Numerical Modelling of Mechanical Properties of C-Pd Film by Homogenization Technique and Finite Element Method

    NASA Astrophysics Data System (ADS)

    Rymarczyk, Joanna; Kowalczyk, Piotr; Czerwosz, Elzbieta; Bielski, Włodzimierz

    2011-09-01

    The nanomechanical properties of nanostructural carbonaceous-palladium films are studied. The nanoindentation experiments are modelled numerically using the Finite Element Method. The homogenization theory is applied to compute the properties of the composite material, which are used as input data for the nanoindentation calculations.

  14. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating the difference, over the feature space, of the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with notable advantage when the dataset is sparse.
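
    The paper's density-based importance measure is not reproduced here; as a rough structural sketch of a wrapper around SVR, the snippet below scores each feature by how much the SVR predictions change when that feature is permuted. The use of scikit-learn and all function names are assumptions for illustration only.

      import numpy as np
      from sklearn.svm import SVR

      def feature_importance(X, y, n_shuffles=20, seed=0):
          # Wrapper-style proxy: score a feature by the average change in SVR
          # predictions when that feature is permuted (not the paper's measure).
          rng = np.random.default_rng(seed)
          model = SVR().fit(X, y)
          base = model.predict(X)
          scores = np.zeros(X.shape[1])
          for j in range(X.shape[1]):
              diffs = []
              for _ in range(n_shuffles):
                  Xp = X.copy()
                  Xp[:, j] = rng.permutation(Xp[:, j])
                  diffs.append(np.mean(np.abs(model.predict(Xp) - base)))
              scores[j] = np.mean(diffs)
          return scores  # larger score -> feature matters more to the SVR prediction

      X = np.random.default_rng(1).normal(size=(200, 5))
      y = 2.0 * X[:, 0] - X[:, 2] + 0.1 * np.random.default_rng(2).normal(size=200)
      print(feature_importance(X, y))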

  15. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
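
    The abstract does not give the two algorithms in detail; purely to illustrate the idea of estimating a spectral density from matrix-vector products alone, the sketch below uses a generic kernel-polynomial (Chebyshev moment) estimator with stochastic probe vectors. It is written for a small symmetric test matrix and is not the authors' method.

      import numpy as np

      def dos_estimate(matvec, n, n_moments=100, n_probe=10, grid=400, scale=1.0, seed=0):
          # Estimate the density of states of a symmetric operator using only matvecs,
          # assuming its spectrum lies in [-scale, scale].
          rng = np.random.default_rng(seed)
          mu = np.zeros(n_moments)
          for _ in range(n_probe):
              v0 = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe vector
              t_prev, t_curr = v0, matvec(v0) / scale       # T_0 v and T_1 v
              mu[0] += v0 @ t_prev
              mu[1] += v0 @ t_curr
              for k in range(2, n_moments):                 # Chebyshev recurrence
                  t_next = 2.0 * matvec(t_curr) / scale - t_prev
                  mu[k] += v0 @ t_next
                  t_prev, t_curr = t_curr, t_next
          mu /= (n_probe * n)
          k = np.arange(n_moments)
          # Jackson damping suppresses Gibbs oscillations of the truncated expansion.
          g = ((n_moments - k + 1) * np.cos(np.pi * k / (n_moments + 1))
               + np.sin(np.pi * k / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1))) / (n_moments + 1)
          x = np.linspace(-0.99, 0.99, grid)
          cheb = np.cos(np.outer(np.arccos(x), k))          # T_k(x)
          weights = np.where(k == 0, 1.0, 2.0)
          dos = (cheb @ (weights * g * mu)) / (np.pi * np.sqrt(1.0 - x**2))
          return x * scale, dos / scale

      A = np.diag(np.linspace(-0.8, 0.8, 50))               # toy symmetric operator
      energies, rho = dos_estimate(lambda v: A @ v, n=50)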

  16. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
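
    For readers unfamiliar with the baseline the paper improves on, a minimal double-loop Monte Carlo estimator of the expected information gain is sketched below for a toy linear-Gaussian model. The nested inner average is the quantity that suffers from underflow and high cost; the Laplace-based importance sampling of the paper is not shown. All names and the toy model are illustrative assumptions.

      import numpy as np

      def eig_double_loop(xi, n_outer=500, n_inner=500, sigma=0.1, seed=0):
          # Expected information gain of a design xi for the toy model
          # y = xi * theta + noise, theta ~ N(0, 1), noise ~ N(0, sigma^2).
          rng = np.random.default_rng(seed)
          theta = rng.normal(size=n_outer)                      # outer samples from the prior
          y = xi * theta + sigma * rng.normal(size=n_outer)     # one synthetic observation each
          log_lik = -0.5 * ((y - xi * theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
          theta_in = rng.normal(size=(n_outer, n_inner))        # inner samples for the evidence
          lik_in = np.exp(-0.5 * ((y[:, None] - xi * theta_in) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
          log_evid = np.log(np.mean(lik_in, axis=1))            # underflow-prone if n_inner is small
          return np.mean(log_lik - log_evid)

      print(eig_double_loop(xi=1.0))   # larger xi is more informative in this toy model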

  17. Workflows and Provenance: Toward Information Science Solutions for the Natural Sciences.

    PubMed

    Gryk, Michael R; Ludäscher, Bertram

    2017-01-01

    The era of big data and ubiquitous computation has brought with it concerns about ensuring reproducibility in this new research environment. It is easy to assume computational methods self-document by their very nature of being exact, deterministic processes. However, similar to laboratory experiments, ensuring reproducibility in the computational realm requires the documentation of both the protocols used (workflows) as well as a detailed description of the computational environment: algorithms, implementations, software environments as well as the data ingested and execution logs of the computation. These two aspects of computational reproducibility (workflows and execution details) are discussed in the context of biomolecular Nuclear Magnetic Resonance spectroscopy (bioNMR) as well as the PRIMAD model for computational reproducibility.

  18. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation, or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical of the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally intensive forward modeling codes, and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for the spatial uncertainty typical of Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters, aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.

  19. Neutron skyshine calculations for the PDX tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, F.J.; Nigg, D.W.

    1979-01-01

    The Poloidal Divertor Experiment (PDX) at Princeton will be the first operating tokamak to require a substantial radiation shield. The PDX shielding includes a water-filled roof shield over the machine to reduce air scattering skyshine dose in the PDX control room and at the site boundary. During the design of this roof shield a unique method was developed to compute the neutron source emerging from the top of the roof shield for use in Monte Carlo skyshine calculations. The method is based on simple, one-dimensional calculations rather than multidimensional calculations, resulting in considerable savings in computer time and input preparation effort. This method is described.

  20. 3D Parallel Multigrid Methods for Real-Time Fluid Simulation

    NASA Astrophysics Data System (ADS)

    Wan, Feifei; Yin, Yong; Zhang, Suiyu

    2018-03-01

    The multigrid method is widely used in fluid simulation because of its strong convergence. Besides accuracy, computational efficiency is an important factor to consider in order to enable real-time fluid simulation in computer graphics. For this problem, we compare the performance of the Algebraic Multigrid and the Geometric Multigrid in the V-Cycle and Full-Cycle schemes, respectively, and analyze the convergence and speed of the different methods. All the calculations in this paper are performed with parallel computing on the GPU. Finally, we run experiments with 3D grids at several scales and report the experimental results.
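
    As a reminder of the V-cycle structure the record refers to, a serial geometric multigrid V-cycle for a 1D Poisson problem is sketched below. It is not the paper's GPU implementation or its algebraic multigrid variant; grid sizes, smoother, and transfer operators are illustrative choices.

      import numpy as np

      def v_cycle(u, f, h, n_smooth=3):
          # One geometric multigrid V-cycle for -u'' = f on a uniform 1D grid,
          # homogeneous Dirichlet boundaries, weighted-Jacobi smoothing.
          def smooth(u, f, h, iters):
              for _ in range(iters):
                  u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
              return u

          u = smooth(u, f, h, n_smooth)
          if u.size <= 3:
              return u
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # residual
          rc = r[::2].copy()                                             # restriction (injection)
          ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)           # coarse-grid correction
          e = np.zeros_like(u)
          e[::2] = ec
          e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])                           # linear prolongation
          u += e
          return smooth(u, f, h, n_smooth)

      n = 129                                   # 2^k + 1 points so coarsening stays aligned
      x = np.linspace(0.0, 1.0, n)
      f = np.pi ** 2 * np.sin(np.pi * x)        # exact solution is sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = v_cycle(u, f, 1.0 / (n - 1))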

  1. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  2. Transonic Symposium: Theory, Application, and Experiment, volume 1, part 2

    NASA Technical Reports Server (NTRS)

    Foughner, Jerome T., Jr. (Compiler)

    1989-01-01

    In order to assess the state of the art in transonic flow disciplines and to glimpse at future directions, NASA-Langley held a Transonic Symposium. Emphasis was placed on steady, three dimensional external, transonic flow and its simulation, both numerically and experimentally. The symposium included technical sessions on wind tunnel and flight experiments; computational fluid dynamic applications; inviscid methods and grid generation; viscous methods and boundary layer stability; and wind tunnel techniques and wall interference. This, being volume 1, is unclassified.

  3. Implementation of DFT application on ternary optical computer

    NASA Astrophysics Data System (ADS)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its characteristics of a huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which require a large amount of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
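
    The full and partial parallel schemes of the paper are specific to the ternary optical computer; only to make the parallelism of the DFT itself concrete, the sketch below computes each output frequency bin independently (here with CPU threads) and checks the result against numpy's FFT. The decomposition by bins is an illustrative stand-in for the optical mapping.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def dft_bin(x, k):
          # Single output bin of the DFT: X_k = sum_n x_n * exp(-2*pi*i*k*n/N)
          n = np.arange(x.size)
          return np.sum(x * np.exp(-2j * np.pi * k * n / x.size))

      def parallel_dft(x, workers=4):
          # Every frequency bin is independent, so the bins can be computed in parallel.
          with ThreadPoolExecutor(max_workers=workers) as pool:
              return np.array(list(pool.map(lambda k: dft_bin(x, k), range(x.size))))

      x = np.random.default_rng(0).normal(size=64)
      assert np.allclose(parallel_dft(x), np.fft.fft(x))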

  4. Numerical prediction of a draft tube flow taking into account uncertain inlet conditions

    NASA Astrophysics Data System (ADS)

    Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy

    2012-11-01

    The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling in order to take into account in the numerical prediction the physical uncertainties existing on the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also variance of these quantities from which error bars can be deduced on the computed profiles, thus making more significant the comparison between experiment and computation.

  5. Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.

    1975-01-01

    A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.

  6. Enabling the First Ever Measurement of Coherent Neutrino Scattering Through Background Neutron Measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyna, David; Betty, Rita

    Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact on the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities. The purpose of this project was to computationally model the impact of neural population dynamics within the neurobiological memory system in order to examine how subareas in the brain enable pattern separation and completion of information in memory across time as associated experiences.

  7. GYC: A program to compute the turbulent boundary layer on a rotating cone

    NASA Technical Reports Server (NTRS)

    Sullivan, R. D.

    1976-01-01

    A computer program, GYC, which is capable of computing the properties of a compressible turbulent boundary layer on a rotating axisymmetric cone-cylinder body, according to the principles of invariant modeling was studied. The program is extended to include the calculation of the turbulence scale by a differential equation. GYC is in operation on the CDC-7600 computer and has undergone several corrections and improvements as a result of the experience gained. The theoretical basis for the program and the method of implementation, as well as information on its operation are given.

  8. Zero-fringe demodulation method based on location-dependent birefringence dispersion in polarized low-coherence interferometry.

    PubMed

    Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Qin, Zunqi; Zou, Shengliang

    2014-04-01

    We present a high-precision, fast demodulation method for a polarized low-coherence interferometer with location-dependent birefringence dispersion. Based on the characteristics of location-dependent birefringence dispersion and a five-step phase-shifting technique, the method accurately retrieves the peak position of the zero fringe at the central wavelength, which avoids fringe-order ambiguity. The method processes data only in the spatial domain and greatly reduces the computational load. We successfully demonstrated the effectiveness of the proposed method in an optical fiber Fabry-Perot barometric pressure sensing experiment. A measurement precision of 0.091 kPa was achieved over a pressure range of 160 kPa, and the computation time was improved by a factor of 10 compared to the traditional phase-based method, which requires a Fourier transform operation.
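
    The abstract names a five-step phase-shifting step without giving its form; the snippet below implements the standard Hariharan five-frame estimator, assuming frames recorded at phase shifts of -pi, -pi/2, 0, +pi/2, and +pi. It is included only as a reference for that step; the paper's location-dependent dispersion correction is not reproduced.

      import numpy as np

      def five_step_phase(I1, I2, I3, I4, I5):
          # Hariharan five-frame algorithm, frames shifted by pi/2 and centered on I3:
          # phase = atan2(2*(I2 - I4), 2*I3 - I1 - I5), wrapped to (-pi, pi].
          return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

      # Synthetic check: recover a known phase from five shifted fringe samples.
      phi_true = 0.7
      frames = [1.0 + 0.5 * np.cos(phi_true + (k - 2) * np.pi / 2) for k in range(5)]
      assert np.isclose(five_step_phase(*frames), phi_true)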

  9. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, the development of methods for distributed computing receives much attention. One approach to distributed computing is the use of multi-agent systems. Distributed computing organized on conventional networked computers can be exposed to security threats originating from the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as computing nodes. The proposed multi-agent control system for distributed computing makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. Agents running on the computers of a network can configure a distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting computers to the new computer system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces the problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results in the distributed system, which may otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.

  10. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward a common goal in an automated fashion.
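
    The thesis couples an inference engine with an inquiry engine; a minimal sketch of the inquiry step is given below: among candidate experiments, pick the one whose predicted-outcome distribution (under posterior samples of the model parameters) has maximum entropy. The histogram-based entropy estimate and all names are illustrative, not the nested entropy sampling algorithm developed in the thesis.

      import numpy as np

      def pick_experiment(candidates, posterior_samples, predict, noise=0.05, bins=20, seed=0):
          # For each candidate experiment x, simulate outcomes under posterior parameter
          # samples, estimate the entropy of the outcome distribution from a histogram,
          # and return the candidate with maximum predicted-outcome entropy.
          rng = np.random.default_rng(seed)
          best_x, best_h = None, -np.inf
          for x in candidates:
              y = predict(posterior_samples, x) + noise * rng.normal(size=len(posterior_samples))
              counts, _ = np.histogram(y, bins=bins)
              p = counts[counts > 0] / counts.sum()
              h = -np.sum(p * np.log(p))
              if h > best_h:
                  best_x, best_h = x, h
          return best_x

      # Toy model y = a*x + b with uncertain (a, b): measurements far from x = 0 are most informative.
      samples = np.random.default_rng(1).normal(size=(300, 2))
      predict = lambda th, x: th[:, 0] * x + th[:, 1]
      print(pick_experiment(np.linspace(0.0, 3.0, 7), samples, predict))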

  11. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.

  12. Accuracy of Time Integration Approaches for Stiff Magnetohydrodynamics Problems

    NASA Astrophysics Data System (ADS)

    Knoll, D. A.; Chacon, L.

    2003-10-01

    The simulation of complex physical processes with multiple time scales presents a continuing challenge to the computational plasma physicist due to the co-existence of fast and slow time scales. Within computational plasma physics, practitioners have developed and used linearized methods, semi-implicit methods, and time splitting in an attempt to tackle such problems. All of these methods are understood to generate numerical error. We are currently developing algorithms which remove such error for MHD problems [1,2]. These methods do not rely on linearization or time splitting. We are also attempting to analyze the errors introduced by existing ``implicit'' methods using modified equation analysis (MEA) [3]. In this presentation we will briefly cover the major findings in [3]. We will then extend this work further into MHD. This analysis will be augmented with numerical experiments with the hope of gaining insight, particularly into how these errors accumulate over many time steps. [1] L. Chacon,. D.A. Knoll, J.M. Finn, J. Comput. Phys., vol. 178, pp. 15-36 (2002) [2] L. Chacon and D.A. Knoll, J. Comput. Phys., vol. 188, pp. 573-592 (2003) [3] D.A. Knoll , L. Chacon, L.G. Margolin, V.A. Mousseau, J. Comput. Phys., vol. 185, pp. 583-611 (2003)

  13. Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.

    PubMed

    Şener, Duygu Dede; Oğul, Hasan

    2016-06-01

    Understanding time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached by computational methods to model the gene behaviour or its networked interactions with the others by a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question here is how to find the relevant experiments which are potentially useful in analysing current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of query experiment with the findings previously reported. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.

  14. An Interactive Computer-Aided Instructional Strategy and Assessment Methods for System Identification and Adaptive Control Laboratory

    ERIC Educational Resources Information Center

    Özbek, Necdet Sinan; Eker, Ilyas

    2015-01-01

    This study describes a set of real-time interactive experiments that address system identification and model reference adaptive control (MRAC) techniques. In constructing laboratory experiments that contribute to efficient teaching, experimental design and instructional strategy are crucial, but a process for doing this has yet to be defined. This…

  15. Computation of design parameters and visualization of Goertler vortices

    NASA Technical Reports Server (NTRS)

    Verma, Alok K.

    1984-01-01

    A method for analyzing an airfoil with regard to Goertler-type instability was presented. A model for the visualization of Goertler vortices was designed and fabricated, and a smoke generating apparatus was made for use in the experiment. Experiments were conducted to photograph the vortices; however, the smoke generated was not sufficient to make the vortices visible.

  16. User Experience in Digital Games: Differences between Laboratory and Home

    ERIC Educational Resources Information Center

    Takatalo, Jari; Hakkinen, Jukka; Kaistinen, Jyrki; Nyman, Gote

    2011-01-01

    Playing entertainment computer, video, and portable games, namely, digital games, is receiving more and more attention in academic research. Games are studied in different situations with numerous methods, but little is known about if and how the playing situation affects the user experience (UX) in games. In addition, it is hard to understand and…

  17. Girls and computer science: experiences, perceptions, and career aspirations

    NASA Astrophysics Data System (ADS)

    Hur, Jung Won; Andrzejewski, Carey E.; Marghitu, Daniela

    2017-04-01

    The purpose of this mixed methods study was to examine ways to promote computer science (CS) among girls by exploring young women's experiences and perceptions of CS as well as investigating factors affecting their career aspirations. American girls aged 10-16 participated in focus group interviews as well as pre-, post-, and follow-up surveys while attending a CS camp. The analysis of data revealed that although the participants were generally positive about the CS field, they had very limited knowledge of and experience with CS, leading to little aspiration to become computer scientists. The findings also indicated that girls' affinity for and confidence in CS were critical factors affecting their motivation for pursuing a CS-related career. The study demonstrated that participation in the CS camp motivated a small number of participants to be interested in majoring in CS, but the activity time was too short to make a significant impact. Based on the findings, we suggest that providing CS programming experiences in K-12 classrooms is important in order to boost girls' confidence and interest in CS.

  18. Light aircraft lift, drag, and moment prediction: A review and analysis

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Summey, D. C.; Smith, N. S.; Carden, R. K.

    1975-01-01

    The historical development of analytical methods for predicting the lift, drag, and pitching moment of complete light aircraft configurations in cruising flight is reviewed. Theoretical methods, based in part on techniques described in the literature and in part on original work, are developed. These methods form the basis for understanding the computer programs given to: (1) compute the lift, drag, and moment of conventional airfoils, (2) extend these two-dimensional characteristics to three dimensions for moderate-to-high aspect ratio unswept wings, (3) plot complete configurations, (4) convert the fuselage geometric data to the correct input format, (5) compute the fuselage lift and drag, (6) compute the lift and moment of symmetrical airfoils to M = 1.0 by a simplified semi-empirical procedure, and (7) compute, in closed form, the pressure distribution over a prolate spheroid at alpha = 0. Comparisons of the predictions with experiment indicate excellent lift and drag agreement for conventional airfoils and wings. Limited comparisons of body-alone drag characteristics yield reasonable agreement. Also included are discussions for interference effects and techniques for summing the results above to obtain predictions for complete configurations.

  19. Agent-Based Modeling in Molecular Systems Biology.

    PubMed

    Soheilypour, Mohammad; Mofrad, Mohammad R K

    2018-07-01

    Molecular systems orchestrating the biology of the cell typically involve a complex web of interactions among various components and span a vast range of spatial and temporal scales. Computational methods have advanced our understanding of the behavior of molecular systems by enabling us to test assumptions and hypotheses, explore the effect of different parameters on the outcome, and eventually guide experiments. While several different mathematical and computational methods are developed to study molecular systems at different spatiotemporal scales, there is still a need for methods that bridge the gap between spatially-detailed and computationally-efficient approaches. In this review, we summarize the capabilities of agent-based modeling (ABM) as an emerging molecular systems biology technique that provides researchers with a new tool in exploring the dynamics of molecular systems/pathways in health and disease. © 2018 WILEY Periodicals, Inc.

  20. Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids

    NASA Astrophysics Data System (ADS)

    Ma, Xinrong; Duan, Zhijian

    2018-04-01

    The high-order discontinuous Galerkin finite element method (DGFEM) has been known as a good method for solving the Euler and Navier-Stokes equations on unstructured grids, but it requires substantial computational resources. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme is used in order to improve the computational efficiency of the DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. In order to keep each processor load-balanced, a domain decomposition method is employed. Numerical experiments were performed for inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicate that our parallel algorithm improves speedup and efficiency significantly and is suitable for computing complex flows.
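
    The three-stage, third-order TVD Runge-Kutta scheme mentioned in the record is, in its usual form, the Shu-Osher scheme; one generic time step is written out below for an arbitrary semi-discrete operator L(u), independent of the DG spatial discretization and the parallel decomposition. The advection example is an illustrative assumption.

      import numpy as np

      def tvd_rk3_step(u, dt, L):
          # Shu-Osher three-stage, third-order TVD Runge-Kutta step for du/dt = L(u).
          u1 = u + dt * L(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

      # Example: linear advection u_t + u_x = 0 with first-order upwinding on a periodic grid.
      n = 200
      dx = 1.0 / n
      L = lambda u: -(u - np.roll(u, 1)) / dx
      u = np.exp(-100.0 * (np.linspace(0.0, 1.0, n, endpoint=False) - 0.5) ** 2)
      for _ in range(100):
          u = tvd_rk3_step(u, 0.4 * dx, L)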

  1. DNS of Flow in a Low-Pressure Turbine Cascade Using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Garai, Anirban; Diosady, Laslo Tibor; Murman, Scott; Madavan, Nateri

    2015-01-01

    A new computational capability under development for accurate and efficient high-fidelity direct numerical simulation (DNS) and large eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy and is implemented in a computationally efficient manner on a modern high performance computer architecture. A validation study using this method to perform DNS of flow in a low-pressure turbine airfoil cascade is presented. Preliminary results indicate that the method captures the main features of the flow. Discrepancies between the predicted results and the experiments are likely due to the effects of freestream turbulence not being included in the simulation and will be addressed in the final paper.

  2. Project JOVE. [microgravity experiments and applications

    NASA Technical Reports Server (NTRS)

    Lyell, M. J.

    1994-01-01

    The goal of this project is to investigate new areas of research pertaining to free surface-interface fluids mechanics and/or microgravity which have potential commercial applications. This paper presents an introduction to ferrohydrodynamics (FHD), and discusses some applications. Also, computational methods for solving free surface flow problems are presented in detail. Both have diverse applications in industry and in microgravity fluids applications. Three different modeling schemes for FHD flows are addressed and the governing equations, including Maxwell's equations, are introduced. In the area of computational modeling of free surface flows, both Eulerian and Lagrangian schemes are discussed. The state of the art in computational methods applied to free surface flows is elucidated. In particular, adaptive grids and re-zoning methods are discussed. Additional research results are addressed and copies of the publications produced under the JOVE Project are included.

  3. Physics Notes

    ERIC Educational Resources Information Center

    School Science Review, 1977

    1977-01-01

    Includes methods for demonstrating Schlieren effect, measuring refractive index, measuring acceleration, presenting concepts of optics, automatically recording weather, constructing apparaturs for sound experiments, using thermistor thermometers, using the 741 operational amplifier in analog computing, measuring inductance, electronically ringing…

  4. Notes on Experiments.

    ERIC Educational Resources Information Center

    Physics Education, 1982

    1982-01-01

    Describes: (1) an apparatus which provides a simple method for measuring Stefan's constant; (2) a simple phase shifting circuit; (3) a radioactive decay computer program (for ZX81); and (4) phase difference between transformer voltages. (Author/JN)

  5. Multiscale atomistic simulation of metal-oxygen surface interactions: Methodological development, theoretical investigation, and correlation with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Judith C.

    The purpose of this grant is to develop multi-scale theoretical methods to describe the nanoscale oxidation of metal thin films, as the PI (Yang) has extensive previous experience in the experimental elucidation of the initial stages of Cu oxidation, primarily by in situ transmission electron microscopy methods. Through the use and development of computational tools at varying length (and time) scales, from atomistic quantum mechanical calculations and force-field mesoscale simulations to large-scale Kinetic Monte Carlo (KMC) modeling, the fundamental underpinnings of the initial stages of Cu oxidation have been elucidated. The development of computational modeling tools allows for accelerated materials discovery. The theoretical tools developed in this program impact a wide range of technologies that depend on surface reactions, including corrosion, catalysis, and nanomaterials fabrication.

  6. Star Identification Without Attitude Knowledge: Testing with X-Ray Timing Experiment Data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor

    1997-01-01

    As the budget for the scientific exploration of space shrinks, the need for more autonomous spacecraft increases. For a spacecraft with a star tracker, the ability to determine attitude autonomously from a lost-in-space state requires the capability to identify the stars in the field of view of the tracker. Although there have been efforts to produce autonomous star trackers which perform this function internally, many programs cannot afford these sensors. The author previously presented a method for identifying stars without a priori attitude knowledge specifically targeted for onboard computers, as it minimizes the necessary computer storage. The method has previously been tested with simulated data. This paper provides results of star identification without a priori attitude knowledge using flight data from two 8 by 8 degree charge coupled device star trackers onboard the X-Ray Timing Experiment.

  7. Using Computer Simulation for Neurolab 2 Mission Planning

    NASA Technical Reports Server (NTRS)

    Sanders, Betty M.

    1997-01-01

    This paper presents an overview of the procedure used in the creation of a computer simulation video generated by the Graphics Research and Analysis Facility at NASA/Johnson Space Center. The simulation was preceded by an analysis of anthropometric characteristics of crew members and workspace requirements for 13 experiments to be conducted on Neurolab 2 which is dedicated to neuroscience and behavioral research. Neurolab 2 is being carried out as a partnership among national domestic research institutes and international space agencies. The video is a tour of the Spacelab module as it will be configured for STS-90, scheduled for launch in the spring of 1998, and identifies experiments that can be conducted in parallel during that mission. Therefore, this paper will also address methods for using computer modeling to facilitate the mission planning activity.

  8. Chromatographic and computational assessment of lipophilicity using sum of ranking differences and generalized pair-correlation.

    PubMed

    Andrić, Filip; Héberger, Károly

    2015-02-06

    Lipophilicity (logP) represents one of the most studied and most frequently used fundamental physicochemical properties. At present there are several possibilities for its quantitative expression, and many of them stem from chromatographic experiments. Numerous attempts have been made to compare different computational methods, chromatographic methods vs. computational approaches, as well as chromatographic methods and the direct shake-flask procedure, without definitive results, or with findings that are not generally accepted. In the present work, numerous chromatographically derived lipophilicity measures in combination with diverse computational methods were ranked and clustered using novel variable discrimination and ranking approaches based on the sum of ranking differences and the generalized pair correlation method. Available literature logP data measured on HILIC and classical reversed-phase systems, combining different classes of compounds, have been compared using the most frequently used multivariate data analysis techniques (principal component and hierarchical cluster analysis) as well as against the conclusions in the original sources. Chromatographic lipophilicity measures obtained under typical reversed-phase conditions outperform the majority of computationally estimated logPs. Conversely, in the case of HILIC none of the many proposed chromatographic indices outperforms any of the computationally assessed logPs; only two of them (logkmin and kmin) may be selected as recommended chromatographic lipophilicity measures. Both ranking approaches, the sum of ranking differences and the generalized pair correlation method, although based on different backgrounds, provide highly similar variable ordering and grouping, leading to the same conclusions. Copyright © 2015. Published by Elsevier B.V.
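
    The sum-of-ranking-differences comparison used above can be stated compactly: each candidate lipophilicity measure is ranked over the compounds and compared with the ranking induced by a reference column (often the row-wise average). The sketch below is a generic implementation of that idea only; it omits ties handling and the randomization test used for validation, and the names are illustrative.

      import numpy as np

      def sum_of_ranking_differences(data, reference=None):
          # data: rows = compounds (objects), columns = candidate measures (variables).
          # Returns one SRD value per column; smaller SRD = closer to the reference ranking.
          data = np.asarray(data, dtype=float)
          if reference is None:
              reference = data.mean(axis=1)            # consensus reference: row-wise mean
          ref_rank = np.argsort(np.argsort(reference))
          col_ranks = np.argsort(np.argsort(data, axis=0), axis=0)
          return np.abs(col_ranks - ref_rank[:, None]).sum(axis=0)

      scores = np.array([[1.2, 1.1, 0.9],
                         [2.4, 2.0, 2.6],
                         [0.3, 0.5, 0.2],
                         [3.1, 2.9, 3.3]])
      print(sum_of_ranking_differences(scores))        # one SRD value per candidate measure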

  9. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
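
    The Grid Convergence Index evaluation mentioned above follows the standard three-mesh procedure; the small sketch below computes the observed order of accuracy and the fine-grid GCI for a scalar quantity of interest, assuming a constant refinement ratio and monotone convergence. The numerical values are placeholders, and the least-squares variant discussed in the record is not shown.

      import math

      def gci_fine(f_fine, f_medium, f_coarse, r, fs=1.25):
          # Observed order of accuracy from three solutions on meshes refined by ratio r,
          # then the fine-grid Grid Convergence Index with safety factor fs.
          p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
          rel_err = abs((f_fine - f_medium) / f_fine)
          return p, fs * rel_err / (r ** p - 1.0)

      p, gci = gci_fine(f_fine=2.011, f_medium=2.045, f_coarse=2.180, r=2.0)
      print(f"observed order p = {p:.2f}, fine-grid GCI = {100 * gci:.2f}%")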

  10. Adaptive θ-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
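
    As a point of reference for the record above, a plain θ-method for the Black-Scholes PDE on a price grid is sketched below, with the American early-exercise constraint enforced by simple projection onto the payoff after each implicit step. This is a cruder treatment than the linearly implicit, positivity-preserving approach analyzed in the paper; grid sizes, boundary conditions, and parameter values are illustrative assumptions.

      import numpy as np

      def american_put_theta(K=100.0, r=0.05, sigma=0.2, T=1.0, s_max=300.0,
                             n_s=150, n_t=200, theta=0.5):
          # theta-method (theta = 0.5 is Crank-Nicolson) marching in time-to-maturity,
          # with the American constraint imposed by projection onto the payoff.
          ds, dt = s_max / n_s, T / n_t
          s = np.linspace(0.0, s_max, n_s + 1)
          payoff = np.maximum(K - s, 0.0)
          v = payoff.copy()

          i = np.arange(1, n_s)                       # interior nodes
          a = 0.5 * sigma**2 * s[i]**2 / ds**2 - 0.5 * r * s[i] / ds
          b = -sigma**2 * s[i]**2 / ds**2 - r
          c = 0.5 * sigma**2 * s[i]**2 / ds**2 + 0.5 * r * s[i] / ds
          L = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
          I = np.eye(n_s - 1)
          lhs = I - theta * dt * L
          rhs_m = I + (1.0 - theta) * dt * L

          for _ in range(n_t):
              rhs = rhs_m @ v[1:-1]
              rhs[0] += dt * a[0] * K                 # boundary value V(0) = K for the put
              v[1:-1] = np.linalg.solve(lhs, rhs)
              v[0], v[-1] = K, 0.0
              v = np.maximum(v, payoff)               # early-exercise projection
          return s, v

      s, v = american_put_theta()
      print(np.interp(100.0, s, v))                   # option value at S = K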

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Oishik, E-mail: oishik-sen@uiowa.edu; Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com; Choi, K.K., E-mail: kyung-choi@uiowa.edu

    Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach Number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG), are evaluated for their ability to construct surrogate models with sparse data; i.e. using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of interaction of shocked particle laden flows.

  12. Computational analysis of water entry of a circular section at constant velocity based on Reynold's averaged Navier-Stokes method

    NASA Astrophysics Data System (ADS)

    Uddin, M. Maruf; Fuad, Muzaddid-E.-Zaman; Rahaman, Md. Mashiur; Islam, M. Rabiul

    2017-12-01

    With the rapid decrease in the cost of computational infrastructure and more efficient algorithms for solving non-linear problems, Reynolds-averaged Navier-Stokes (RANS) based Computational Fluid Dynamics (CFD) is now widely used. As a preliminary evaluation tool, CFD is used to calculate the hydrodynamic loads on offshore installations, ships, and other structures in the ocean at the initial design stages. Traditionally, wedges have been studied more than circular cylinders because a cylinder section has zero deadrise angle at the instant of water impact, which increases with the increase of submergence. In the present study, the RANS-based commercial code ANSYS Fluent is used to simulate the water entry of a circular section at constant velocity. The present computational results are compared with experiment and other numerical methods.

  13. Aeroelastic Calculations Using CFD for a Typical Business Jet Model

    NASA Technical Reports Server (NTRS)

    Gibbons, Michael D.

    1996-01-01

    Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.

  14. A Fuzzy Computing Model for Identifying Polarity of Chinese Sentiment Words

    PubMed Central

    Huang, Yongfeng; Wu, Xian; Li, Xing

    2015-01-01

    With the rapid growth of online user-generated content on the web, sentiment analysis has become a very active research issue in data mining and natural language processing. As the most important indicator of sentiment, sentiment words, which convey positive and negative polarity, are quite instrumental for sentiment analysis. However, most existing methods for identifying the polarity of sentiment words only consider positive and negative polarity as a crisp (Cantor) set, and no attention is paid to the fuzziness of the polarity intensity of sentiment words. In order to improve performance, we propose a fuzzy computing model to identify the polarity of Chinese sentiment words in this paper. There are three major contributions in this paper. Firstly, we propose a method to compute the polarity intensity of sentiment morphemes and sentiment words. Secondly, we construct a fuzzy sentiment classifier and propose two different methods to compute the parameter of the fuzzy classifier. Thirdly, we conduct extensive experiments on four sentiment word datasets and three review datasets, and the experimental results indicate that our model performs better than the state-of-the-art methods. PMID:26106409

  15. Sequential Design of Experiments to Maximize Learning from Carbon Capture Pilot Plant Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soepyan, Frits B.; Morgan, Joshua C.; Omell, Benjamin P.

    Pilot plant test campaigns can be expensive and time-consuming. Therefore, it is of interest to maximize the amount of learning and the efficiency of the test campaign given the limited number of experiments that can be conducted. This work investigates the use of sequential design of experiments (SDOE) to overcome these challenges by demonstrating its usefulness for a recent solvent-based CO2 capture plant test campaign. Unlike traditional design of experiments methods, SDOE regularly uses information from ongoing experiments to determine the optimum locations in the design space for subsequent runs within the same experiment. However, there are challenges that need to be addressed, including reducing the high computational burden to efficiently update the model, and the need to incorporate the methodology into a computational tool. We address these challenges by applying SDOE in combination with a software tool, the Framework for Optimization, Quantification of Uncertainty and Surrogates (FOQUS) (Miller et al., 2014a, 2016, 2017). The results of applying SDOE on a pilot plant test campaign for CO2 capture suggest that relative to traditional design of experiments methods, SDOE can more effectively reduce the uncertainty of the model, thus decreasing technical risk. Future work includes integrating SDOE into FOQUS and using SDOE to support additional large-scale pilot plant test campaigns.

  16. EGASP: the human ENCODE Genome Annotation Assessment Project

    PubMed Central

    Guigó, Roderic; Flicek, Paul; Abril, Josep F; Reymond, Alexandre; Lagarde, Julien; Denoeud, France; Antonarakis, Stylianos; Ashburner, Michael; Bajic, Vladimir B; Birney, Ewan; Castelo, Robert; Eyras, Eduardo; Ucla, Catherine; Gingeras, Thomas R; Harrow, Jennifer; Hubbard, Tim; Lewis, Suzanna E; Reese, Martin G

    2006-01-01

    Background We present the results of EGASP, a community experiment to assess the state-of-the-art in genome annotation within the ENCODE regions, which span 1% of the human genome sequence. The experiment had two major goals: the assessment of the accuracy of computational methods to predict protein coding genes; and the overall assessment of the completeness of the current human genome annotations as represented in the ENCODE regions. For the computational prediction assessment, eighteen groups contributed gene predictions. We evaluated these submissions against each other based on a 'reference set' of annotations generated as part of the GENCODE project. These annotations were not available to the prediction groups prior to the submission deadline, so that their predictions were blind and an external advisory committee could perform a fair assessment. Results The best methods had at least one gene transcript correctly predicted for close to 70% of the annotated genes. Nevertheless, the multiple transcript accuracy, taking into account alternative splicing, reached only approximately 40% to 50% accuracy. At the coding nucleotide level, the best programs reached an accuracy of 90% in both sensitivity and specificity. Programs relying on mRNA and protein sequences were the most accurate in reproducing the manually curated annotations. Experimental validation shows that only a very small percentage (3.2%) of the selected 221 computationally predicted exons outside of the existing annotation could be verified. Conclusion This is the first such experiment in human DNA, and we have followed the standards established in a similar experiment, GASP1, in Drosophila melanogaster. We believe the results presented here contribute to the value of ongoing large-scale annotation projects and should guide further experimental methods when being scaled up to the entire human genome sequence. PMID:16925836

  17. A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography.

    PubMed

    Timp, Sheila; Karssemeijer, Nico

    2004-05-01

    Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique--based on dynamic programming--to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee resulting contours to be closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69, for the other two methods 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristics analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region based evaluation the area Az under the receiver operating characteristics curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between other methods were not significant.
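
    For readers unfamiliar with the overlap criterion mentioned above, a common choice is the intersection-over-union (Jaccard) index between the automatic and manual segmentation masks; the sketch below computes it for two synthetic masks. The exact overlap definition used in the paper may differ.

    ```python
    import numpy as np

    def overlap(auto_mask, manual_mask):
        # Jaccard index: |A intersect M| / |A union M|.
        auto_mask = auto_mask.astype(bool)
        manual_mask = manual_mask.astype(bool)
        inter = np.logical_and(auto_mask, manual_mask).sum()
        union = np.logical_or(auto_mask, manual_mask).sum()
        return inter / union if union else 1.0

    a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True   # automatic contour
    m = np.zeros((64, 64), dtype=bool); m[20:52, 18:50] = True   # manual contour
    print(f"overlap = {overlap(a, m):.2f}")
    ```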

  18. Integrating Computational Science Tools into a Thermodynamics Course

    NASA Astrophysics Data System (ADS)

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and model and simulate complex concepts. In order to prepare future engineers with the ability to use computational tools in the context of their disciplines, some universities have started to integrate these tools within core courses. This paper evaluates the effect of introducing three computational modules within a thermodynamics course on student disciplinary learning and self-beliefs about computation. The results suggest that using worked examples paired with computer simulations to implement these modules has a positive effect on (1) student disciplinary learning, (2) student perceived ability to do scientific computing, and (3) student perceived ability to do computer programming. These effects were identified regardless of the students' prior experiences with computer programming.

  19. Examining Neuronal Connectivity and Its Role in Learning and Memory

    NASA Astrophysics Data System (ADS)

    Gala, Rohan

    Learning and long-term memory formation are accompanied by changes in the patterns and weights of synaptic connections in the underlying neuronal network. However, the fundamental rules that drive connectivity changes, and the precise structure-function relationships within neuronal networks, remain elusive. Technological improvements over the last few decades have enabled the observation of large but specific subsets of neurons and their connections in unprecedented detail. Devising robust and automated computational methods is critical to distill information from ever-increasing volumes of raw experimental data. Moreover, statistical models and theoretical frameworks are required to interpret the data and assemble evidence into an understanding of brain function. In this thesis, I first describe computational methods to reconstruct connectivity based on light microscopy imaging experiments. Next, I use these methods to quantify structural changes in connectivity based on in vivo time-lapse imaging experiments. Finally, I present a theoretical model of associative learning that can explain many stereotypical features of experimentally observed connectivity.

  20. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the integer-pixel search computations. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation in order to track the maxilla segment after LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.
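
    The 6-degree-of-freedom tracking step ultimately reduces to recovering a rotation and translation from matched 3D marker coordinates. A standard building block for that step is the SVD-based (Kabsch) rigid fit sketched below; the paper's 3D-DIC correlation pipeline itself is not reproduced, and the marker coordinates are synthetic.

    ```python
    import numpy as np

    def rigid_pose(ref_pts, cur_pts):
        # Least-squares rotation R and translation t mapping ref_pts onto cur_pts.
        ref_c, cur_c = ref_pts.mean(0), cur_pts.mean(0)
        H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cur_c - R @ ref_c
        return R, t

    ref = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 30]], float)
    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    cur = ref @ true_R.T + np.array([1.0, 2.0, 0.5])      # rotated and translated markers
    R, t = rigid_pose(ref, cur)
    print(np.allclose(R, true_R), np.round(t, 3))
    ```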

  1. Data handling and analysis for the 1971 corn blight watch experiment

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.; Phillips, T. L.

    1973-01-01

    The overall corn blight watch experiment data flow is described and the organization of the LARS/Purdue data center is discussed. Data analysis techniques are discussed in general, and the use of statistical multispectral pattern recognition methods for automatic computer analysis of aircraft scanner data is described. Some of the results obtained are discussed, and the implications of the experiment for future data communication requirements of earth resource survey systems are discussed.

  2. Computational challenges of structure-based approaches applied to HIV.

    PubMed

    Forli, Stefano; Olson, Arthur J

    2015-01-01

    Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.

  3. Applying Learning Diagnosis Diagram in Computer Aided Instructions: Research, Practice and Evaluation

    ERIC Educational Resources Information Center

    Wu, YuLung

    2010-01-01

    In Taiwan, when students learn in experiment-related courses, they are often grouped into several teams. The familiar method for this kind of group learning is "Cooperative Learning". A well-organized grouping strategy improves cooperative learning and increases the number of activities. This study proposes a novel pedagogical method by adopting…

  4. 26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...

  5. 26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...

  6. 26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...

  7. 26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...

  8. Dispersion Interactions between Rare Gas Atoms: Testing the London Equation Using ab Initio Methods

    ERIC Educational Resources Information Center

    Halpern, Arthur M.

    2011-01-01

    A computational chemistry experiment is described in which students can use advanced ab initio quantum mechanical methods to test the ability of the London equation to account quantitatively for the attractive (dispersion) interactions between rare gas atoms. Using readily available electronic structure applications, students can calculate the…

  9. From the Teachers' Perspective: A Way of Simplicity for Multimedia Design

    ERIC Educational Resources Information Center

    Hirca, Necati

    2009-01-01

    Presently, teaching and presentation methods are changing from chalk and blackboards to interactive methods. Multimedia technology is now used in many schools; however, many of the commercially available software programs do not allow teachers to share their experiences. Adobe Captivate 3 is a computer program that enables teachers, without…

  10. The development of an explicit thermochemical nonequilibrium algorithm and its application to compute three dimensional AFE flowfields

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    This study presents a three-dimensional explicit, finite-difference, shock-capturing numerical algorithm applied to viscous hypersonic flows in thermochemical nonequilibrium. The algorithm employs a two-temperature physical model. Equations governing the finite-rate chemical reactions are fully coupled to the gas dynamic equations using a novel coupling technique. The new coupling method maintains stability in the explicit, finite-rate formulation while allowing relatively large global time steps. The code uses flux-vector splitting. Comparisons with experimental data and other numerical computations verify the accuracy of the present method. The code is used to compute the three-dimensional flowfield over the Aeroassist Flight Experiment (AFE) vehicle at one of its trajectory points.

  11. The Model Experiments and Finite Element Analysis on Deformation and Failure by Excavation of Grounds in Foregoing-roof Method

    NASA Astrophysics Data System (ADS)

    Sotokoba, Yasumasa; Okajima, Kenji; Iida, Toshiaki; Tanaka, Tadatsugu

    We propose the trenchless box culvert construction method for constructing box culverts under shallow soil cover while keeping roads or tracks open. When this construction method is used, it is necessary to clarify the deformation and shear failure of the ground caused by excavation. In order to investigate the soil behavior, model experiments and elasto-plastic finite element analysis were performed. In the model experiments, it was shown that the shear failure developed from the end of the roof to the toe of the boundary surface. In the finite element analysis, a shear band effect was introduced. Comparing the observed shear bands in the model experiments with the computed maximum shear strain contours, it was found that the observed direction of the shear band could be simulated reasonably well by the finite element analysis. We may say that the finite element method used in this study is a useful tool for this construction method.

  12. Weighted analysis of paired microarray experiments.

    PubMed

    Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle

    2005-01-01

    In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type, with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
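
    A minimal illustration of the weighting idea, not the authors' full empirical-Bayes mixed model: each patient's before/after log-ratios are weighted inversely to that patient's estimated variance, so low-quality arrays contribute less to the per-gene effect estimate. The data below are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes, n_patients = 1000, 6
    patient_sd = np.array([0.2, 0.2, 0.3, 0.4, 0.8, 1.2])      # unequal array quality
    log_ratios = rng.normal(0.0, patient_sd, size=(n_genes, n_patients))

    var_hat = log_ratios.var(axis=0, ddof=1)                    # per-patient variance estimate
    weights = (1.0 / var_hat) / (1.0 / var_hat).sum()           # inverse-variance weights
    weighted_effect = log_ratios @ weights                      # per-gene weighted mean
    print("weights:", np.round(weights, 3))
    ```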

  13. Chemistry Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1983

    1983-01-01

    Presents chemistry experiments, laboratory procedures, demonstrations, teaching suggestions, and classroom materials/activities. These include: game for teaching ionic formulas; method for balancing equations; description of useful redox series; computer programs (with listings) for water electrolysis simulation and for determining chemical…

  14. A Solution Framework for Environmental Characterization Problems

    EPA Science Inventory

    This paper describes experiences developing a grid-enabled framework for solving environmental inverse problems. The solution approach taken here couples environmental simulation models with global search methods and requires readily available computational resources of the grid ...

  15. Searching molecular structure databases with tandem mass spectra using CSI:FingerID

    PubMed Central

    Dührkop, Kai; Shen, Huibin; Meusel, Marvin; Rousu, Juho; Böcker, Sebastian

    2015-01-01

    Metabolites provide a direct functional signature of cellular state. Untargeted metabolomics experiments usually rely on tandem MS to identify the thousands of compounds in a biological sample. Today, the vast majority of metabolites remain unknown. We present a method for searching molecular structure databases using tandem MS data of small molecules. Our method computes a fragmentation tree that best explains the fragmentation spectrum of an unknown molecule. We use the fragmentation tree to predict the molecular structure fingerprint of the unknown compound using machine learning. This fingerprint is then used to search a molecular structure database such as PubChem. Our method is shown to improve on the competing methods for computational metabolite identification by a considerable margin. PMID:26392543
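
    Once a molecular fingerprint has been predicted for the unknown compound, candidate structures from a database can be ranked by fingerprint similarity. The sketch below uses the Tanimoto coefficient on random binary fingerprints purely as an illustration; CSI:FingerID's actual scoring is more elaborate.

    ```python
    import numpy as np

    def tanimoto(a, b):
        # Similarity of two binary fingerprints: shared bits over combined bits.
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    rng = np.random.default_rng(2)
    predicted = rng.random(128) > 0.7                          # predicted fingerprint bits
    database = {f"candidate_{i}": rng.random(128) > 0.7 for i in range(5)}
    ranking = sorted(database, key=lambda k: tanimoto(predicted, database[k]),
                     reverse=True)
    print(ranking[:3])
    ```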

  16. Structural Optimization of a Force Balance Using a Computational Experiment Design

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2002-01-01

    This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, go undetected. The proposed method combines Modern Design of Experiments techniques, to direct the exploration of the multi-dimensional design space, with a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, providing a systematic foundation for advancements in structural design.

  17. An efficient method to identify differentially expressed genes in microarray experiments

    PubMed Central

    Qin, Huaizhen; Feng, Tao; Harding, Scott A.; Tsai, Chung-Jui; Zhang, Shuanglin

    2013-01-01

    Motivation Microarray experiments typically analyze thousands to tens of thousands of genes from small numbers of biological replicates. The fact that genes are normally expressed in functionally relevant patterns suggests that gene-expression data can be stratified and clustered into relatively homogenous groups. Cluster-wise dimensionality reduction should make it feasible to improve screening power while minimizing information loss. Results We propose a powerful and computationally simple method for finding differentially expressed genes in small microarray experiments. The method incorporates a novel stratification-based tight clustering algorithm, principal component analysis and information pooling. Comprehensive simulations show that our method is substantially more powerful than the popular SAM and eBayes approaches. We applied the method to three real microarray datasets: one from a Populus nitrogen stress experiment with 3 biological replicates; and two from public microarray datasets of human cancers with 10 to 40 biological replicates. In all three analyses, our method proved more robust than the popular alternatives for identification of differentially expressed genes. Availability The C++ code to implement the proposed method is available upon request for academic use. PMID:18453554
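
    The cluster-wise dimensionality reduction idea can be illustrated roughly as follows (this is not the authors' tight-clustering or information-pooling procedure, and the data are simulated): a cluster of co-expressed genes is summarized by its first principal component across samples, and a two-sample t statistic is computed on that reduced score.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_genes, n_ctrl, n_trt = 20, 3, 3
    signal = np.concatenate([np.zeros(n_ctrl), np.ones(n_trt)])     # shared expression pattern
    cluster = 0.8 * signal + rng.normal(0, 0.3, size=(n_genes, n_ctrl + n_trt))

    centered = cluster - cluster.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    score = vt[0]                                # first principal component over samples

    ctrl, trt = score[:n_ctrl], score[n_ctrl:]
    t = (trt.mean() - ctrl.mean()) / np.sqrt(trt.var(ddof=1) / n_trt +
                                             ctrl.var(ddof=1) / n_ctrl)
    print(f"cluster-level t statistic: {abs(t):.2f}")
    ```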

  18. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Schaeffler, N. W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2007-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are summarized. Results in both cases are compared to experiment.

  19. Proof of concept of a simple computer-assisted technique for correcting bone deformities.

    PubMed

    Ma, Burton; Simpson, Amber L; Ellis, Randy E

    2007-01-01

    We propose a computer-assisted technique for correcting bone deformities using the Ilizarov method. Our technique is an improvement over prior art in that it does not require a tracking system, navigation hardware and software, or intraoperative registration. Instead, we rely on a postoperative CT scan to obtain all of the information necessary to plan the correction and compute a correction schedule for the patient. Our laboratory experiments using plastic phantoms produced deformity corrections accurate to within 3.0 degrees of rotation and 1 mm of lengthening.

  20. The H2+ + He proton transfer reaction: quantum reactive differential cross sections to be linked with future velocity mapping experiments

    NASA Astrophysics Data System (ADS)

    Hernández Vera, Mario; Wester, Roland; Gianturco, Francesco Antonio

    2018-01-01

    We construct the velocity map images of the proton transfer reaction between helium and the molecular hydrogen ion H2+. We perform simulations of imaging experiments at one representative total collision energy, taking into account the inherent aberrations of the velocity mapping, in order to explore the feasibility of direct comparisons between theory and future experiments planned in our laboratory. The asymptotic angular distributions of the fragments in 3D velocity space are determined from the quantum state-to-state differential reactive cross sections and reaction probabilities, which are computed using the time-independent coupled-channel hyperspherical coordinate method. The calculations employ an earlier ab initio potential energy surface computed at the FCI/cc-pVQZ level of theory. The present simulations indicate that the planned experiments would be selective enough to differentiate between product distributions resulting from different initial internal states of the reactants.

  1. The service telemetry and control device for space experiment “GRIS”

    NASA Astrophysics Data System (ADS)

    Glyanenko, A. S.

    2016-02-01

    Tasks such as fine control of scientific instruments (for example, adjustment of the measuring paths), collection of auxiliary service information (instrument health, the conditions under which the experiment is carried out, etc.), and preliminary data processing are relevant to any space instrument. Modern instruments for space research cannot be imagined without digital data processing methods and specialized or standard interfaces and computing facilities. To implement these functions in the "GRIS" experiment on board the ISS while minimizing size and power consumption, a "system-on-chip" concept was chosen and realized. The computing kernel and all necessary peripherals are implemented in a Microsemi ProASIC3-family programmable logic device with a capacity of up to 3M system gates. In this paper we discuss the structure, capabilities and resources of the service telemetry and control device for the "GRIS" space experiment.

  2. Computational and informatics strategies for identification of specific protein interaction partners in affinity purification mass spectrometry experiments

    PubMed Central

    Nesvizhskii, Alexey I.

    2013-01-01

    Analysis of protein interaction networks and protein complexes using affinity purification and mass spectrometry (AP/MS) is among the most commonly used and successful applications of proteomics technologies. One of the foremost challenges of AP/MS data is the large number of false positive protein interactions present in unfiltered datasets. Here we review computational and informatics strategies for detecting specific protein interaction partners in AP/MS experiments, with a focus on incomplete (as opposed to genome-wide) interactome mapping studies. These strategies range from standard statistical approaches, to empirical scoring schemes optimized for a particular type of data, to advanced computational frameworks. The common denominator among these methods is the use of label-free quantitative information such as spectral counts or integrated peptide intensities that can be extracted from AP/MS data. We also discuss related issues such as combining multiple biological or technical replicates, and dealing with data generated using different tagging strategies. Computational approaches for benchmarking of scoring methods are discussed, and the need for generation of reference AP/MS datasets is highlighted. Finally, we discuss the possibility of more extended modeling of experimental AP/MS data, including integration with external information such as protein interaction predictions based on functional genomics data. PMID:22611043

  3. Numerical experiment for ultrasonic-measurement-integrated simulation of three-dimensional unsteady blood flow.

    PubMed

    Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki

    2008-08-01

    Integration of ultrasonic measurement and numerical simulation is a possible way to break through limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector determined by measured and computed Doppler velocities at feedback points are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating low time resolution of ultrasonic measurement were investigated by a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53% in the feedback domain which covered the aneurysm, respectively. Local maximum wall shear stress was estimated, showing both the proper position and the value with 1% deviance. A properly designed intermittent feedback applied only at the time when measurement data were obtained had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.
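
    A toy one-dimensional analogue of the feedback idea, with made-up parameters: a diffusion solver is nudged toward "measured" values at a few sparse feedback points by adding a proportional feedback term to the update, which pulls the computed field toward the measurement. The actual UMI scheme feeds Doppler velocities back into a 3D Navier-Stokes computation.

    ```python
    import numpy as np

    nx, dx, dt, nu, gain = 101, 0.01, 1e-4, 0.1, 500.0
    x = np.linspace(0.0, 1.0, nx)
    truth = np.sin(np.pi * x)                    # stand-in for the "true" measured field
    u = np.zeros(nx)                             # simulation started from a wrong state
    feedback_idx = np.arange(10, nx - 10, 20)    # sparse "measurement" points

    for _ in range(20000):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        force = np.zeros(nx)
        force[feedback_idx] = gain * (truth[feedback_idx] - u[feedback_idx])
        u[1:-1] += dt * (nu * lap[1:-1] + force[1:-1])

    # Without feedback the computed field would remain zero (rms error about 0.71).
    print(f"rms error with feedback: {np.sqrt(np.mean((u - truth) ** 2)):.3f}")
    ```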

  4. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    PubMed

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As random moves of a FRW are continuously adapted to local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs. Copyright © 2010 Elsevier Inc. All rights reserved.
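
    A fixed-step, deliberately simplified version of the idea is sketched below for free one-dimensional diffusion: each random walker accumulates a phase under an effective bipolar gradient, and the echo attenuation should approach exp(-bD). The adaptive step sizes that make the published fast-random-walk method efficient in multiscale pore geometries are omitted, and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    D, delta, Delta = 2.0e-9, 5e-3, 20e-3                 # m^2/s, s, s
    g, gamma = 0.05, 2.675e8                              # T/m, rad/(s T)
    n_walkers, n_steps = 20000, 2000
    dt = (Delta + delta) / n_steps
    t = np.arange(n_steps) * dt
    # Effective gradient: +g during the first pulse, -g during the second.
    grad = np.where(t < delta, g, 0.0) - np.where((t >= Delta) & (t < Delta + delta), g, 0.0)

    pos = np.zeros(n_walkers)
    phase = np.zeros(n_walkers)
    for i in range(n_steps):
        pos += rng.normal(0.0, np.sqrt(2.0 * D * dt), n_walkers)   # free diffusion step
        phase += gamma * grad[i] * pos * dt                        # phase accumulation
    signal = np.abs(np.mean(np.exp(1j * phase)))

    b = (gamma * g * delta) ** 2 * (Delta - delta / 3.0)
    print(f"simulated attenuation {signal:.3f} vs analytic exp(-bD) {np.exp(-b * D):.3f}")
    ```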

  5. DONBOL: A computer program for predicting axisymmetric nozzle afterbody pressure distributions and drag at subsonic speeds

    NASA Technical Reports Server (NTRS)

    Putnam, L. E.

    1979-01-01

    A Neumann solution for inviscid external flow was coupled to a modified Reshotko-Tucker integral boundary-layer technique, the control volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer algorithm of this procedure was written and is called DONBOL. A description of the computer program and a guide to its use is given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies which have the jet exhaust flow simulated by solid bodies. For nozzle configurations which have the jet exhaust simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency results because the method neglects the effects of jet plume entrainment. This method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.

  6. Toward a structure determination method for biomineral-associated protein using combined solid- state NMR and computational structure prediction.

    PubMed

    Masica, David L; Ash, Jason T; Ndao, Moise; Drobny, Gary P; Gray, Jeffrey J

    2010-12-08

    Protein-biomineral interactions are paramount to materials production in biology, including the mineral phase of hard tissue. Unfortunately, the structure of biomineral-associated proteins cannot be determined by X-ray crystallography or solution nuclear magnetic resonance (NMR). Here we report a method for determining the structure of biomineral-associated proteins. The method combines solid-state NMR (ssNMR) and ssNMR-biased computational structure prediction. In addition, the algorithm is able to identify lattice geometries most compatible with ssNMR constraints, representing a quantitative, novel method for investigating crystal-face binding specificity. We use this method to determine most of the structure of human salivary statherin interacting with the mineral phase of tooth enamel. Computation and experiment converge on an ensemble of related structures and identify preferential binding at three crystal surfaces. The work represents a significant advance toward determining structure of biomineral-adsorbed protein using experimentally biased structure prediction. This method is generally applicable to proteins that can be chemically synthesized. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

    This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.

  8. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE PAGES

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

    2016-01-01

    This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
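
    A much-simplified flavor of subspace recycling is sketched below (it is not the paper's POD-augmented three-stage solver): solutions from earlier right-hand sides are kept as a recycled subspace, a Galerkin projection onto that subspace supplies the initial guess for the next system, and plain conjugate gradients finishes the solve. The matrix, right-hand sides, and dimensions are synthetic.

    ```python
    import numpy as np

    def cg(A, b, x0, tol=1e-8, max_iter=1000):
        # Standard conjugate gradients for SPD A, started from x0.
        x = x0.copy()
        r = b - A @ x
        p, rs = r.copy(), r @ r
        for k in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                return x, k + 1
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x, max_iter

    rng = np.random.default_rng(5)
    n = 200
    Q = rng.standard_normal((n, n))
    A = Q @ Q.T + n * np.eye(n)                            # SPD matrix, fixed across solves
    base = rng.standard_normal(n)
    rhs_sequence = [base + 0.05 * rng.standard_normal(n) for _ in range(4)]

    W = np.zeros((n, 0))                                   # recycled subspace
    for i, b in enumerate(rhs_sequence):
        if W.shape[1]:
            y = np.linalg.solve(W.T @ (A @ W), W.T @ b)    # Galerkin projection onto span(W)
            x0 = W @ y
        else:
            x0 = np.zeros(n)
        x, iters = cg(A, b, x0)
        W = np.linalg.qr(np.column_stack([W, x]))[0]       # keep an orthonormal basis
        print(f"system {i}: CG iterations = {iters}")
    ```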

  9. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

    A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for the primitive variables. This method combines the best features of data management and computational efficiency of space marching procedures with the generality and stability of time dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results of adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency improvement of at least one order of magnitude over the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.

  10. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
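
    The gist of mixing two model estimators can be illustrated by combining two noisy estimates of the same quantity with weights inversely proportional to their estimated variances, so the more reliable estimator dominates; the sketch below does exactly that on synthetic numbers. The paper's actual scheme combines the EM- and LLP-based decoders in a more principled, ERP-specific way.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    truth = 1.0
    est_a = truth + rng.normal(0.0, 0.4, 1000)    # higher-variance estimator (EM-like)
    est_b = truth + rng.normal(0.0, 0.3, 1000)    # lower-variance estimator (LLP-like)

    w_a, w_b = 1.0 / est_a.var(ddof=1), 1.0 / est_b.var(ddof=1)
    mixed = (w_a * est_a + w_b * est_b) / (w_a + w_b)     # inverse-variance combination

    for name, est in [("A", est_a), ("B", est_b), ("mixed", mixed)]:
        print(f"{name:5s} rmse = {np.sqrt(np.mean((est - truth) ** 2)):.3f}")
    ```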

  11. Integrative Utilization of Microenvironments, Biomaterials and Computational Techniques for Advanced Tissue Engineering.

    PubMed

    Shamloo, Amir; Mohammadaliha, Negar; Mohseni, Mina

    2015-10-20

    This review aims to propose the integrative implementation of microfluidic devices, biomaterials, and computational methods that can lead to a significant progress in tissue engineering and regenerative medicine researches. Simultaneous implementation of multiple techniques can be very helpful in addressing biological processes. Providing controllable biochemical and biomechanical cues within artificial extracellular matrix similar to in vivo conditions is crucial in tissue engineering and regenerative medicine researches. Microfluidic devices provide precise spatial and temporal control over cell microenvironment. Moreover, generation of accurate and controllable spatial and temporal gradients of biochemical factors is attainable inside microdevices. Since biomaterials with tunable properties are a worthwhile option to construct artificial extracellular matrix, in vitro platforms that simultaneously utilize natural, synthetic, or engineered biomaterials inside microfluidic devices are phenomenally advantageous to experimental studies in the field of tissue engineering. Additionally, collaboration between experimental and computational methods is a useful way to predict and understand mechanisms responsible for complex biological phenomena. Computational results can be verified by using experimental platforms. Computational methods can also broaden the understanding of the mechanisms behind the biological phenomena observed during experiments. Furthermore, computational methods are powerful tools to optimize the fabrication of microfluidic devices and biomaterials with specific features. Here we present a succinct review of the benefits of microfluidic devices, biomaterial, and computational methods in the case of tissue engineering and regeneration medicine. Furthermore, some breakthroughs in biological phenomena including the neuronal axon development, cancerous cell migration and blood vessel formation via angiogenesis by virtue of the aforementioned approaches are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Computational models for predicting interactions with membrane transporters.

    PubMed

    Xu, Y; Shen, Q; Liu, X; Lu, J; Li, S; Luo, C; Gong, L; Luo, X; Zheng, M; Jiang, H

    2013-01-01

    Membrane transporters, including the two major families of ATP-binding cassette (ABC) transporters and solute carrier (SLC) transporters, are proteins that play important roles in facilitating the movement of molecules into and out of cells. Consequently, these transporters can be major determinants of the therapeutic efficacy, toxicity and pharmacokinetics of a variety of drugs. Considering the time and expense that bio-experiments require, research should be driven by evaluation of efficacy and safety, and computational methods have arisen as a complementary choice. In this article, we provide an overview of the contribution that computational methods have made to the transporter field in the past decades. At the beginning, we present a brief introduction to the structure and function of major members of the two transporter families. In the second part, we focus on widely used computational methods in different aspects of transporter research. In the absence of a high-resolution structure for most transporters, homology modeling is a useful tool to interpret experimental data and potentially guide experimental studies. We summarize reported homology modeling in this review. Research with computational methods covers the major members of the transporter families and a variety of topics, including the classification of substrates and/or inhibitors, prediction of protein-ligand interactions, constitution of the binding pocket, phenotypes of non-synonymous single-nucleotide polymorphisms, and conformation analysis that tries to explain the mechanism of action. As an example, one of the most important transporters, P-gp, is elaborated to explain the differences and advantages of various computational models. In the third part, the challenges of developing computational methods to obtain reliable predictions, as well as potential future directions in transporter-related modeling, are discussed.

  13. Taming Many-Parameter BSM Models with Bayesian Neural Networks

    NASA Astrophysics Data System (ADS)

    Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.

    2017-09-01

    The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.

  14. A comparative trial of paper-and-pencil versus computer administration of the Quality of Life in Reflux and Dyspepsia (QOLRAD) questionnaire.

    PubMed

    Kleinman, L; Leidy, N K; Crawley, J; Bonomi, A; Schoenfeld, P

    2001-02-01

    Although most health-related quality of life questionnaires are self-administered by means of paper and pencil, new technologies for automated computer administration are becoming more readily available. Novel methods of instrument administration must be assessed for score equivalence in addition to consistency in reliability and validity. The present study compared the psychometric characteristics (score equivalence and structure, internal consistency, and reproducibility reliability and construct validity) of the Quality of Life in Reflux And Dyspepsia (QOLRAD) questionnaire when self-administered by means of paper and pencil versus touch-screen computer. The influence of age, education, and prior experience with computers on score equivalence was also examined. This crossover trial randomized 134 patients with gastroesophageal reflux disease to 1 of 2 groups: paper-and-pencil questionnaire administration followed by computer administration or computer administration followed by use of paper and pencil. To minimize learning effects and respondent fatigue, administrations were scheduled 3 days apart. A random sample of 32 patients participated in a 1-week reproducibility evaluation of the computer-administered QOLRAD. QOLRAD scores were equivalent across the 2 methods of administration regardless of subject age, education, and prior computer use. Internal consistency levels were very high (alpha = 0.93-0.99). Interscale correlations were strong and generally consistent across methods (r = 0.7-0.87). Correlations between the QOLRAD and Short Form 36 (SF-36) were high, with no significant differences by method. Test-retest reliability of the computer-administered QOLRAD was also very high (ICC = 0.93-0.96). Results of the present study suggest that the QOLRAD is reliable and valid when self-administered by means of computer touch-screen or paper and pencil.
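
    The internal-consistency figures quoted above (alpha = 0.93-0.99) refer to Cronbach's alpha; the sketch below shows the standard computation on simulated item responses (not the study's data).

    ```python
    import numpy as np

    def cronbach_alpha(items):
        # items: array of shape (n_respondents, n_items).
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(6)
    latent = rng.normal(size=(150, 1))                     # common trait per respondent
    items = latent + 0.5 * rng.normal(size=(150, 10))      # 10 correlated questionnaire items
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```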

  15. Classifying Web Pages by Using Knowledge Bases for Entity Retrieval

    NASA Astrophysics Data System (ADS)

    Kiritani, Yusuke; Ma, Qiang; Yoshikawa, Masatoshi

    In this paper, we propose a novel method to classify Web pages by using knowledge bases for entity search, which is a kind of typical Web search for information related to a person, location or organization. First, we map a Web page to entities according to the similarities between the page and the entities. Various methods for computing such similarity are applied. For example, we can compute the similarity between a given page and a Wikipedia article describing a certain entity. The frequency of an entity appearing in the page is another factor used in computing the similarity. Second, we construct a directed acyclic graph, named PEC graph, based on the relations among Web pages, entities, and categories, by referring to YAGO, a knowledge base built on Wikipedia and WordNet. Finally, by analyzing the PEC graph, we classify Web pages into categories. The results of some preliminary experiments validate the methods proposed in this paper.

  16. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of efficiency of an aged high purity germanium (HPGe) detector for gaseous sources have been presented in the paper. X-ray radiography of the detector has been performed to get detector dimensions for computational purposes. The dead layer thickness of HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken for obtaining energy dependant efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate activity of cover gas sample of a fast reactor.

  17. Sources of computer self-efficacy: The relationship to outcome expectations, computer anxiety, and intention to use computers

    NASA Astrophysics Data System (ADS)

    Antoine, Marilyn V.

    2011-12-01

    The purpose of this research was to extend earlier research on sources of self-efficacy (Lent, Lopez, & Bieschke, 1991; Usher & Pajares, 2009) to the information technology domain. The principal investigator examined how Bandura's (1977) sources of self-efficacy information---mastery experience, vicarious experience, verbal persuasion, and physiological states---shape computer self-efficacy beliefs and influence the decision to use or not use computers. The study took place at a mid-sized Historically Black College or University in the South. A convenience sample of 105 undergraduates was drawn from students enrolled in multiple sections of two introductory computer courses. There were 67 females and 38 males. This research was a correlational study of the following variables: sources of computer self-efficacy, general computer self-efficacy, outcome expectations, computer anxiety, and intention to use computers. The principal investigator administered a survey questionnaire containing 52 Likert items to measure the major study variables. Additionally, the survey instrument collected demographic variables such as gender, age, race, intended major, classification, technology use, technology adoption category, and whether the student owns a computer. The results reveal the following: (1) Mastery experience and verbal persuasion had statistically significant relationships to general computer self-efficacy, while vicarious experience and physiological states had non-significant relationships. Mastery experience had the strongest correlation to general computer self-efficacy. (2) All of the sources of computer self-efficacy had statistically significant relationships to personal outcome expectations. Vicarious experience had the strongest correlation to personal outcome expectations. (3) All of the sources of self-efficacy had statistically significant relationships to performance outcome expectations. Vicarious experience had the strongest correlation to performance outcome expectations. (4) Mastery experience and physiological states had statistically significant relationships to computer anxiety, while vicarious experience and verbal persuasion had non-significant relationships. Physiological states had the strongest correlation to computer anxiety. (5) Mastery experience, vicarious experience, and physiological states had statistically significant relationships to intention to use computers, while verbal persuasion had a non-significant relationship. Mastery experience had the strongest correlation to intention to use computers. Gender-related findings indicate that females reported higher average mastery experience, vicarious experience, physiological states, and intention to use computers than males. Females reported lower average general computer self-efficacy, computer anxiety, verbal persuasion, personal outcome expectations, and performance outcome expectations than males. The results of this study can be used to develop strategies for increasing general computer self-efficacy, outcome expectations, and intention to use computers. The results can also be used to develop strategies for reducing computer anxiety.

  18. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
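
    The simplest of the surveyed metamodels, a second-order polynomial response surface fitted by least squares to a small designed set of analysis runs, can be sketched as follows. The "expensive analysis" here is a made-up stand-in; kriging, neural networks and the other reviewed techniques are not shown.

    ```python
    import numpy as np

    def expensive_analysis(x1, x2):
        # Stand-in for the expensive computer analysis code.
        return 4 + 2 * x1 - x2 + 0.5 * x1 * x2 + x1**2

    def design_matrix(x1, x2):
        # Full quadratic basis in two design variables.
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

    # Small factorial design of experiments over the two design variables.
    grid = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
    x1, x2 = grid[:, 0], grid[:, 1]
    y = expensive_analysis(x1, x2)

    beta, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
    print("fitted coefficients:", np.round(beta, 3))
    print("prediction at (0.5, -0.5):",
          (design_matrix(np.array([0.5]), np.array([-0.5])) @ beta)[0])
    ```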

  19. Computationally efficient algorithms for real-time attitude estimation

    NASA Technical Reports Server (NTRS)

    Pringle, Steven R.

    1993-01-01

    For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden which can be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
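
    The abstract does not detail the Delta Star algorithm, but a common example of a low-complexity, suboptimal alternative to a Kalman filter is a fixed-gain complementary filter that blends an integrated (biased, noisy) gyro rate with a noisy absolute attitude reference. A single-axis sketch with illustrative numbers is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    dt, n = 0.1, 600
    true_angle = 0.2 * np.sin(0.05 * np.arange(n) * dt)                           # rad
    gyro = np.gradient(true_angle, dt) + 0.002 + 0.001 * rng.standard_normal(n)   # biased, noisy rate
    ref = true_angle + 0.02 * rng.standard_normal(n)                              # noisy absolute reference

    alpha, est = 0.98, 0.0
    errors = []
    for k in range(n):
        # High-pass the integrated gyro, low-pass the reference, with a fixed gain.
        est = alpha * (est + gyro[k] * dt) + (1 - alpha) * ref[k]
        errors.append(est - true_angle[k])
    print(f"rms attitude error: {np.degrees(np.sqrt(np.mean(np.square(errors)))):.3f} deg")
    ```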

  20. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  1. A method of computer modelling the lithium-ion batteries aging process based on the experimental characteristics

    NASA Astrophysics Data System (ADS)

    Czerepicki, A.; Koniak, M.

    2017-06-01

    The paper presents a method of modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, which was built using battery operating characteristics obtained from experiments. This model was implemented in the form of a computer program using a database to store battery characteristics. The battery aging process is a new, extended functionality of the model. The simulation algorithm uses real measurements of battery capacity as a function of the number of battery charge and discharge cycles. The simulation allows incomplete charge or discharge cycles, which are characteristic of electrically powered transport, to be taken into account. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of selected means of transport.
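
    A hedged sketch of the bookkeeping described above: a measured capacity-versus-cycle characteristic is interpolated, and partial charge/discharge cycles are accumulated as equivalent full cycles. The capacity data, the load profile and the half-cycle convention are invented for illustration and are not the authors' model.

    ```python
    # Illustrative capacity-fade bookkeeping with partial (incomplete) cycles.
    import numpy as np

    # Hypothetical laboratory characteristic: capacity (Ah) after N full cycles.
    cycles_meas   = np.array([0, 500, 1000, 1500, 2000])
    capacity_meas = np.array([40.0, 38.5, 36.8, 34.6, 32.0])

    def capacity_after(equivalent_full_cycles):
        # Interpolate the measured aging characteristic.
        return np.interp(equivalent_full_cycles, cycles_meas, capacity_meas)

    # Load profile of partial cycles: each entry is the depth of one incomplete
    # charge or discharge as a fraction of a full swing.
    partial_cycles = [0.3, 0.5, 0.2, 0.8, 0.4] * 1000
    # One simple convention: a full cycle consists of one charge plus one discharge.
    equivalent_full = sum(partial_cycles) / 2.0

    print("equivalent full cycles:", equivalent_full)
    print("estimated remaining capacity [Ah]:", round(float(capacity_after(equivalent_full)), 2))
    ```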

  2. Computation of turbulent flow in a thin liquid layer of fluid involving a hydraulic jump

    NASA Technical Reports Server (NTRS)

    Rahman, M. M.; Faghri, A.; Hankey, W. L.

    1991-01-01

    Numerically computed flow fields and free surface height distributions are presented for the flow of a thin layer of liquid adjacent to a solid horizontal surface that encounters a hydraulic jump. Two kinds of flow configurations are considered: two-dimensional plane flow and axisymmetric radial flow. The computations used a boundary-fitted moving grid method with a k-epsilon model for the closure of turbulence. The free surface height was determined by an optimization procedure which minimized the error in the pressure distribution on the free surface. It was also checked against an approximate procedure involving integration of the governing equations and use of the MacCormack predictor-corrector method. The computed film height also compared reasonably well with previous experiments. A region of recirculating flow was found to be present adjacent to the solid boundary near the location of the jump, which was caused by a rapid deceleration of the flow.

  3. An Improved Computational Technique for Calculating Electromagnetic Forces and Power Absorptions Generated in Spherical and Deformed Body in Levitation Melting Devices

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation-melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and with the results of previous computational efforts for spherical samples, and the agreement has been very good. The treatment of problems involving deformed surfaces and the actual prediction of the deformed shape of the specimens breaks new ground and should be the major usefulness of the proposed method.

  4. Mind the Gap! A Journey towards Computational Toxicology.

    PubMed

    Mangiatordi, Giuseppe Felice; Alberga, Domenico; Altomare, Cosimo Damiano; Carotti, Angelo; Catto, Marco; Cellamare, Saverio; Gadaleta, Domenico; Lattanzi, Gianluca; Leonetti, Francesco; Pisani, Leonardo; Stefanachi, Angela; Trisciuzzi, Daniela; Nicolotti, Orazio

    2016-09-01

    Computational methods have advanced toxicology towards the development of target-specific models based on a clear cause-effect rationale. However, the predictive potential of these models presents strengths and weaknesses. On the good side, in silico models are valuable, cheap alternatives to in vitro and in vivo experiments. On the other hand, the uncritical use of in silico methods can mislead end-users with elusive results. The focus of this review is on the basic scientific and regulatory recommendations in the derivation and application of computational models. Attention is paid to examining the interplay between computational toxicology and drug discovery and development. Avoiding the easy temptation of an overoptimistic future, we report our view on what can, or cannot, realistically be done. Indeed, studies of safety/toxicity represent a key element of chemical prioritization programs carried out by chemical industries, and primarily by pharmaceutical companies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Ontology based heterogeneous materials database integration and semantic query

    NASA Astrophysics Data System (ADS)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of the materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent and are gradually becoming a hot topic of materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed to create a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be performed using SPARQL. In the experiments, two well-known first-principles computational databases, OQMD and the Materials Project, are used as the integration targets, which demonstrates the availability and effectiveness of our method.
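
    The following toy sketch (not the authors' system) shows the kind of semantic query the abstract describes, using rdflib: a few triples stand in for records integrated from different source databases, and a SPARQL query filters them. The namespace and property names are invented.

    ```python
    # Toy materials ontology queried with SPARQL via rdflib (illustrative only).
    from rdflib import Graph, Literal, Namespace, RDF

    MAT = Namespace("http://example.org/materials#")   # hypothetical namespace
    g = Graph()
    g.bind("mat", MAT)

    # Two records that, in a real system, would come from different source databases.
    g.add((MAT.Si, RDF.type, MAT.Material))
    g.add((MAT.Si, MAT.bandGapEV, Literal(1.12)))
    g.add((MAT.GaAs, RDF.type, MAT.Material))
    g.add((MAT.GaAs, MAT.bandGapEV, Literal(1.42)))

    # Semantic query: all materials with a band gap above 1.2 eV.
    results = g.query("""
        PREFIX mat: <http://example.org/materials#>
        SELECT ?m ?gap WHERE {
            ?m a mat:Material ;
               mat:bandGapEV ?gap .
            FILTER (?gap > 1.2)
        }
    """)
    for row in results:
        print(row.m, float(row.gap))
    ```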

  6. Optimization analysis of thermal management system for electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong

    2018-04-01

    Temperature rise in an electric vehicle battery pack can affect the power battery system's cycle life, charging capability, power, energy, safety and reliability. Computational Fluid Dynamics simulation and experiments on the charging and discharging process of the battery pack were carried out for the thermal management system of the battery pack under continuous charging. The simulation results and the experimental data were used to verify the rationality of the Computational Fluid Dynamics calculation model. In view of the large temperature difference across the battery module in a high-temperature environment, three optimization methods for the existing thermal management system of the battery pack were put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack and reducing the fan opening temperature threshold. The feasibility of the optimization methods is demonstrated by simulation and experiment on the thermal management system of the optimized battery pack.

  7. Analysis of STM images with pure and CO-functionalized tips: A first-principles and experimental study

    NASA Astrophysics Data System (ADS)

    Gustafsson, Alexander; Okabayashi, Norio; Peronio, Angelo; Giessibl, Franz J.; Paulsson, Magnus

    2017-08-01

    We describe a first-principles method to calculate scanning tunneling microscopy (STM) images, and compare the results to well-characterized experiments combining STM with atomic force microscopy (AFM). The theory is based on density functional theory with a localized basis set, where the wave functions in the vacuum gap are computed by propagating the localized-basis wave functions into the gap using a real-space grid. Constant-height STM images are computed using Bardeen's approximation method, including averaging over the reciprocal space. We consider copper adatoms and single CO molecules adsorbed on Cu(111), scanned with a single-atom copper tip with and without CO functionalization. The calculated images agree with state-of-the-art experiments, where the atomic structure of the tip apex is determined by AFM. The comparison further allows for detailed interpretation of the STM images.

  8. Ensuring the validity of calculated subcritical limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.

  9. Bifocal computational near eye light field displays and Structure parameters determination scheme for bifocal computational display.

    PubMed

    Liu, Mali; Lu, Chihao; Li, Haifeng; Liu, Xu

    2018-02-19

    We propose a bifocal computational near-eye light field display (bifocal computational display) and a structure parameters determination scheme (SPDS) for the bifocal computational display that achieve greater depth of field (DOF), high resolution, accommodation and a compact form factor. Using a liquid varifocal lens, two single-focal computational light fields are superimposed to reconstruct a virtual object's light field by time multiplexing, avoiding the limitation of a high refresh rate. By minimizing the deviation between the reconstructed light field and the original light field, we propose a determination framework for the structure parameters of the bifocal computational light field display. When applied with different objectives, the SPDS can achieve a high average resolution or a uniform resolution display over the scene depth range. To analyze the advantages and limitations of our proposed method, we have conducted simulations and constructed a simple prototype which comprises a liquid varifocal lens, dual-layer LCDs and a uniform backlight. The results of simulations and experiments with our method show that the proposed system achieves the expected performance well. Owing to this performance, we expect the bifocal computational display and SPDS to contribute to daily-use and commercial virtual reality displays.

  10. Development of display design and command usage guidelines for Spacelab experiment computer applications

    NASA Technical Reports Server (NTRS)

    Dodson, D. W.; Shields, N. L., Jr.

    1979-01-01

    Individual Spacelab experiments are responsible for developing their CRT display formats and interactive command scenarios for payload crew monitoring and control of experiment operations via the Spacelab Data Display System (DDS). In order to enhance crew training and flight operations, it was important to establish some standardization of the crew/experiment interface among different experiments by providing standard methods and techniques for data presentation and experiment commanding via the DDS. In order to establish optimum usage guidelines for the Spacelab DDS, the capabilities and limitations of the hardware and Experiment Computer Operating System design had to be considered. Since the operating system software and hardware design had already been established, the Display and Command Usage Guidelines were constrained to the capabilities of the existing system design. Empirical evaluations were conducted on a DDS simulator to determine optimum operator/system interface utilization of the system capabilities. Display parameters such as information location, display density, data organization, status presentation and dynamic update effects were evaluated in terms of response times and error rates.

  11. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
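
    As a hedged illustration of how such sensitivities feed an uncertainty analysis, the sketch below applies the standard first-order "sandwich" propagation of a correlated density covariance through a sensitivity vector; the numerical values are invented and are not from the paper.

    ```python
    # First-order ("sandwich") uncertainty propagation: once the relative
    # sensitivities S_i = (dk/k)/(dN_i/N_i) of k-eff to isotope densities are
    # known, a covariance matrix for the densities gives the variance of k-eff.
    # All numbers below are invented for illustration.
    import numpy as np

    S = np.array([0.35, -0.10, 0.02])          # sensitivities to three isotope densities

    rel_std = np.array([0.01, 0.02, 0.05])     # 1-sigma relative uncertainties
    corr = np.array([[1.0, 0.8, 0.0],          # correlations, e.g. from a shared assay
                     [0.8, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
    cov = np.outer(rel_std, rel_std) * corr

    var_k = S @ cov @ S                        # first-order relative variance of k-eff
    print("relative 1-sigma uncertainty in k-eff: %.4f" % np.sqrt(var_k))
    ```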

  12. Computation of the sound generated by isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Hussaini, M. Y.

    1993-01-01

    The acoustic radiation from isotropic turbulence is computed numerically. A hybrid direct numerical simulation approach which combines direct numerical simulation (DNS) of the turbulent flow with the Lighthill acoustic analogy is utilized. It is demonstrated that the hybrid DNS method is a feasible approach to the computation of sound generated by turbulent flows. The acoustic efficiency in the simulation of isotropic turbulence appears to be substantially less than that in subsonic jet experiments. The dominant frequency of the computed acoustic pressure is found to be somewhat larger than the dominant frequency of the energy-containing scales of motion. The acoustic power in the simulations is proportional to epsilon*M_t^5, where epsilon is the turbulent dissipation rate and M_t is the turbulent Mach number. This is in agreement with the analytical result of Proudman (1952), but the constant of proportionality is smaller than the analytical result. Two different methods of computing the acoustic power from the DNS data bases yielded consistent results.

  13. Computational techniques to enable visualizing shapes of objects of extra spatial dimensions

    NASA Astrophysics Data System (ADS)

    Black, Don Vaughn, II

    Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three dimensional observers. Intuition relies on experience gained in a three dimensional environment. Gaining experience with virtual four dimensional objects and virtual three manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. In order to enable such a capability for ourselves, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. A technology is described in this dissertation to convert a representation of higher dimensional models into a format that may be displayed in realtime on graphics cards available on many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four dimensional objects on the desktop computer. The ultimate goal has been to provide the user a tangible and memorable experience with mathematical models of four dimensional objects such that the user can see the model from any user selected vantage point. By use of a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view or Aspect is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position to extract or "pluck" an embedded 3D slice or "aspect" from the embedding four-space. This plucked 3D aspect can be viewed from all angles via a conventional 3D viewer using three multiple POV viewports, and optionally exported to a third party CAD viewer for further manipulation. Plucking and Manipulating the Aspect provides a tangible experience for the end-user in the same manner as any 3D Computer Aided Design viewing and manipulation tool does for the engineer or a 3D video game provides for the nascent student.

  14. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.

  15. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the features of applying an algorithm that uses decomposition methods for solving the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. The application of decomposition reduces the volume of calculations, in particular thanks to the emerging possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a known data set for the binary classification problem.
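
    A minimal sketch of the decomposition idea, not the authors' algorithm: the linear-SVM dual is optimized over a working set of a single example at a time, so each step involves only one training point. The data and parameters are invented.

    ```python
    # Single-coordinate working-set (decomposition) solver for the linear-SVM dual.
    import numpy as np

    def linear_svm_decomposition(X, y, C=1.0, epochs=20):
        n, d = X.shape
        alpha = np.zeros(n)
        w = np.zeros(d)
        Qii = np.einsum("ij,ij->i", X, X)           # diagonal of the Gram-like matrix
        for _ in range(epochs):
            for i in range(n):                      # working set = one example
                if Qii[i] == 0.0:
                    continue
                grad = y[i] * (w @ X[i]) - 1.0      # partial derivative of the dual
                new_ai = np.clip(alpha[i] - grad / Qii[i], 0.0, C)
                w += (new_ai - alpha[i]) * y[i] * X[i]
                alpha[i] = new_ai
        return w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)  # linearly separable toy labels
    w = linear_svm_decomposition(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
    ```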

  16. Streamline similarity method for flow distributions and shock losses at the impeller inlet of the centrifugal pump

    NASA Astrophysics Data System (ADS)

    Zhang, Zh.

    2018-02-01

    An analytical method is presented which enables the non-uniform velocity and pressure distributions at the impeller inlet of a pump to be accurately computed. The analyses are based on the potential flow theory and the geometrical similarity of the streamline distribution along the leading edge of the impeller blades. The method is thus called the streamline similarity method (SSM). The obtained geometrical form of the flow distribution is then simply described by the geometrical variable G(s) and the first structural constant G_I. As clearly demonstrated and also validated by experiments, both the flow velocity and the pressure distributions at the impeller inlet are usually highly non-uniform. This knowledge is indispensable for impeller blade designs to fulfill the shockless inlet flow condition. By introducing the second structural constant G_II, the paper also presents a simple and accurate computation of the shock loss which occurs at the impeller inlet. The introduction of the two structural constants contributes immensely to the enhancement of the computational accuracy. As further indicated, all computations presented in this paper can also be applied to the non-uniform exit flow out of an impeller of a Francis turbine for accurately computing the related mean values.

  17. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and manual gold standard. We conclude that this fully automatic ICA-based method shows an excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.

  18. Determination of two-stroke engine exhaust noise by the method of characteristics

    NASA Technical Reports Server (NTRS)

    Jones, A. D.; Brown, G. L.

    1981-01-01

    A computational technique was developed for the method of characteristics solution of a one-dimensional flow in a duct as applied to the wave action in an engine exhaust system. By using the method, it was possible to compute the unsteady flow in both straight pipe and tuned expansion chamber exhaust systems as matched to the flow from the cylinder of a small two-stroke engine. The radiated exhaust noise was then determined by assuming monopole radiation from the tailpipe outlet. Very good agreement with experiment on an operating engine was achieved in the calculation of both the third-octave radiated noise and the associated pressure cycles at several locations in the different exhaust systems. Of particular interest is the significance of nonlinear behavior, which results in wave steepening and shock wave formation. The method computes the precise paths on the x-t plane of a finite number of C+, C- and P characteristics, thereby obtaining high accuracy in determining the tailpipe outlet velocity and the radiated noise.

  20. Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.

    PubMed

    Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal

    2018-01-01

    Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged and nowadays, they are applied in many fields such as material sciences and drug discovery. With the increase of computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience providing a global view of two of the widely used enhanced molecular dynamics methods to study protein structure and dynamics through the description of their characteristics, limits and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) methods.

  1. Kinetic Monte Carlo and cellular particle dynamics simulations of multicellular systems

    NASA Astrophysics Data System (ADS)

    Flenner, Elijah; Janosi, Lorant; Barz, Bogdan; Neagu, Adrian; Forgacs, Gabor; Kosztin, Ioan

    2012-03-01

    Computer modeling of multicellular systems has been a valuable tool for interpreting and guiding in vitro experiments relevant to embryonic morphogenesis, tumor growth, angiogenesis and, lately, structure formation following the printing of cell aggregates as bioink particles. Here we formulate two computer simulation methods: (1) a kinetic Monte Carlo (KMC) and (2) a cellular particle dynamics (CPD) method, which are capable of describing and predicting the shape evolution in time of three-dimensional multicellular systems during their biomechanical relaxation. Our work is motivated by the need of developing quantitative methods for optimizing postprinting structure formation in bioprinting-assisted tissue engineering. The KMC and CPD model parameters are determined and calibrated by using an original computational-theoretical-experimental framework applied to the fusion of two spherical cell aggregates. The two methods are used to predict the (1) formation of a toroidal structure through fusion of spherical aggregates and (2) cell sorting within an aggregate formed by two types of cells with different adhesivities.

  2. Prediction of electronic structure of organic radicaloid anions using efficient, economical multireference gradient approach.

    PubMed

    Chattopadhyay, Sudip; Chaudhuri, Rajat K; Freed, Karl F

    2011-04-28

    The improved virtual orbital-complete active space configuration interaction (IVO-CASCI) method enables an economical and reasonably accurate treatment of static correlation in systems with significant multireference character, even when using a moderate basis set. This IVO-CASCI method supplants the computationally more demanding complete active space self-consistent field (CASSCF) method by producing comparable accuracy with diminished computational effort because the IVO-CASCI approach does not require additional iterations beyond an initial SCF calculation, nor does it encounter convergence difficulties or multiple solutions that may be found in CASSCF calculations. Our IVO-CASCI analytical gradient approach is applied to compute the equilibrium geometry for the ground and lowest excited state(s) of the theoretically very challenging 2,6-pyridyne, 1,2,3-tridehydrobenzene and 1,3,5-tridehydrobenzene anionic systems for which experiments are lacking, accurate quantum calculations are almost completely absent, and commonly used calculations based on single reference configurations fail to provide reasonable results. Hence, the computational complexity provides an excellent test for the efficacy of multireference methods. The present work clearly illustrates that the IVO-CASCI analytical gradient method provides a good description of the complicated electronic quasi-degeneracies during the geometry optimization process for the radicaloid anions. The IVO-CASCI treatment produces almost identical geometries as the CASSCF calculations (performed for this study) at a fraction of the computational labor. Adiabatic energy gaps to low lying excited states likewise emerge from the IVO-CASCI and CASSCF methods as very similar. We also provide harmonic vibrational frequencies to demonstrate the stability of the computed geometries.

  3. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; scalable numerical algorithms for reliability, verifications and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  4. Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment

    NASA Technical Reports Server (NTRS)

    Hancock, Thomas M., III

    1999-01-01

    This paper details the scientific objectives, experiment design, data collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form a soft-event upset can cause software exceptions, unexpected events, spacecraft safing (ending data collection) or corrupted fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.
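
    Purely as an illustration of the detection idea (not the SCMIT flight software), the sketch below XORs a post-flight memory dump against the known pre-flight test pattern and counts flipped bits; the pattern and the injected upset are invented.

    ```python
    # Detecting soft-event upsets by comparing a memory dump to a known pattern.
    def count_bit_flips(before: bytes, after: bytes) -> int:
        assert len(before) == len(after)
        # XOR each byte pair and count the set bits (each set bit is one flip).
        return sum(bin(a ^ b).count("1") for a, b in zip(before, after))

    pre_flight  = bytes([0xAA] * 1024)       # known test pattern written to memory
    post_flight = bytearray(pre_flight)
    post_flight[17] ^= 0x04                  # simulate one upset bit

    print("soft-event upsets detected:", count_bit_flips(pre_flight, bytes(post_flight)))
    ```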

  5. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  6. Storyboarding: A Method for Bootstrapping the Design of Computer-Based Educational Tasks

    ERIC Educational Resources Information Center

    Jones, Ian

    2008-01-01

    There has been a recent call for the use of more systematic thought experiments when investigating learning. This paper presents a storyboarding method for capturing and sharing initial ideas and their evolution in the design of a mathematics learning task. The storyboards produced can be considered as "virtual data" created by thought experiments…

  7. Using the Metropolis Algorithm to Calculate Thermodynamic Quantities: An Undergraduate Computational Experiment

    ERIC Educational Resources Information Center

    Beddard, Godfrey S.

    2011-01-01

    Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
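
    A minimal sketch in the spirit of the exercise, though not the paper's own code: Metropolis sampling of a classical one-dimensional harmonic oscillator, with the average potential energy checked against the analytical value kT/2 (with k_B = 1). All parameters are arbitrary.

    ```python
    # Metropolis Monte Carlo for a classical 1D harmonic oscillator at temperature T.
    import numpy as np

    rng = np.random.default_rng(0)
    k_spring, T = 1.0, 2.0
    beta = 1.0 / T
    energy = lambda x: 0.5 * k_spring * x * x

    x, samples = 0.0, []
    for step in range(200000):
        x_trial = x + rng.uniform(-1.0, 1.0)                  # propose a move
        if rng.random() < np.exp(-beta * (energy(x_trial) - energy(x))):
            x = x_trial                                       # accept with the Metropolis rule
        if step > 10000:                                      # discard equilibration steps
            samples.append(energy(x))

    print("<U> from Metropolis:", round(float(np.mean(samples)), 3), " expected kT/2 =", T / 2)
    ```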

  8. Parallel high-precision orbit propagation using the modified Picard-Chebyshev method

    NASA Astrophysics Data System (ADS)

    Koblick, Darin C.

    2012-03-01

    The modified Picard-Chebyshev method, when run in parallel, is thought to be more accurate and faster than the most efficient sequential numerical integration techniques when applied to orbit propagation problems. Previous experiments have shown that the modified Picard-Chebyshev method can have up to a one-order-of-magnitude speedup over the 12th-order Runge-Kutta-Nystrom method. For this study, an evaluation of the accuracy and computational time of the modified Picard-Chebyshev method, using the Java Astrodynamics Toolkit high-precision force model, is conducted to assess its runtime performance. Simulation results of the modified Picard-Chebyshev method, implemented in MATLAB and the MATLAB Parallel Computing Toolbox, are compared against the most efficient first- and second-order Ordinary Differential Equation (ODE) solvers. A total of six processors was used to assess the runtime performance of the modified Picard-Chebyshev method. It was found that for all orbit propagation test cases, where the gravity model was simulated to be of higher degree and order (above 225, to increase computational overhead), the modified Picard-Chebyshev method was faster, by as much as a factor of two, than the other ODE solvers that were tested.
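
    A hedged sketch of the underlying Picard-Chebyshev idea, far simpler than the orbit-propagation implementation discussed above: Picard iterates are represented by Chebyshev fits at Chebyshev nodes, here for the test equation dx/dt = -x with x(-1) = 1. The node count, polynomial degree and iteration count are arbitrary.

    ```python
    # Basic Picard iteration with a Chebyshev polynomial representation.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    nodes = C.chebpts1(32)                 # Chebyshev sample points on [-1, 1]
    x0 = 1.0
    x_vals = np.full_like(nodes, x0)       # initial guess: constant solution

    for _ in range(30):                    # Picard iterations
        f_vals = -x_vals                                  # right-hand side f(x) = -x at the nodes
        f_cheb = C.chebfit(nodes, f_vals, deg=20)         # Chebyshev representation of f
        integral = C.chebint(f_cheb, lbnd=-1.0)           # antiderivative vanishing at t = -1
        x_vals = x0 + C.chebval(nodes, integral)          # next Picard iterate at the nodes

    exact = np.exp(-(nodes + 1.0))
    print("max error vs exp(-(t+1)):", np.max(np.abs(x_vals - exact)))
    ```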

  9. Metabolic Flux Analysis in Isotope Labeling Experiments Using the Adjoint Approach.

    PubMed

    Mottelet, Stephane; Gaullier, Gil; Sadaka, Georges

    2017-01-01

    Comprehension of metabolic pathways is considerably enhanced by metabolic flux analysis (MFA-ILE) in isotope labeling experiments. The balance equations are given by hundreds of algebraic (stationary MFA) or ordinary differential equations (nonstationary MFA), and reducing the number of operations is therefore a crucial part of reducing the computation cost. The main bottleneck for deterministic algorithms is the computation of derivatives, particularly for nonstationary MFA. In this article, we explain how the overall identification process may be speeded up by using the adjoint approach to compute the gradient of the residual sum of squares. The proposed approach shows significant improvements in terms of complexity and computation time when it is compared with the usual (direct) approach. Numerical results are obtained for the central metabolic pathways of Escherichia coli and are validated against reference software in the stationary case. The methods and algorithms described in this paper are included in the sysmetab software package distributed under an Open Source license at http://forge.scilab.org/index.php/p/sysmetab/.

  10. The electrodynamic and hydrodynamic phenomena in magnetically-levitated molten droplets. I - Steady state behavior

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Li, Benqiang; Szekely, Julian

    1992-01-01

    A mathematical formulation is given and computed results are presented describing the behavior of electromagnetically levitated metal droplets under the conditions of microgravity. In the formulation the electromagnetic force field is calculated using a modification of the volume integral method, and these results are then combined with the FIDAP code to calculate the steady-state melt velocities. The specific computational results are presented for the conditions corresponding to the planned IML-2 Space Shuttle experiment, using the TEMPUS device, which has separate 'heating' and 'positioning' coils. While the computed results are necessarily specific to the input conditions, some general conclusions may be drawn from this work. These include the fact that for the planned TEMPUS experiments the positioning coils will produce only a weak melt circulation, while the heating coils are likely to produce a mildly turbulent recirculating flow pattern within the samples. The computed results also allow us to assess the effect of sample size, material properties and the applied current on these phenomena.

  11. PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.

    PubMed

    MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S

    2005-06-01

    Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.

  12. Ultrasonic Phased Array Inspection Experiments and Simulations for AN Isogrid Structural Element with Cracks

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Tokars, R. P.; Martin, R. E.; Rauser, R. W.; Aldrin, J. C.; Schumacher, E. J.

    2010-02-01

    In this investigation, a T-shaped aluminum alloy isogrid stiffener element used in aerospace applications was inspected with ultrasonic phased array methods. The isogrid stiffener element had various crack configurations emanating from bolt holes. Computational simulation methods were used to mimic the experiments in order to help understand experimental results. The results of this study indicate that it is at least partly feasible to interrogate this type of geometry with the given flaw configurations using phased array ultrasonics. The simulation methods were critical in helping explain the experimental results and, with some limitation, can be used to predict inspection results.

  13. On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
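
    As a small, self-contained illustration of one of the metamodeling techniques named above (kriging), the sketch below fits a Gaussian-process surrogate to a few runs of a stand-in deterministic analysis code using scikit-learn; the function, kernel settings and design points are invented.

    ```python
    # Kriging-style (Gaussian process) metamodel of a deterministic analysis code.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def analysis_code(x):
        # Placeholder for an expensive deterministic simulation.
        return np.sin(2.0 * np.pi * x).ravel()

    X_train = np.linspace(0.0, 1.0, 8).reshape(-1, 1)   # small design of experiments
    y_train = analysis_code(X_train)

    # Deterministic code output, so essentially no noise term in the kriging model.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-10)
    gp.fit(X_train, y_train)

    X_new = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
    y_pred, y_std = gp.predict(X_new, return_std=True)
    print("max |error| of kriging surrogate:", np.max(np.abs(y_pred - analysis_code(X_new))))
    ```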

  14. Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals

    NASA Technical Reports Server (NTRS)

    Dempsey, Brian Paul

    1997-01-01

    Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%. The results show that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 °C, i.e. the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data is presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.

  15. Design and Computational/Experimental Analysis of Low Sonic Boom Configurations

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.

    1999-01-01

    Recent studies have shown that inviscid CFD codes combined with a planar extrapolation method give accurate sonic boom pressure signatures at distances greater than one body length from supersonic configurations if either adapted grids swept at the approximate Mach angle or very dense non-adapted grids are used. The validation of CFD for computing sonic boom pressure signatures provided the confidence needed to undertake the design of new supersonic transport configurations with low sonic boom characteristics. An aircraft synthesis code in combination with CFD and an extrapolation method were used to close the design. The principal configuration of this study is designated LBWT (Low Boom Wing Tail) and has a highly swept cranked arrow wing with conventional tails, and was designed to accommodate either 3 or 4 engines. The complete configuration including nacelles and boundary layer diverters was evaluated using the AIRPLANE code. This computer program solves the Euler equations on an unstructured tetrahedral mesh. Computations and wind tunnel data for the LBWT and two other low boom configurations designed at NASA Ames Research Center are presented. The two additional configurations are included to provide a basis for comparing the performance and sonic boom level of the LBWT with contemporary low boom designs and to give a broader experiment/CFD correlation study. The computational pressure signatures for the three configurations are contrasted with on-ground-track near-field experimental data from the NASA Ames 9x7 Foot Supersonic Wind Tunnel. Computed pressure signatures for the LBWT are also compared with experiment at approximately 15 degrees off ground track.

  16. A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.

    PubMed

    Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan

    2015-06-01

    Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as an objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed-form expression exists for the computation of the eigenvalues of a matrix larger than 4 by 4. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
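
    A hedged sketch of the reformulation described above, on an invented two-parameter toy model (static weights on candidate sampling times for y = a + b*t, not a dynamic bioprocess): the smallest eigenvalue of the Fisher information matrix is maximized by introducing an auxiliary variable s and imposing definiteness of FIM(w) - s*I through Sylvester-style leading principal minors as nonlinear constraints.

    ```python
    # E-optimal design as max of an auxiliary variable under minor constraints.
    import numpy as np
    from scipy.optimize import minimize

    t_candidates = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    X = np.column_stack([np.ones_like(t_candidates), t_candidates])   # sensitivities dy/d(a,b)

    def fim(w):
        return X.T @ (w[:, None] * X)               # sum_i w_i x_i x_i^T

    def objective(z):                               # z = [w_1..w_m, s]; maximize s
        return -z[-1]

    def minor1(z):
        M = fim(z[:-1]) - z[-1] * np.eye(2)
        return M[0, 0]                              # 1st leading principal minor >= 0

    def minor2(z):
        M = fim(z[:-1]) - z[-1] * np.eye(2)
        return np.linalg.det(M)                     # 2nd leading principal minor >= 0

    cons = [{"type": "eq",   "fun": lambda z: np.sum(z[:-1]) - 1.0},
            {"type": "ineq", "fun": minor1},
            {"type": "ineq", "fun": minor2}]
    bounds = [(0.0, 1.0)] * len(t_candidates) + [(0.0, None)]
    z0 = np.append(np.full(len(t_candidates), 1.0 / len(t_candidates)), 0.01)

    res = minimize(objective, z0, method="SLSQP", bounds=bounds, constraints=cons)
    w_opt = res.x[:-1]
    print("design weights:", np.round(w_opt, 3))
    print("smallest FIM eigenvalue achieved:", round(float(np.linalg.eigvalsh(fim(w_opt))[0]), 4))
    ```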

  17. Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel

    2013-01-01

    This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.

  18. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.

  19. Computational Methods for MOF/Polymer Membranes.

    PubMed

    Erucar, Ilknur; Keskin, Seda

    2016-04-01

    Metal-organic framework (MOF)/polymer mixed matrix membranes (MMMs) have received significant interest in the last decade. MOFs are incorporated into polymers to make MMMs that exhibit improved gas permeability and selectivity compared with pure polymer membranes. The fundamental challenge in this area is to choose the appropriate MOF/polymer combinations for a gas separation of interest. Even if a single polymer is considered, there are thousands of MOFs that could potentially be used as fillers in MMMs. As a result, there has been a large demand for computational studies that can accurately predict the gas separation performance of MOF/polymer MMMs prior to experiments. We have developed computational approaches to assess gas separation potentials of MOF/polymer MMMs and used them to identify the most promising MOF/polymer pairs. In this Personal Account, we aim to provide a critical overview of current computational methods for modeling MOF/polymer MMMs. We give our perspective on the background, successes, and failures that led to developments in this area and discuss the opportunities and challenges of using computational methods for MOF/polymer MMMs. © 2016 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Binary Solid-Liquid Phase Equilibria

    ERIC Educational Resources Information Center

    Ellison, Herbert R.

    1978-01-01

    Indicates some of the information that may be obtained from a binary solid-liquid phase equilibria experiment and a method to write a computer program that will plot an ideal phase diagram to which the experimental results may be compared. (Author/CP)

  1. Computationally assisted screening and design of cell-interactive peptides by a cell-based assay using peptide arrays and a fuzzy neural network algorithm.

    PubMed

    Kaga, Chiaki; Okochi, Mina; Tomita, Yasuyuki; Kato, Ryuji; Honda, Hiroyuki

    2008-03-01

    We developed a method of effective peptide screening that combines experiments and computational analysis. The method is based on the concept that screening efficiency can be enhanced from even limited data by use of a model derived from computational analysis that serves as a guide to screening and combining the model with subsequent repeated experiments. Here we focus on cell-adhesion peptides as a model application of this peptide-screening strategy. Cell-adhesion peptides were screened by use of a cell-based assay of a peptide array. Starting with the screening data obtained from a limited, random 5-mer library (643 sequences), a rule regarding structural characteristics of cell-adhesion peptides was extracted by fuzzy neural network (FNN) analysis. According to this rule, peptides with unfavored residues in certain positions that led to inefficient binding were eliminated from the random sequences. In the restricted, second random library (273 sequences), the yield of cell-adhesion peptides having an adhesion rate more than 1.5-fold to that of the basal array support was significantly high (31%) compared with the unrestricted random library (20%). In the restricted third library (50 sequences), the yield of cell-adhesion peptides increased to 84%. We conclude that a repeated cycle of experiments screening limited numbers of peptides can be assisted by the rule-extracting feature of FNN.

  2. Predicting gene regulatory networks of soybean nodulation from RNA-Seq transcriptome data.

    PubMed

    Zhu, Mingzhu; Dahmen, Jeremy L; Stacey, Gary; Cheng, Jianlin

    2013-09-22

    High-throughput RNA sequencing (RNA-Seq) is a revolutionary technique to study the transcriptome of a cell under various conditions at a systems level. Despite the wide application of RNA-Seq techniques to generate experimental data in the last few years, few computational methods are available to analyze this huge amount of transcription data. The computational methods for constructing gene regulatory networks from RNA-Seq expression data of hundreds or even thousands of genes are particularly lacking and urgently needed. We developed an automated bioinformatics method to predict gene regulatory networks from the quantitative expression values of differentially expressed genes based on RNA-Seq transcriptome data of a cell in different stages and conditions, integrating transcriptional, genomic and gene function data. We applied the method to the RNA-Seq transcriptome data generated for soybean root hair cells in three different development stages of nodulation after rhizobium infection. The method predicted a soybean nodulation-related gene regulatory network consisting of 10 regulatory modules common for all three stages, and 24, 49 and 70 modules separately for the first, second and third stage, each containing both a group of co-expressed genes and several transcription factors collaboratively controlling their expression under different conditions. 8 of 10 common regulatory modules were validated by at least two kinds of validations, such as independent DNA binding motif analysis, gene function enrichment test, and previous experimental data in the literature. We developed a computational method to reliably reconstruct gene regulatory networks from RNA-Seq transcriptome data. The method can generate valuable hypotheses for interpreting biological data and designing biological experiments such as ChIP-Seq, RNA interference, and yeast two hybrid experiments.

  3. A computational method for estimating the PCR duplication rate in DNA and RNA-seq experiments.

    PubMed

    Bansal, Vikas

    2017-03-14

    PCR amplification is an important step in the preparation of DNA sequencing libraries prior to high-throughput sequencing. PCR amplification introduces redundant reads in the sequence data and estimating the PCR duplication rate is important to assess the frequency of such reads. Existing computational methods do not distinguish PCR duplicates from "natural" read duplicates that represent independent DNA fragments and therefore, over-estimate the PCR duplication rate for DNA-seq and RNA-seq experiments. In this paper, we present a computational method to estimate the average PCR duplication rate of high-throughput sequence datasets that accounts for natural read duplicates by leveraging heterozygous variants in an individual genome. Analysis of simulated data and exome sequence data from the 1000 Genomes project demonstrated that our method can accurately estimate the PCR duplication rate on paired-end as well as single-end read datasets which contain a high proportion of natural read duplicates. Further, analysis of exome datasets prepared using the Nextera library preparation method indicated that 45-50% of read duplicates correspond to natural read duplicates likely due to fragmentation bias. Finally, analysis of RNA-seq datasets from individuals in the 1000 Genomes project demonstrated that 70-95% of read duplicates observed in such datasets correspond to natural duplicates sampled from genes with high expression and identified outlier samples with a 2-fold greater PCR duplication rate than other samples. The method described here is a useful tool for estimating the PCR duplication rate of high-throughput sequence datasets and for assessing the fraction of read duplicates that correspond to natural read duplicates. An implementation of the method is available at https://github.com/vibansal/PCRduplicates .
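
    The core idea, using heterozygous sites to separate PCR duplicates from natural duplicates, can be illustrated with a minimal sketch. This is not the PCRduplicates tool itself; the counting rule and the duplicate_groups input format are simplifications assumed here for illustration.

    ```python
    # Minimal sketch: among reads flagged as duplicates that cover a
    # heterozygous SNP, reads carrying a different allele than the "original"
    # read must come from distinct DNA fragments (natural duplicates) and are
    # therefore not counted as PCR duplicates.
    def estimate_pcr_duplication_rate(duplicate_groups, total_reads):
        """duplicate_groups: list of lists; each inner list holds the allele
        ('ref' or 'alt') observed at a covering heterozygous site for every
        read in one duplicate group (same mapping coordinates)."""
        pcr_dups = natural_dups = 0
        for alleles in duplicate_groups:
            # one read per group is treated as the original; the rest are duplicates
            for allele in alleles[1:]:
                if allele == alleles[0]:
                    pcr_dups += 1      # same allele: consistent with a PCR copy
                else:
                    natural_dups += 1  # different allele: independent fragment
        return pcr_dups / total_reads, natural_dups / total_reads
    ```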

  4. A New Soft Computing Method for K-Harmonic Means Clustering.

    PubMed

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
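
    For reference, the objective that KHM (and hence iSSO-KHM) minimizes can be written compactly; the sketch below assumes Euclidean distances and the usual distance exponent p.

    ```python
    # Sketch of the K-harmonic means objective: for each point, K divided by
    # the sum of inverse p-th powers of its distances to all K centroids,
    # summed over all points.
    import numpy as np

    def khm_objective(X, centroids, p=2.0):
        # X: (n, d) data matrix; centroids: (K, d)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        dists = np.maximum(dists, 1e-12)   # guard against zero distances
        K = centroids.shape[0]
        return float(np.sum(K / np.sum(1.0 / dists**p, axis=1)))
    ```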

  5. Laboratory Sequence in Computational Methods for Introductory Chemistry

    NASA Astrophysics Data System (ADS)

    Cody, Jason A.; Wiser, Dawn C.

    2003-07-01

    A four-exercise laboratory sequence for introductory chemistry integrating hands-on, student-centered experience with computer modeling has been designed and implemented. The progression builds from exploration of molecular shapes to intermolecular forces and the impact of those forces on chemical separations made with gas chromatography and distillation. The sequence ends with an exploration of molecular orbitals. The students use the computers as a tool; they build the molecules, submit the calculations, and interpret the results. Because of the construction of the sequence and its placement spanning the semester break, good laboratory notebook practices are reinforced and the continuity of course content and methods between semesters is emphasized. The inclusion of these techniques in the first year of chemistry has had a positive impact on student perceptions and student learning.

  6. Improved Measures of Integrated Information

    PubMed Central

    Tegmark, Max

    2016-01-01

    Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally unfeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented where they are each characterized by their choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options) and choice of measure for comparing probability distributions (7 options). When requiring the Φ-measures to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands. PMID:27870846
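
    As a toy illustration of one corner of this taxonomy, the sketch below factorizes a system across a single fixed bipartition, compares the joint distribution with the product of its marginals, and uses KL divergence as the comparison measure. This reduces to the mutual information between the two halves and is not the full Φ machinery discussed above.

    ```python
    # One of the simplest integration measures: KL divergence between the
    # joint distribution over a bipartition and the product of its marginals.
    import numpy as np

    def phi_kl(p_joint):
        """p_joint: 2-D array p(a, b) over the two halves of a bipartition."""
        p_a = p_joint.sum(axis=1, keepdims=True)
        p_b = p_joint.sum(axis=0, keepdims=True)
        p_prod = p_a * p_b
        mask = p_joint > 0
        return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / p_prod[mask])))
    ```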

  7. Experimental magic state distillation for fault-tolerant quantum computing.

    PubMed

    Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond

    2011-01-25

    Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.

  8. Gaussian polarizable-ion tight binding.

    PubMed

    Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P

    2016-10-14

    To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).

  9. Gaussian polarizable-ion tight binding

    NASA Astrophysics Data System (ADS)

    Boleininger, Max; Guilbert, Anne AY; Horsfield, Andrew P.

    2016-10-01

    To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).

  10. Reliably Discriminating Stock Structure with Genetic Markers:Mixture Models with Robust and Fast Computation.

    PubMed

    Foster, Scott D; Feutry, Pierre; Grewe, Peter M; Berry, Oliver; Hui, Francis K C; Davies, Campbell R

    2018-06-26

    Delineating naturally occurring and self-sustaining sub-populations (stocks) of a species is an important task, especially for species harvested from the wild. Despite its central importance to natural resource management, analytical methods used to delineate stocks are often, and increasingly, borrowed from superficially similar analytical tasks in human genetics, even though models specifically for stock identification have been previously developed. Unfortunately, the analytical tasks in resource management and human genetics are not identical: questions about humans are typically aimed at inferring ancestry (often referred to as 'admixture') rather than breeding stocks. In this article, we argue, and show through simulation experiments and an analysis of yellowfin tuna data, that ancestral analysis methods are not always appropriate for stock delineation. In this work, we advocate a variant of a previously introduced and simpler model that identifies stocks directly. We also highlight that the computational aspects of the analysis, irrespective of the model, are difficult. We introduce some alternative computational methods and quantitatively compare these methods to each other and to established methods. We also present a method for quantifying uncertainty in model parameters and in assignment probabilities. In doing so, we demonstrate that point estimates can be misleading. One of the computational strategies presented here, based on an expectation-maximisation algorithm with judiciously chosen starting values, is robust and has a modest computational cost. This article is protected by copyright. All rights reserved.

  11. Study on validation method for femur finite element model under multiple loading conditions

    NASA Astrophysics Data System (ADS)

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu

    2018-03-01

    Acquisition of accurate and reliable constitutive parameters for bio-tissue materials is beneficial for improving the biological fidelity of a Finite Element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, material parameter identification was transformed into a multi-response optimization problem. Finally, the simulation results successfully coincided with force-displacement curves obtained by numerous experiments. Thus, computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method was able to effectively reduce the computation time in the inverse identification of material parameters. Meanwhile, the material parameters obtained by the proposed method achieved higher accuracy.

  12. [Spiral CT angiography in practice].

    PubMed

    Pavcec, Zlatko; Zokalj, Ivan; Rumboldt, Zoran; Pal, Andrej; Saghir, Hussein; Ozretić, David; Latin, Branko; Perhoć, Zeljka; Marotti, Miljenko

    2005-01-01

    The incidence of vascular diseases and the development of new radiologic techniques over the last three decades have given strong impetus to the introduction of non-invasive vascular diagnostic methods. Thanks to the introduction of Doppler ultrasound and new types of computed tomography (CT) and magnetic resonance (MR) scanners, non-invasive vascular diagnostic methods are replacing conventional invasive (catheter) angiographic methods. Computed tomographic angiography (CTA) is a noninvasive vascular diagnostic method based on continuous scanning with a CT scanner during intravenous application of contrast material. CTA became feasible with the introduction of the spiral CT technique, whose characteristics are short imaging time and volumetric data acquisition. The main goal of this article, based on our experience, is to review the role of CTA, performed on a single-slice CT scanner, in the management of patients with vascular pathology.

  13. Investigation into discretization methods of the six-parameter Iwan model

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming; Feng, Jiaquan; Zhang, Dingguo

    2017-02-01

    Iwan model is widely applied for the purpose of describing nonlinear mechanisms of jointed structures. In this paper, parameter identification procedures of the six-parameter Iwan model based on joint experiments with different preload techniques are performed. Four kinds of discretization methods deduced from stiffness equation of the six-parameter Iwan model are provided, which can be used to discretize the integral-form Iwan model into a sum of finite Jenkins elements. In finite element simulation, the influences of discretization methods and numbers of Jenkins elements on computing accuracy are discussed. Simulation results indicate that a higher accuracy can be obtained with larger numbers of Jenkins elements. It is also shown that compared with other three kinds of discretization methods, the geometric series discretization based on stiffness provides the highest computing accuracy.
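
    A minimal sketch of the discretization idea: the Iwan model is approximated by Jenkins (spring plus Coulomb slider) elements in parallel, with slider strengths spaced as a geometric series, one of the discretization families compared above. The element count, stiffness, and strength values are arbitrary placeholders, and only monotonic loading from rest is shown.

    ```python
    # Sketch of discretizing an Iwan model into parallel Jenkins elements.
    import numpy as np

    def jenkins_force_monotonic(x, k, f_slip):
        # monotonic loading from rest: elastic until the slider yields
        return np.minimum(k * x, f_slip)

    def iwan_force_monotonic(x, n_elements=50, k_total=1e6, f_min=1.0, ratio=1.2):
        k_each = k_total / n_elements
        f_slips = f_min * ratio ** np.arange(n_elements)  # geometric spacing
        return sum(jenkins_force_monotonic(x, k_each, f) for f in f_slips)
    ```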

  14. Subsonic aerodynamic characteristics of interacting lifting surfaces with separated flow around sharp edges predicted by a vortex-lattice method

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.; Gloss, B. B.

    1975-01-01

    Because the potential flow suction along the leading and side edges of a planform can be used to determine both leading- and side-edge vortex lift, the present investigation was undertaken to apply the vortex-lattice method to computing side-edge suction force for isolated or interacting planforms. Although there is a small effect of bound vortex sweep on the computation of the side-edge suction force, the results obtained for a number of different isolated planforms produced acceptable agreement with results obtained from a method employing continuous induced-velocity distributions. By using the method outlined, better agreement between theory and experiment was noted for a wing in the presence of a canard than was previously obtained.

  15. Design and implementation of the one-step MSD adder of optical computer.

    PubMed

    Song, Kai; Yan, Liping

    2012-03-01

    On the basis of the symmetric encoding algorithm for the modified signed-digit (MSD) number system, a 7 × 7 truth table that can be realized with optical methods was developed. Based on this truth table, the optical path structures and circuit implementations of the one-step MSD adder of a ternary optical computer (TOC) were designed. Experiments show that the scheme is correct, feasible, and efficient. © 2012 Optical Society of America

  16. Improvement of Computer Software Quality through Software Automated Tools.

    DTIC Science & Technology

    1986-08-31

    requirement for increased emphasis on software quality assurance has led to the creation of various methods of verification and validation. Experience...result was a vast array of methods, systems, languages and automated tools to assist in the process. Given that the primary role of quality assurance is...Unfortunately, there is no single method, tool or technique that can ensure accurate, reliable and cost-effective software. Therefore, government and industry

  17. Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1983-01-01

    Experiences are discussed in modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.

  18. Comparison of meaningful learning characteristics in simulated nursing practice after traditional versus computer-based simulation method: a qualitative videography study.

    PubMed

    Poikela, Paula; Ruokamo, Heli; Teräs, Marianne

    2015-02-01

    Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods presented students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practice in two universities of applied sciences, in Northern Finland. The participants in parts II and I were 40 first year nursing students; 12 student volunteers continued to part III. Qualitative analysis method was used. The data were collected using video recordings and analyzed by videography. The students who used a computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students are able to receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. IETI – Isogeometric Tearing and Interconnecting

    PubMed Central

    Kleiss, Stefan K.; Pechstein, Clemens; Jüttler, Bert; Tomar, Satyendra

    2012-01-01

    Finite Element Tearing and Interconnecting (FETI) methods are a powerful approach to designing solvers for large-scale problems in computational mechanics. The numerical simulation problem is subdivided into a number of independent sub-problems, which are then coupled in appropriate ways. NURBS- (Non-Uniform Rational B-spline) based isogeometric analysis (IGA) applied to complex geometries requires to represent the computational domain as a collection of several NURBS geometries. Since there is a natural decomposition of the computational domain into several subdomains, NURBS-based IGA is particularly well suited for using FETI methods. This paper proposes the new IsogEometric Tearing and Interconnecting (IETI) method, which combines the advanced solver design of FETI with the exact geometry representation of IGA. We describe the IETI framework for two classes of simple model problems (Poisson and linearized elasticity) and discuss the coupling of the subdomains along interfaces (both for matching interfaces and for interfaces with T-joints, i.e. hanging nodes). Special attention is paid to the construction of a suitable preconditioner for the iterative linear solver used for the interface problem. We report several computational experiments to demonstrate the performance of the proposed IETI method. PMID:24511167

  20. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
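
    One simplified way to exploit multiple time steps, assuming uncorrelated (white) position noise of standard deviation sigma, is sketched below: the measured velocity second moment behaves as the true value plus 2*sigma^2/dt^2, so a linear fit against 1/dt^2 over several time steps recovers the noise-free moment as the intercept. This is an illustration of the general idea rather than the paper's exact estimator, and the function names are assumptions.

    ```python
    # Sketch: remove position-noise bias from velocity second moments by
    # fitting measurements taken at several inter-frame time steps.
    import numpy as np

    def denoised_velocity_variance(dts, measured_second_moments):
        x = 1.0 / np.asarray(dts) ** 2
        slope, intercept = np.polyfit(x, np.asarray(measured_second_moments), 1)
        noise_sigma = np.sqrt(max(slope, 0.0) / 2.0)
        return intercept, noise_sigma  # noise-free <u^2> and position-noise std
    ```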

  1. MR thermometry characterization of a hyperthermia ultrasound array designed using the k-space computational method

    PubMed Central

    Al-Bataineh, Osama M; Collins, Christopher M; Park, Eun-Joo; Lee, Hotaik; Smith, Nadine Barrie

    2006-01-01

    Background Ultrasound-induced hyperthermia is a useful adjuvant to radiation therapy in the treatment of prostate cancer. A uniform thermal dose (43°C for 30 minutes) is required within the targeted cancerous volume for effective therapy. This requires a specific ultrasound phased array design and an appropriate thermometry method. Inhomogeneous, acoustical, three-dimensional (3D) prostate models and economical computational methods provide necessary tools to predict the appropriate shape of hyperthermia phased arrays for better focusing. This research utilizes the k-space computational method and a 3D human prostate model to design an intracavitary ultrasound probe for hyperthermia treatment of prostate cancer. Evaluation of the probe includes ex vivo and in vivo controlled hyperthermia experiments using noninvasive magnetic resonance imaging (MRI) thermometry. Methods A 3D acoustical prostate model was created using photographic data from the Visible Human Project®. The k-space computational method was used on this coarse-grid, inhomogeneous tissue model to simulate the steady-state pressure wavefield of the designed phased array using the linear acoustic wave equation. To ensure the uniformity and spread of the pressure along the length of the array, and the focusing capability across the width of the array, the equally sized elements of the 4 × 20-element phased array were 1 × 14 mm. A probe was constructed according to the design in simulation using lead zirconate titanate (PZT-8) ceramic and a Delrin® plastic housing. Noninvasive MRI thermometry and a switching feedback controller were used to accomplish ex vivo and in vivo hyperthermia evaluations of the probe. Results Both exposimetry and k-space simulation results demonstrated acceptable agreement within 9%. With a desired temperature plateau of 43.0°C, ex vivo and in vivo controlled hyperthermia experiments showed that the MRI temperature at the steady state was 42.9 ± 0.38°C and 43.1 ± 0.80°C, respectively, for 20 minutes of heating. Conclusion Unlike conventional computational methods, the k-space method provides a powerful tool to predict the pressure wavefield in large-scale, 3D, inhomogeneous, coarse-grid tissue models. Noninvasive MRI thermometry supports the efficacy of this probe and the feedback controller in an in vivo hyperthermia treatment of canine prostate. PMID:17064421

  2. Measuring decision weights in recognition experiments with multiple response alternatives: comparing the correlation and multinomial-logistic-regression methods.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2012-11-01

    Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
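
    For concreteness, here is a minimal sketch of the basic two-category correlation estimate of decision weights; the m-alternative extension and the MLR alternative discussed above are not shown, and the input format is an assumption for illustration.

    ```python
    # Sketch of the reverse-correlation weight estimate: correlate each
    # stimulus component's trial-by-trial perturbation with the response.
    import numpy as np

    def correlation_weights(stimulus_perturbations, responses):
        """stimulus_perturbations: (n_trials, n_components) array;
        responses: (n_trials,) array, e.g. coded 0/1 for a two-alternative task."""
        X = np.asarray(stimulus_perturbations, dtype=float)
        r = np.asarray(responses, dtype=float)
        return np.array([np.corrcoef(X[:, j], r)[0, 1] for j in range(X.shape[1])])
    ```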

  3. Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Weaver, Jesse R.

    In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which provide an effective balance between the complete representation of the full complexity of uncertainty claims in their interaction, while satisfying the needs of both computational complexity and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.

  4. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg. T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, by assuming quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it will be shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as only 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
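
    The phase-conjugation step can be sketched in a few lines for a single frequency; the propagation of the virtual source through the CT-derived skull model (the k-space part) is assumed to have been done elsewhere and is not shown, and the function name is an assumption.

    ```python
    # Sketch: conjugate the phases of the field received at the array elements
    # from a virtual source at the target to obtain the transmit signals.
    import numpy as np

    def phase_conjugate_drive(received_field):
        """received_field: complex pressures recorded at the array elements after
        propagating a virtual point source at the target through the skull model
        (e.g. with a k-space solver). Returns unit-amplitude transmit signals."""
        phases = -np.angle(received_field)   # conjugate the received phase
        return np.exp(1j * phases)
    ```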

  5. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with observation data of crustal deformation collected at widely spread observation points, compared with estimations of slips only. Such an estimate can be formulated as a non-linear inverse problem for the viscosity (a material property) and an input force equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.

  6. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo methods, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
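
    A minimal sketch of a random-basis surrogate in the spirit of the method above: random Fourier features are fitted by ridge regression to previously evaluated log-densities, and the surrogate's analytic gradient can then be used inside HMC leapfrog steps. The feature count, bandwidth, and ridge strength are illustrative choices, not the paper's.

    ```python
    # Random-feature surrogate for a log-density and its gradient.
    import numpy as np

    class RandomFeatureSurrogate:
        def __init__(self, dim, n_features=200, bandwidth=1.0, ridge=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=1.0 / bandwidth, size=(n_features, dim))
            self.b = rng.uniform(0, 2 * np.pi, size=n_features)
            self.ridge = ridge
            self.coef = np.zeros(n_features)

        def _features(self, X):
            return np.cos(X @ self.W.T + self.b)

        def fit(self, X, log_densities):
            # ridge regression of evaluated log-densities onto the random basis
            Phi = self._features(np.atleast_2d(X))
            A = Phi.T @ Phi + self.ridge * np.eye(Phi.shape[1])
            self.coef = np.linalg.solve(A, Phi.T @ np.asarray(log_densities))

        def grad_log_density(self, x):
            # d/dx cos(w.x + b) = -sin(w.x + b) * w, summed with the fitted weights
            s = np.sin(self.W @ x + self.b)
            return -(self.coef * s) @ self.W
    ```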

  7. Where next for the reproducibility agenda in computational biology?

    PubMed

    Lewis, Joanna; Breeze, Charles E; Charlesworth, Jane; Maclaren, Oliver J; Cooper, Jonathan

    2016-07-15

    The concept of reproducibility is a foundation of the scientific method. With the arrival of fast and powerful computers over the last few decades, there has been an explosion of results based on complex computational analyses and simulations. The reproducibility of these results has been addressed mainly in terms of exact replicability or numerical equivalence, ignoring the wider issue of the reproducibility of conclusions through equivalent, extended or alternative methods. We use case studies from our own research experience to illustrate how concepts of reproducibility might be applied in computational biology. Several fields have developed 'minimum information' checklists to support the full reporting of computational simulations, analyses and results, and standardised data formats and model description languages can facilitate the use of multiple systems to address the same research question. We note the importance of defining the key features of a result to be reproduced, and the expected agreement between original and subsequent results. Dynamic, updatable tools for publishing methods and results are becoming increasingly common, but sometimes come at the cost of clear communication. In general, the reproducibility of computational research is improving but would benefit from additional resources and incentives. We conclude with a series of linked recommendations for improving reproducibility in computational biology through communication, policy, education and research practice. More reproducible research will lead to higher quality conclusions, deeper understanding and more valuable knowledge.

  8. Using sobol sequences for planning computer experiments

    NASA Astrophysics Data System (ADS)

    Statnikov, I. N.; Firsov, G. I.

    2017-12-01

    We discuss the use of the Planning LP-search (PLP-search) method for studying problems of multicriteria synthesis of dynamic systems. The method not only allows the parameter space to be surveyed within specified ranges of variation on the basis of simulation-model experiments, but also, through the special randomized nature of the planning of these experiments, makes it possible to apply a quantitative statistical evaluation of the influence of changes in the varied parameters and their pairwise combinations when analyzing the properties of the dynamic system.
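
    A minimal sketch of laying out such computer experiments with a Sobol sequence, here via SciPy's quasi-Monte Carlo module rather than the authors' own PLP-search implementation; the parameter ranges and point count are placeholders.

    ```python
    # Sketch: generate a space-filling design over the varied-parameter box.
    from scipy.stats import qmc

    def sobol_design(param_ranges, n_points=128, seed=0):
        """param_ranges: list of (low, high) tuples, one per varied parameter."""
        lows, highs = zip(*param_ranges)
        sampler = qmc.Sobol(d=len(param_ranges), scramble=True, seed=seed)
        unit_points = sampler.random(n_points)      # points in [0, 1)^d
        return qmc.scale(unit_points, lows, highs)  # map into the ranges

    # e.g. design = sobol_design([(0.1, 2.0), (10.0, 50.0), (0.0, 1.0)])
    ```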

  9. Progress on the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-12-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access and collaboration within experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including new job submission services, software and reference data distribution through CVMFS repositories, flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  10. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that can not really be created or controlled. Results impossible to measure can be computed.. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: Given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: Given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.

  11. How do particle physicists learn the programming concepts they need?

    NASA Astrophysics Data System (ADS)

    Kluth, S.; Pia, M. G.; Schoerner-Sadenius, T.; Steinbach, P.

    2015-12-01

    The ability to read, use and develop code efficiently and successfully is a key ingredient in modern particle physics. We report the experience of a training program, identified as “Advanced Programming Concepts”, that introduces software concepts, methods and techniques to work effectively on a daily basis in a HEP experiment or other programming intensive fields. This paper illustrates the principles, motivations and methods that shape the “Advanced Computing Concepts” training program, the knowledge base that it conveys, an analysis of the feedback received so far, and the integration of these concepts in the software development process of the experiments as well as its applicability to a wider audience.

  12. Robust and Imperceptible Watermarking of Video Streams for Low Power Devices

    NASA Astrophysics Data System (ADS)

    Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.

    With the advent of internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. With the greater business benefits, increased availability and other online business advantages, there is a major challenge of security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. The existing methods of watermarking are less robust and imperceptible and also the computational complexity of these methods does not suit low power devices. In this paper, we have proposed a new method to address the problem of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility as well as our method is computationally efficient than previous approaches in practice. Hence our method can easily be applied on low power devices.

  13. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in a GMM framework. However, this computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are processed sequentially, even though all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the C-GMM calculation by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions that contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is then implemented with a standard shared-memory application programming interface, i.e., Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
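
    The paper's implementation uses OpenMP in a compiled setting; purely as an illustration of the outer-loop strategy, the sketch below distributes candidate regularization parameters over a Python process pool, with a placeholder objective standing in for the real C-GMM criterion.

    ```python
    # Sketch of outer-loop parallelization: each worker evaluates the objective
    # for one candidate regularization parameter; results are reduced at the end.
    from concurrent.futures import ProcessPoolExecutor

    def cgmm_objective(reg_param):
        # placeholder for the real C-GMM objective at this regularization value
        return (reg_param - 0.3) ** 2

    def best_regularization(candidates, workers=8):
        with ProcessPoolExecutor(max_workers=workers) as pool:
            values = list(pool.map(cgmm_objective, candidates))
        return min(zip(values, candidates))[1]
    ```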

  14. Algorithms for the computation of solutions of the Ornstein-Zernike equation.

    PubMed

    Peplow, A T; Beardmore, R E; Bresme, F

    2006-10-01

    We introduce a robust and efficient methodology to solve the Ornstein-Zernike integral equation using the pseudoarc length (PAL) continuation method that reformulates the integral equation in an equivalent but nonstandard form. This enables the computation of solutions in regions where the compressibility experiences large changes or where the existence of multiple solutions and so-called branch points prevents Newton's method from converging. We illustrate the use of the algorithm with a difficult problem that arises in the numerical solution of integral equations, namely the evaluation of the so-called no-solution line of the Ornstein-Zernike hypernetted chain (HNC) integral equation for the Lennard-Jones potential. We are able to use the PAL algorithm to solve the integral equation along this line and to connect physical and nonphysical solution branches (both isotherms and isochores) where appropriate. We also show that PAL continuation can compute solutions within the no-solution region that cannot be computed when Newton and Picard methods are applied directly to the integral equation. While many solutions that we find are new, some correspond to states with negative compressibility and consequently are not physical.
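
    A generic sketch of pseudo-arclength continuation on a scalar toy problem f(u, lam) = 0 shows the role PAL plays above: the solution curve is parameterized by arclength so that turning and branch points, where Newton iteration in the physical parameter alone fails, can be traversed. The toy formulation and step sizes are assumptions, not the paper's discretized integral-equation setting.

    ```python
    # Generic pseudo-arclength continuation for a scalar equation f(u, lam) = 0.
    import numpy as np
    from scipy.optimize import fsolve

    def pal_continuation(f, u0, lam0, du, dlam, ds=0.05, steps=100):
        path = [(u0, lam0)]
        for _ in range(steps):
            u_pred, lam_pred = u0 + ds * du, lam0 + ds * dlam  # tangent predictor
            def augmented(z):
                u, lam = z
                # arclength constraint keeps the corrector step of length ds
                arc = du * (u - u0) + dlam * (lam - lam0) - ds
                return [f(u, lam), arc]
            u1, lam1 = fsolve(augmented, [u_pred, lam_pred])
            du, dlam = (u1 - u0) / ds, (lam1 - lam0) / ds      # update tangent
            u0, lam0 = u1, lam1
            path.append((u0, lam0))
        return path
    ```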

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koniges, A.E.; Craddock, G.G.; Schnack, D.D.

    The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion regarding the issues of dynamically adaptive gridding. There were three invited talks related to adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that uniformly disperses the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted-tokamak SOL modeling and for MHD simulation problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately on the energy databases.

  16. Structural variation discovery in the cancer genome using next generation sequencing: Computational solutions and perspectives

    PubMed Central

    Liu, Biao; Conroy, Jeffrey M.; Morrison, Carl D.; Odunsi, Adekunle O.; Qin, Maochun; Wei, Lei; Trump, Donald L.; Johnson, Candace S.; Liu, Song; Wang, Jianmin

    2015-01-01

    Somatic Structural Variations (SVs) are a complex collection of chromosomal mutations that could directly contribute to carcinogenesis. Next Generation Sequencing (NGS) technology has emerged as the primary means of interrogating the SVs of the cancer genome in recent investigations. Sophisticated computational methods are required to accurately identify the SV events and delineate their breakpoints from the massive amounts of reads generated by a NGS experiment. In this review, we provide an overview of current analytic tools used for SV detection in NGS-based cancer studies. We summarize the features of common SV groups and the primary types of NGS signatures that can be used in SV detection methods. We discuss the principles and key similarities and differences of existing computational programs and comment on unresolved issues related to this research field. The aim of this article is to provide a practical guide of relevant concepts, computational methods, software tools and important factors for analyzing and interpreting NGS data for the detection of SVs in the cancer genome. PMID:25849937

  18. Helicopter rotor loads using a matched asymptotic expansion technique

    NASA Technical Reports Server (NTRS)

    Pierce, G. A.; Vaidyanathan, A. R.

    1981-01-01

    The theoretical basis and computational feasibility of the Van Holten method, as well as its performance and range of validity in comparison with experiment and other approximate methods, were examined. It is found that within the restrictions of incompressible, potential flow and the assumption of small disturbances, the method does lead to a valid description of the flow. However, the method begins to break down under conditions favoring nonlinear effects such as wake distortion and blade/rotor interaction.

  19. Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System

    NASA Astrophysics Data System (ADS)

    Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng

    2013-11-01

    Focusing on the problems encountered in simulation of and experiments on a parafoil nonlinear dynamic system, such as limited methods, high cost, and low efficiency, we present a semi-physical simulation platform. It is designed by connecting physical components to a computer and remedies the defect that a pure computer simulation is entirely divorced from the real environment. The main components of the platform and their functions, as well as the simulation flow, are introduced. The feasibility and validity of the platform are verified through a simulation experiment. The experimental results show that the platform is significant for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle, and saving cost.

  20. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.

  1. Comparing young people's experience of technology-delivered v. face-to-face mindfulness and relaxation: two-armed qualitative focus group study.

    PubMed

    Tunney, Conall; Cooney, Patricia; Coyle, David; O'Reilly, Gary

    2017-04-01

    Background The current popularity of mindfulness-based practices has coincided with the increase in access to mobile technology. This has led to many mindfulness apps and programs becoming available, some specifically for children. However, little is known about the experience of engaging with mindfulness through these mediums. Aims To explore children's experience of mindfulness delivered both face-to-face and through a computer game to highlight any differences or similarities. Method A two-armed qualitative focus groups design was used to explore children's experiences. The first arm offered mindfulness exercises in a traditional face-to-face setting with guided meditations. The second arm offered mindfulness exercises through a computer game avatar. Results Themes of relaxation, engagement, awareness, thinking, practice and directing attention emerged from both arms of focus groups. Subthematic codes highlight key differences as well as similarities in the experience of mindfulness. Conclusions These results indicate that mindfulness delivered via technology can offer a rich experience. © The Royal College of Psychiatrists 2017.

  2. Multiaxial Cyclic Thermoplasticity Analysis with Besseling's Subvolume Method

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1983-01-01

    A modification was formulated to Besseling's Subvolume Method to allow it to use multilinear, temperature-dependent stress-strain curves to perform cyclic thermoplasticity analyses. This method automatically reproduces certain aspects of real material behavior important in the analysis of Aircraft Gas Turbine Engine (AGTE) components, including the Bauschinger effect, cross-hardening, and memory. This constitutive equation was implemented in a finite element computer program called CYANIDE. Subsequently, classical time-dependent plasticity (creep) was added to the program. Since its inception, this program has been assessed against laboratory and component testing and engine experience. The ability of this program to simulate AGTE material response characteristics was verified by this experience, and its utility in providing data for life analyses was demonstrated. In this area of life analysis, the multiaxial thermoplasticity capabilities of the method have proved a match for actual AGTE life experience.

  3. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Numerical Investigation of Different Radial Inlet Forms for Centrifugal Compressor and Influence of the Deflectors Number by Means of Computational Fluid Dynamics Methods with Computational Model Validation

    NASA Astrophysics Data System (ADS)

    Kozhukhov, Y. V.; Yun, V. K.; Reshetnikova, L. V.; Prokopovich, M. V.

    2015-08-01

    The goal of this work is to carry out numerical experiments on five different types of centrifugal compressor inlet chambers using CFD methods and to compare the computational results with the results of a physical experiment conducted at the Nevskiy Lenin Plant in Saint Petersburg. For one of the chambers, the influence of deflectors on its characteristics was also investigated. The objects of investigation are five inlet chambers of different types, which differ from each other in the presence of deflectors and in their number. The comparative analysis of the numerical and physical experiments was carried out by comparing the relative velocity and static pressure coefficient distributions in the hub and shroud regions, and by comparing the change in loss coefficient values for all five chambers. The numerical calculations revealed quantitative and qualitative discrepancies between the CFD results and the physical experiment. The influence of the number of deflectors on the flow parameters was investigated. The results of the study show that the presence of deflectors in the flow path significantly increases the probability of flow separation and reversed flow on them. At the same time, the complete absence of deflectors in the chamber significantly increases circumferential distortion of the flow; the loss coefficient nevertheless decreases, its high values being caused by the presence of shock flow. Thus, special attention should be given to the profiling of the inlet chamber deflectors.

  5. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods with the accuracy of DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, contain both classical finite volume and standard DG methods as special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods, as well as a least-squares recovery method, are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
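
    As a much-simplified illustration of the least-squares in-cell reconstruction idea, the sketch below fits a quadratic on a 1D cell from the linear (mean and slope) data of the cell and its two neighbors; the stencil, data values, and equal weighting are assumptions for illustration, not the paper's actual RDG formulation.

      # Illustrative 1D least-squares reconstruction of a quadratic from linear
      # (mean + slope) DG data on a cell and its two neighbors.
      import numpy as np

      h = 1.0                                   # uniform cell width
      centers = np.array([-h, 0.0, h])          # left, center, right cell centers
      means = np.array([0.8, 1.0, 1.5])         # cell-averaged solution values
      slopes = np.array([0.1, 0.35, 0.6])       # cell-local solution slopes

      # Quadratic on the center cell: p(x) = a0 + a1*x + a2*x**2 (x relative to center)
      rows, rhs = [], []
      for xc, ubar, sl in zip(centers, means, slopes):
          a, b = xc - h / 2, xc + h / 2
          # cell average of p over [a, b]
          rows.append([1.0, (b**2 - a**2) / (2 * h), (b**3 - a**3) / (3 * h)])
          rhs.append(ubar)
          # slope of p evaluated at that cell's center
          rows.append([0.0, 1.0, 2 * xc])
          rhs.append(sl)

      coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
      print("reconstructed quadratic coefficients a0, a1, a2:", coeffs)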

  6. Successful application of the DBLOC method to the hydroxylation of camphor by cytochrome p450

    PubMed Central

    Jerome, Steven V.; Hughes, Thomas F.

    2015-01-01

    The activation barrier for the hydroxylation of camphor by cytochrome P450 was computed using a mixed quantum mechanics/molecular mechanics (QM/MM) model of the full protein-ligand system and a fully QM calculation using a cluster model of the active site at the B3LYP/LACVP*/LACV3P** level of theory, which consisted of B3LYP/LACV3P** single-point energies computed at B3LYP/LACVP* optimized geometries. From the QM/MM calculation, a barrier height of 17.5 kcal/mol was obtained, while the experimental value is known to be less than or equal to 10 kcal/mol. The calculation was repeated using the D3 dispersion correction for hybrid DFT in order to investigate whether an inadequate treatment of dispersion interactions was responsible for the overestimation of the barrier. While the D3 correction reduced the computed barrier to 13.3 kcal/mol, this was still in disagreement with experiment. After application of a series of transition-metal-optimized localized orbital corrections (DBLOC), without any refitting of parameters, the barrier was further reduced to 10.0 kcal/mol, consistent with the experimental results. As a second, independent test, the DBLOC method was also applied to C-H bond activation in methane monooxygenase (MMO). The barrier in MMO is known from experiment to be 15.4 kcal/mol. After applying the DBLOC corrections to the MMO barrier computed by B3LYP in a previous study, and accounting for dispersion with Grimme's D3 method, the unsigned deviation from experiment was improved from 3.2 to 2.3 kcal/mol. These results suggest that the combination of dispersion plus localized orbital corrections can yield significant quantitative improvements in modeling the catalytic chemistry of transition-metal-containing enzymes, within the limitations of the statistical errors of the model, which appear to be on the order of approximately 2 kcal/mol. PMID:26441133

  7. Ligand design by a combinatorial approach based on modeling and experiment: application to HLA-DR4

    NASA Astrophysics Data System (ADS)

    Evensen, Erik; Joseph-McCarthy, Diane; Weiss, Gregory A.; Schreiber, Stuart L.; Karplus, Martin

    2007-07-01

    Combinatorial synthesis and large-scale screening methods are being used increasingly in drug discovery, particularly for finding novel lead compounds. Although these "random" methods sample larger areas of chemical space than traditional synthetic approaches, only a relatively small percentage of all possible compounds is practically accessible. It is therefore helpful to select regions of chemical space that have a greater likelihood of yielding useful leads. When three-dimensional structural data are available for the target molecule, this can be achieved by applying structure-based computational design methods to focus the combinatorial library. This is advantageous over the standard use of computational methods to design a small number of specific novel ligands, because here computation is employed as part of the combinatorial design process and so is required only to determine a propensity for binding of certain chemical moieties in regions of the target molecule. This paper describes the application of the Multiple Copy Simultaneous Search (MCSS) method, an active-site mapping and de novo structure-based design tool, to design a focused combinatorial library for the class II MHC protein HLA-DR4. Methods for synthesizing and screening the computationally designed library are presented, and evidence is provided that binding was achieved. Although the structure of the protein-ligand complex could not be determined, experimental results, including cross-exclusion of a known HLA-DR4 peptide ligand (HA) by a compound from the library, and computational model building suggest that at least one of the ligands designed and identified by the described methods binds in a mode similar to that of native peptides.

  8. Fictitious Domain Methods for Fracture Models in Elasticity.

    NASA Astrophysics Data System (ADS)

    Court, S.; Bodart, O.; Cayol, V.; Koko, J.

    2014-12-01

    As surface displacements depend nonlinearly on source location and shape, simplifying assumptions are generally required to reduce computation time when inverting geodetic data. We present a generic finite element method designed for pressurized or sheared cracks inside a linear elastic medium. A fictitious domain method is used to take the crack into account independently of the mesh. Besides the possibility of considering heterogeneous media, the approach permits the evolution of the crack through time or, more generally, through iterations: the goal is to change as little as possible when the crack geometry is modified. In particular, no re-meshing is required (the boundary conditions at the level of the crack are imposed by Lagrange multipliers), leading to a gain in computation time and resources with respect to classical finite element methods. The method is also robust with respect to the geometry, since we expect to observe the same behavior whatever the shape and position of the crack. We present numerical experiments which highlight the accuracy of the method (using convergence curves), the optimality of the errors, and the robustness with respect to the geometry (with computation of errors on selected quantities for all kinds of geometric configurations). We also provide 2D benchmark tests. The method is then applied to Piton de la Fournaise volcano, considering a pressurized crack inside a three-dimensional domain, and the corresponding computation time and accuracy are compared with results from a mixed boundary element method. In order to determine the crack's geometrical characteristics and pressure, inversions are performed combining fictitious domain computations with a near-neighborhood algorithm. Performances are compared with those obtained by combining a mixed boundary element method with the same inversion algorithm.
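
    To illustrate the Lagrange-multiplier constraint enforcement mentioned above in the simplest possible setting, the sketch below imposes one linear constraint on a small stiffness-like system by solving the resulting saddle-point (KKT) system; the matrices are toy data, not a fictitious-domain discretization.

      # Toy illustration of imposing constraints with Lagrange multipliers,
      # i.e. solving [[A, B^T], [B, 0]] [u; lam] = [f; g]; data are invented.
      import numpy as np

      A = np.array([[4.0, -1.0, 0.0],
                    [-1.0, 4.0, -1.0],
                    [0.0, -1.0, 4.0]])          # SPD "stiffness" matrix
      f = np.array([1.0, 0.0, 2.0])             # load vector
      B = np.array([[1.0, 0.0, -1.0]])          # one linear constraint: u_0 - u_2 = g
      g = np.array([0.5])

      n, m = A.shape[0], B.shape[0]
      K = np.block([[A, B.T],
                    [B, np.zeros((m, m))]])     # saddle-point (KKT) matrix
      rhs = np.concatenate([f, g])

      sol = np.linalg.solve(K, rhs)
      u, lam = sol[:n], sol[n:]
      print("displacements u =", u)
      print("constraint residual B u - g =", B @ u - g)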

  9. TRIADIMEFON, A TRIAZOLE FUNGICIDE, INDUCES STEREOTYPED BEHAVIOR AND ALTERS MONOAMINE METABOLISM IN RATS

    EPA Science Inventory

    Triadimefon, a triazole fungicide, has been observed to increase locomotion and induce stereotyped behavior in rodents. The present experiments characterized the stereotyped behavior induced by triadimefon using a computer-supported observational method, and tested the hypothesis ...

  10. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    NASA Astrophysics Data System (ADS)

    Nigg, D. W.; Wheeler, F. J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.

  11. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.
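
    For context, a generic point-kernel estimate of the gamma dose rate from an isotropic point source is sketched below; the source strength, attenuation coefficient, buildup coefficients, and flux-to-dose factor are illustrative assumptions, not PDX values, and the actual study couples this step to Sn and Monte Carlo calculations.

      # Generic point-kernel estimate of gamma dose rate from an isotropic point
      # source; all parameter values below are illustrative assumptions.
      import math

      S = 1.0e10              # assumed source strength (photons / s)
      mu = 0.006              # assumed linear attenuation coefficient of air (1/cm)
      r = 3.0e3               # distance from source to detector point (cm)
      flux_to_dose = 1.0e-6   # assumed conversion factor (dose units per photon/cm^2)

      def buildup(mfp, a=1.0, b=0.05):
          """Berger-form buildup B = 1 + a*mu*r*exp(b*mu*r); coefficients assumed."""
          return 1.0 + a * mfp * math.exp(b * mfp)

      mfp = mu * r                                          # number of mean free paths
      flux = S * math.exp(-mfp) / (4.0 * math.pi * r**2)    # uncollided flux
      dose_rate = flux_to_dose * buildup(mfp) * flux
      print(f"estimated dose rate: {dose_rate:.3e} (arbitrary units)")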

  12. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and the potential to yield a quantum computer that solves problems believed to be classically intractable. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks and apply it to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors present in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for the assessment of boson sampling devices.

  13. Modeling the fusion of cylindrical bioink particles in post bioprinting structure formation

    NASA Astrophysics Data System (ADS)

    McCune, Matt; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan

    2015-03-01

    Cellular Particle Dynamics (CPD) is an effective computational method to describe the shape evolution and biomechanical relaxation processes in multicellular systems. Thus, CPD is a useful tool to predict the outcome of post-printing structure formation in bioprinting. The predictive power of CPD has been demonstrated for multicellular systems composed of spherical bioink units. Experiments and computer simulations were related through an independently developed theoretical formalism based on continuum mechanics. Here we generalize the CPD formalism to (i) include cylindrical bioink particles often used in specific bioprinting applications, (ii) describe the more realistic experimental situation in which both the length and the volume of the cylindrical bioink units decrease during post-printing structure formation, and (iii) directly connect CPD simulations to the corresponding experiments without the need of the intermediate continuum theory inherently based on simplifying assumptions. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.

  14. Efficient computation of kinship and identity coefficients on large pedigrees.

    PubMed

    Cheng, En; Elliott, Brendan; Ozsoyoglu, Z Meral

    2009-06-01

    With the rapidly expanding field of medical genetics and genetic counseling, genealogy information is becoming increasingly abundant. An important computation on pedigree data is the calculation of identity coefficients, which provide a complete description of the degree of relatedness of a pair of individuals. The areas of application of identity coefficients are numerous and diverse, from genetic counseling to disease tracking, and thus the computation of identity coefficients merits special attention. However, identity coefficients are not computed directly, but rather as the final step after computing a set of generalized kinship coefficients. In this paper, we first propose a novel Path-Counting Formula for calculating generalized kinship coefficients, motivated by Wright's path-counting method for computing the inbreeding coefficient. We then present an efficient and scalable scheme for calculating generalized kinship coefficients on large pedigrees using NodeCodes, a special encoding scheme for expediting the evaluation of queries on pedigree graph structures. Furthermore, we propose an improved scheme using Family NodeCodes for the computation of generalized kinship coefficients, motivated by the significant improvement of using Family NodeCodes for the inbreeding coefficient over the use of NodeCodes. We also perform experiments to evaluate the efficiency of our method and compare it with the performance of the traditional recursive algorithm for three individuals. Experimental results demonstrate that the resulting scheme is more scalable and efficient than the traditional recursive methods for computing generalized kinship coefficients.
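
    For reference, the classical recursion for pairwise kinship coefficients (which the paper's NodeCodes-based scheme is designed to accelerate) can be written in a few lines; the toy pedigree below and the assumption that parents carry smaller IDs than their children are for illustration only.

      # Minimal recursive kinship-coefficient computation (the classical recursion,
      # not the paper's NodeCodes scheme); pedigree is a toy example, and parents
      # are assumed to have smaller IDs than their children.
      from functools import lru_cache

      # pedigree: individual -> (father, mother); founders have (None, None)
      pedigree = {
          1: (None, None),   # grandfather
          2: (None, None),   # grandmother
          3: (1, 2),         # father
          4: (None, None),   # unrelated mother
          5: (3, 4),         # child
          6: (3, 4),         # full sibling of 5
      }

      @lru_cache(maxsize=None)
      def kinship(i, j):
          if i is None or j is None:
              return 0.0
          if i == j:
              f, m = pedigree[i]
              return 0.5 * (1.0 + kinship(f, m))
          if i > j:                      # make j the younger individual
              i, j = j, i
          f, m = pedigree[j]             # recurse through the younger one's parents
          return 0.5 * (kinship(i, f) + kinship(i, m))

      print("kinship(5, 6) =", kinship(5, 6))   # full siblings: 0.25
      print("inbreeding of 5 =", kinship(3, 4)) # parents unrelated: 0.0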

  15. Next-generation genotype imputation service and methods.

    PubMed

    Das, Sayantan; Forer, Lukas; Schönherr, Sebastian; Sidore, Carlo; Locke, Adam E; Kwong, Alan; Vrieze, Scott I; Chew, Emily Y; Levy, Shawn; McGue, Matt; Schlessinger, David; Stambolian, Dwight; Loh, Po-Ru; Iacono, William G; Swaroop, Anand; Scott, Laura J; Cucca, Francesco; Kronenberg, Florian; Boehnke, Michael; Abecasis, Gonçalo R; Fuchsberger, Christian

    2016-10-01

    Genotype imputation is a key component of genetic association studies, where it increases power, facilitates meta-analysis, and aids interpretation of signals. Genotype imputation is computationally demanding and, with current tools, typically requires access to a high-performance computing cluster and to a reference panel of sequenced genomes. Here we describe improvements to imputation machinery that reduce computational requirements by more than an order of magnitude with no loss of accuracy in comparison to standard imputation tools. We also describe a new web-based service for imputation that facilitates access to new reference panels and greatly improves user experience and productivity.

  16. A Sparse Reconstruction Approach for Identifying Gene Regulatory Networks Using Steady-State Experiment Data

    PubMed Central

    Zhang, Wanhong; Zhou, Tong

    2015-01-01

    Motivation Identifying gene regulatory networks (GRNs), which consist of a large number of interacting units, has become a problem of paramount importance in systems biology. In many situations, causal interaction relationships among these units must be reconstructed from measured expression data and other a priori information. Although numerous classical methods have been developed to unravel the interactions of GRNs, these methods either have high computational complexity or low estimation accuracy. Note that great similarities exist between identifying the genes that directly regulate a specific gene and reconstructing a sparse vector, which often amounts to determining the number, location, and magnitude of the nonzero entries of an unknown vector by solving an underdetermined system of linear equations y = Φx. Based on these similarities, we propose a novel sparse reconstruction framework to identify the structure of a GRN, so as to increase the accuracy of causal regulation estimates and to reduce their computational complexity. Results In this paper, a sparse reconstruction framework based on steady-state experiment data is proposed to identify GRN structure. Different from traditional methods, the adopted approach is well suited to the large-scale underdetermined problem of inferring a sparse vector. We investigate how to combine noisy steady-state experiment data with a sparse reconstruction algorithm to identify causal relationships. The efficiency of this method is tested on an artificial linear network, a mitogen-activated protein kinase (MAPK) pathway network, and the in silico networks of the DREAM challenges. The performance of the suggested approach is compared with two state-of-the-art algorithms, the widely adopted total least-squares (TLS) method and the available results of the DREAM project. The results show that, at a lower computational cost, the proposed method can significantly enhance estimation accuracy and greatly reduce false positive and negative errors. Furthermore, numerical calculations demonstrate that the proposed algorithm may have faster convergence and smaller fluctuation than other methods when either estimation error or estimation bias is considered. PMID:26207991
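
    As a minimal sketch of the sparse-reconstruction setting y = Φx with an underdetermined Φ, the code below recovers a synthetic sparse regulation vector by iterative soft-thresholding (ISTA); the problem size, noise level, and regularization weight are illustrative, and this is not the paper's specific algorithm.

      # Toy sparse reconstruction for y = Phi @ x with an underdetermined Phi,
      # using iterative soft-thresholding (ISTA); all settings are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_genes, n_experiments = 50, 20
      Phi = rng.standard_normal((n_experiments, n_genes))   # steady-state "design" matrix

      x_true = np.zeros(n_genes)                 # true sparse regulation vector
      x_true[rng.choice(n_genes, size=3, replace=False)] = rng.standard_normal(3)
      y = Phi @ x_true + 0.01 * rng.standard_normal(n_experiments)   # noisy measurements

      lam = 0.1                                  # L1 regularization weight (assumed)
      L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
      x = np.zeros(n_genes)
      for _ in range(500):                       # ISTA iterations
          grad = Phi.T @ (Phi @ x - y)
          z = x - grad / L
          x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

      print("true nonzero indices:   ", np.flatnonzero(x_true))
      print("recovered large entries:", np.flatnonzero(np.abs(x) > 0.05))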

  17. Multiparadigm Design Environments

    DTIC Science & Technology

    1992-01-01

    following results: 1. New methods for programming in terms of conceptual models; 2. Design of object-oriented languages; 3. Compiler optimization and ... experimented with object-based methods for programming directly in terms of conceptual models, object-oriented language design, computer program ... expect these results to have a strong influence on future ...

  18. 26 CFR 1.404(a)-14 - Special rules in connection with the Employee Retirement Income Security Act of 1974.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... method, and experience gains and losses of previous years. (3) Limit adjustment. The term “limit... (k) of the section, where applicable) with respect to a given plan year in computing deductible... case of a plan using a spread gain funding method which maintains an unfunded liability (e.g., the...

  19. Elastic Cherenkov effects in transversely isotropic soft materials-II: Ex vivo and in vivo experiments

    NASA Astrophysics Data System (ADS)

    Li, Guo-Yang; He, Qiong; Qian, Lin-Xue; Geng, Huiying; Liu, Yanlin; Yang, Xue-Yi; Luo, Jianwen; Cao, Yanping

    2016-09-01

    In part I of this study, we investigated the elastic Cherenkov effect (ECE) in an incompressible transversely isotropic (TI) soft solid using a combined theoretical and computational approach, based on which an inverse method has been proposed to measure both the anisotropic and hyperelastic parameters of TI soft tissues. In this part, experiments were carried out to validate the inverse method and demonstrate its usefulness in practical measurements. We first performed ex vivo experiments on bovine skeletal muscles. Not only the shear moduli along and perpendicular to the direction of muscle fibers but also the elastic modulus EL and hyperelastic parameter c2 were determined. We next carried out tensile tests to determine EL, which was compared with the value obtained using the shear wave elastography method. Furthermore, we conducted in vivo experiments on the biceps brachii and gastrocnemius muscles of ten healthy volunteers. To the best of our knowledge, this study represents the first attempt to determine EL of human muscles using the dynamic elastography method and inverse analysis. The significance of our method and its potential for clinical use are discussed.
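
    For context, shear wave elastography commonly converts a measured shear wave speed c to a shear modulus via μ = ρc²; the sketch below applies this relation along and across an assumed fiber direction with made-up values, which is a simplification of the transversely isotropic inversion developed in the paper.

      # Illustrative conversion of measured shear wave speeds to shear moduli via
      # mu = rho * c**2; speeds and density are made-up values, and the full
      # transversely isotropic inversion in the paper involves more parameters.
      rho = 1000.0      # assumed tissue density (kg/m^3)
      c_parallel = 4.0  # assumed shear wave speed along the fibers (m/s)
      c_perp = 2.0      # assumed shear wave speed across the fibers (m/s)

      mu_parallel = rho * c_parallel**2   # shear modulus along the fibers (Pa)
      mu_perp = rho * c_perp**2           # shear modulus across the fibers (Pa)

      print(f"mu_parallel = {mu_parallel/1e3:.1f} kPa, mu_perp = {mu_perp/1e3:.1f} kPa")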

  20. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
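
    To make the two update rules concrete, the sketch below integrates a single voltage-dependent gating variable dx/dt = (x_inf(V) - x)/tau(V) with both forward Euler and exponential Euler at a fixed step; the steady-state curve, time constant, and clamped voltage are illustrative assumptions rather than a specific channel model.

      # Forward Euler vs exponential Euler for one gating variable
      #   dx/dt = (x_inf(V) - x) / tau(V)
      # Steady-state curve, time constant, and voltage are illustrative assumptions.
      import math

      def x_inf(v, v_half=-40.0, k=5.0):
          """Assumed Boltzmann steady-state activation; k is the slope factor."""
          return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

      def tau(v):
          """Assumed constant time constant (ms)."""
          return 2.0

      dt = 0.5                      # time step (ms)
      v = -20.0                     # clamped membrane potential (mV)
      x_euler = x_ee = 0.0

      for step in range(20):
          # forward Euler
          x_euler += dt * (x_inf(v) - x_euler) / tau(v)
          # exponential Euler: exact when x_inf and tau are constant over the step
          x_ee = x_inf(v) + (x_ee - x_inf(v)) * math.exp(-dt / tau(v))

      print(f"x_inf target = {x_inf(v):.4f}")
      print(f"Euler: {x_euler:.4f}   exponential Euler: {x_ee:.4f}")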
