Best Practice Guidelines for Computer Technology in the Montessori Early Childhood Classroom.
ERIC Educational Resources Information Center
Montminy, Peter
1999-01-01
Presents a draft for a principle-centered position statement of a Montessori early childhood program in central Pennsylvania, on the pros and cons of computer use in a Montessori 3-6 classroom. Includes computer software rating form. (Author/KB)
29 CFR 778.416 - Purpose of provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS OVERTIME COMPENSATION Exceptions From the Regular Rate Principles Computing Overtime Pay on the Rate Applicable to the Type of Work Performed in...
Astrophysical reaction rates from a symmetry-informed first-principles perspective
NASA Astrophysics Data System (ADS)
Dreyfuss, Alison; Launey, Kristina; Baker, Robert; Draayer, Jerry; Dytrych, Tomas
2017-01-01
With a view toward a new unified formalism for studying bound and continuum states in nuclei, to understand stellar nucleosynthesis from a fully ab initio perspective, we studied the nature of surface α-clustering in 20Ne by considering the overlap of symplectic states with cluster-like states. We compute the spectroscopic amplitudes and factors, α-decay width, and absolute resonance strength, characterizing major contributions to the astrophysical reaction rate through a low-lying 1- resonant state in 20Ne. As a next step, we consider a fully microscopic treatment of the n+4He system, based on the successful first-principles No-Core Shell Model/Resonating Group Method (NCSM/RGM) for light nuclei, but with the capability to reach intermediate-mass nuclei. The new model takes advantage of the symmetry-based concept central to the Symmetry-Adapted No-Core Shell Model (SA-NCSM) to reduce computational complexity in a physically informed and methodical way, with sights toward first-principles calculations of rates for important astrophysical reactions, such as the 23Al(p,γ)24Si reaction, believed to have a strong influence on X-ray burst light curves. Supported by the U.S. NSF (OCI-0904874, ACI-1516338) and the U.S. DOE (DE-SC0005248); this work benefited from computing resources provided by Blue Waters and the LSU Center for Computation & Technology.
29 CFR 778.420 - Combined hourly rates and piece rates.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STATEMENTS OF GENERAL POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS OVERTIME COMPENSATION Exceptions From the Regular Rate Principles Computing Overtime Pay on the Rate Applicable to the Type of Work... an employee works at a combination of hourly and piece rates, the payment of a rate not less than one...
Statistical mechanical theory for steady state systems. VI. Variational principles
NASA Astrophysics Data System (ADS)
Attard, Phil
2006-12-01
Several variational principles that have been proposed for nonequilibrium systems are analyzed. These include the principle of minimum rate of entropy production due to Prigogine [Introduction to Thermodynamics of Irreversible Processes (Interscience, New York, 1967)], the principle of maximum rate of entropy production, which is common on the internet and in the natural sciences, two principles of minimum dissipation due to Onsager [Phys. Rev. 37, 405 (1931)] and to Onsager and Machlup [Phys. Rev. 91, 1505 (1953)], and the principle of maximum second entropy due to Attard [J. Chem. Phys. 122, 154101 (2005); Phys. Chem. Chem. Phys. 8, 3585 (2006)]. The approaches of Onsager and Attard are argued to be the only viable theories. These two are related, although their physical interpretation and mathematical approximations differ. A numerical comparison with computer simulation results indicates that Attard's expression is the only accurate theory. The implications for the Langevin and other stochastic differential equations are discussed.
29 CFR 778.112 - Day rates and job rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Day rates and job rates. 778.112 Section 778.112 Labor... Requirements Principles for Computing Overtime Pay Based on the “Regular Rate” § 778.112 Day rates and job rates. If the employee is paid a flat sum for a day's work or for doing a particular job, without regard...
29 CFR 778.112 - Day rates and job rates.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 3 2012-07-01 2012-07-01 false Day rates and job rates. 778.112 Section 778.112 Labor... Requirements Principles for Computing Overtime Pay Based on the “Regular Rate” § 778.112 Day rates and job rates. If the employee is paid a flat sum for a day's work or for doing a particular job, without regard...
29 CFR 778.112 - Day rates and job rates.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 3 2014-07-01 2014-07-01 false Day rates and job rates. 778.112 Section 778.112 Labor... Requirements Principles for Computing Overtime Pay Based on the “Regular Rate” § 778.112 Day rates and job rates. If the employee is paid a flat sum for a day's work or for doing a particular job, without regard...
29 CFR 778.112 - Day rates and job rates.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Day rates and job rates. 778.112 Section 778.112 Labor... Requirements Principles for Computing Overtime Pay Based on the “Regular Rate” § 778.112 Day rates and job rates. If the employee is paid a flat sum for a day's work or for doing a particular job, without regard...
29 CFR 778.112 - Day rates and job rates.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 3 2013-07-01 2013-07-01 false Day rates and job rates. 778.112 Section 778.112 Labor... Requirements Principles for Computing Overtime Pay Based on the “Regular Rate” § 778.112 Day rates and job rates. If the employee is paid a flat sum for a day's work or for doing a particular job, without regard...
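As a concrete illustration of the day-rate rule summarized in the §778.112 entries above, the following minimal Python sketch computes the regular rate and overtime premium for a day-rate workweek. The function name, the 40-hour threshold parameter, and the sample figures are illustrative choices, not text from the regulation.

# Sketch of the day-rate overtime computation described in 29 CFR 778.112.
# Assumes the standard FLSA rule: the regular rate is total day-rate pay
# divided by total hours, and hours over 40 earn an extra half-time premium.

def day_rate_overtime_pay(day_rate_payments, total_hours, overtime_threshold=40.0):
    """Return (regular_rate, total_pay) for a workweek paid at day/job rates."""
    straight_time_pay = sum(day_rate_payments)      # flat sums received for days/jobs
    regular_rate = straight_time_pay / total_hours  # total pay / total hours worked
    overtime_hours = max(0.0, total_hours - overtime_threshold)
    premium = 0.5 * regular_rate * overtime_hours   # extra half-time for overtime hours
    return regular_rate, straight_time_pay + premium

# Example: five days at $120/day, 50 hours worked.
# Regular rate = 600/50 = $12.00; premium = 0.5 * 12 * 10 = $60; total = $660.
print(day_rate_overtime_pay([120] * 5, 50))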
29 CFR 778.115 - Employees working at two or more rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Overtime Pay Requirements Principles for Computing Overtime Pay Based on the “Regular Rate” § 778.115... different types of work for which different nonovertime rates of pay (of not less than the applicable minimum wage) have been established, his regular rate for that week is the weighted average of such rates...
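The weighted-average rule in §778.115 can be sketched the same way; again, the function and the example hours and rates are illustrative, not taken from the regulation text.

# Sketch of the weighted-average regular rate in 29 CFR 778.115, where an
# employee works at two or more hourly rates in one workweek.

def weighted_average_regular_rate(hours_and_rates, overtime_threshold=40.0):
    """hours_and_rates: list of (hours, hourly_rate) pairs for the workweek."""
    total_hours = sum(h for h, _ in hours_and_rates)
    straight_time = sum(h * r for h, r in hours_and_rates)
    regular_rate = straight_time / total_hours       # weighted average of the rates
    overtime_hours = max(0.0, total_hours - overtime_threshold)
    total_pay = straight_time + 0.5 * regular_rate * overtime_hours
    return regular_rate, total_pay

# Example: 30 h at $10/h plus 20 h at $16/h -> regular rate = 620/50 = $12.40
print(weighted_average_regular_rate([(30, 10.0), (20, 16.0)]))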
Managers' Perceptions of the Importance of Topics for the High School Management Curriculum.
ERIC Educational Resources Information Center
Herbert, Bruce E.
1989-01-01
Small business managers rated the importance of 60 topics for a high school small business management/entrepreneurship course. Respondents (79 of 130) rated highest the topics related to management principles, human relations, and resource development. Computer/data processing and international business received low ratings. Significant…
ERIC Educational Resources Information Center
Journal of Chemical Education, 1988
1988-01-01
Contains ratings of two software packages for Apple II computers: "Acid-Base Titrations, CHM311A" and "Chemical Principles for the Introductory Laboratory, CHM 384A." Both are aimed at high school and college chemistry and are rated on ease of use, subject matter content, pedagogic value, and student reaction. (CW)
29 CFR 778.418 - Pieceworkers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable maximum hours standard for the particular workweek; and (4) The compensation paid for the overtime... Principles Computing Overtime Pay on the Rate Applicable to the Type of Work Performed in Overtime Hours... the basis of a piece rate for the work performed during nonovertime hours may agree with his employer...
29 CFR 778.415 - The statutory provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS OVERTIME COMPENSATION Exceptions From the Regular Rate Principles Computing Overtime Pay on the Rate Applicable to the Type of Work Performed in... performance of the work, the amount paid to the employee for the number of hours worked by him in such...
1987-09-24
Some concerns take on a rating (e.g., 'Zl') that adequately reflects how well the system provides each service... increased significance in the network... (Minimum, Fair, Good); however, in specific cases, ratings such as "present" or "approved"... to how well a specific approach may be expected to achieve... established thresholds. Supportive policies include identification and authentication policies, as well as... for detecting the fact that access to a...
29 CFR 778.400 - The provisions of section 7(g)(3) of the Act.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COMPENSATION Exceptions From the Regular Rate Principles Computing Overtime Pay on An âestablishedâ Rate § 778... Labor as being substantially equivalent to the average hourly earnings of the employee, exclusive of... average hourly earnings for the workweek exclusive of payments described in paragraphs (1) through (7) of...
Experimental and Computational Interrogation of Fast SCR Mechanism and Active Sites on H-Form SSZ-13
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sichi; Zheng, Yang; Gao, Feng
Experiment and density functional theory (DFT) models are combined to develop a unified, quantitative model of the mechanism and kinetics of fast selective catalytic reduction (SCR) of NO/NO2 mixtures over H-SSZ-13 zeolite. Rates, rate orders, and apparent activation energies collected under differential conditions reveal two distinct kinetic regimes. First-principles thermodynamics simulations are used to determine the relative coverages of free Brønsted sites, chemisorbed NH4+ and physisorbed NH3 as a function of reaction conditions. First-principles metadynamics calculations show that all three sites can contribute to the rate-limiting N-N bond forming step in fast SCR. The results are used to parameterize a kinetic model that encompasses the full range of reaction conditions and recovers observed rate orders and apparent activation energies. Observed kinetic regimes are related to changes in most-abundant surface intermediates. Financial support was provided by the National Science Foundation GOALI program under award number 1258690-CBET. We thank the Center for Research Computing at Notre Dame.
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
Principles of parametric estimation in modeling language competition.
Zhang, Menghan; Gong, Tao
2013-06-11
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
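For readers unfamiliar with the underlying model family, a generic two-language Lotka-Volterra competition system has the form below. The authors' impacts and inheritance rates enter a model of this kind, but the symbols and exact parameterization shown here are illustrative rather than quoted from the paper.

\frac{dx_1}{dt} = r_1 x_1\left(1-\frac{x_1+\alpha_{12}x_2}{K_1}\right), \qquad
\frac{dx_2}{dt} = r_2 x_2\left(1-\frac{x_2+\alpha_{21}x_1}{K_2}\right)

Here x_i are the speaker populations of the two languages, r_i are growth terms (which the inheritance rates modulate), \alpha_{ij} is the impact of language j on language i, and K_i are carrying capacities.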
Survey of the supporting research and technology for the thermal protection of the Galileo Probe
NASA Technical Reports Server (NTRS)
Howe, J. T.; Pitts, W. C.; Lundell, J. H.
1981-01-01
The Galileo Probe, which is scheduled to be launched in 1985 and to enter the hydrogen-helium atmosphere of Jupiter up to 1,475 days later, presents thermal protection problems that are far more difficult than those experienced in previous planetary entry missions. The high entry speed of the Probe will cause forebody heating rates orders of magnitude greater than those encountered in the Apollo and Pioneer Venus missions, severe afterbody heating from base-flow radiation, and thermochemical ablation rates for carbon phenolic that rival the free-stream mass flux. This paper presents a comprehensive survey of the experimental work and computational research that provide technological support for the Probe's heat-shield design effort. The survey includes atmospheric modeling; both approximate and first-principle computations of flow fields and heat-shield material response; base heating; turbulence modelling; new computational techniques; experimental heating and materials studies; code validation efforts; and a set of 'consensus' first-principle flow-field solutions through the entry maneuver, with predictions of the corresponding thermal protection requirements.
Transient state kinetics tutorial using the kinetics simulation program, KINSIM.
Wachsstock, D H; Pollard, T D
1994-01-01
This article provides an introduction to a computer tutorial on transient state kinetics. The tutorial uses our Macintosh version of the computer program, KINSIM, that calculates the time course of reactions. KINSIM is also available for other popular computers. This program allows even those investigators not mathematically inclined to evaluate the rate constants for the transitions between the intermediates in any reaction mechanism. These rate constants are one of the insights that are essential for understanding how biochemical processes work at the molecular level. The approach is applicable not only to enzyme reactions but also to any other type of process of interest to biophysicists, cell biologists, and molecular biologists in which concentrations change with time. In principle, the same methods could be used to characterize time-dependent, large-scale processes in ecology and evolution. Completion of the tutorial takes students 6-10 h. This investment is rewarded by a deep understanding of the principles of chemical kinetics and familiarity with the tools of kinetics simulation as an approach to solve everyday problems in the laboratory. PMID:7811941
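A minimal sketch of what a KINSIM-style simulation does, namely numerically integrating the mass-action rate equations of a user-specified mechanism. The mechanism (E + S <-> ES -> E + P), species names, rate constants, and concentrations below are invented for illustration and are not part of the KINSIM distribution.

# Integrate mass-action rate equations for a simple enzyme mechanism.
import numpy as np
from scipy.integrate import solve_ivp

k1, k_1, k2 = 1.0e6, 50.0, 10.0      # assumed rate constants (M^-1 s^-1, s^-1, s^-1)

def rates(t, y):
    E, S, ES, P = y
    v1 = k1 * E * S - k_1 * ES       # reversible binding E + S <-> ES
    v2 = k2 * ES                     # irreversible product formation ES -> E + P
    return [-v1 + v2, -v1, v1 - v2, v2]

y0 = [1e-6, 1e-4, 0.0, 0.0]          # initial concentrations (M)
sol = solve_ivp(rates, (0.0, 2.0), y0, method="LSODA")
print(sol.y[:, -1])                  # concentrations of E, S, ES, P at t = 2 s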
29 CFR 778.417 - General requirements of section 7(g).
Code of Federal Regulations, 2010 CFR
2010-07-01
... Exceptions From the Regular Rate Principles Computing Overtime Pay on the Rate Applicable to the Type of Work... overtime pay permitted in this section will not in any circumstances be seized upon as a device for avoiding payment of the minimum wage due for each hour, the requirement must be met that employee's average...
29 CFR 778.417 - General requirements of section 7(g).
Code of Federal Regulations, 2011 CFR
2011-07-01
... Exceptions From the Regular Rate Principles Computing Overtime Pay on the Rate Applicable to the Type of Work... overtime pay permitted in this section will not in any circumstances be seized upon as a device for avoiding payment of the minimum wage due for each hour, the requirement must be met that employee's average...
The neural circuits for arithmetic principles.
Liu, Jie; Zhang, Han; Chen, Chuansheng; Chen, Hui; Cui, Jiaxin; Zhou, Xinlin
2017-02-15
Arithmetic principles are the regularities underlying arithmetic computation. Little is known about how the brain supports the processing of arithmetic principles. The current fMRI study examined neural activation and functional connectivity during the processing of verbalized arithmetic principles, as compared to numerical computation and general language processing. As expected, arithmetic principles elicited stronger activation in bilateral horizontal intraparietal sulcus and right supramarginal gyrus than did language processing, and stronger activation in left middle temporal lobe and left orbital part of inferior frontal gyrus than did computation. In contrast, computation elicited greater activation in bilateral horizontal intraparietal sulcus (extending to posterior superior parietal lobule) than did either arithmetic principles or language processing. Functional connectivity analysis with the psychophysiological interaction approach (PPI) showed that left temporal-parietal (MTG-HIPS) connectivity was stronger during the processing of arithmetic principles and language than during computation, whereas parietal-occipital connectivities were stronger during computation than during the processing of arithmetic principles and language. Additionally, the left fronto-parietal (orbital IFG-HIPS) connectivity was stronger during the processing of arithmetic principles than during computation. The results suggest that verbalized arithmetic principles engage a neural network that overlaps but is distinct from the networks for computation and language processing. Copyright © 2016 Elsevier Inc. All rights reserved.
Development of a picosecond CO2 laser system for a high-repetition γ-source
NASA Astrophysics Data System (ADS)
Polyanskiy, Mikhail N.; Pogorelsky, Igor V.; Yakimenko, Vitaly E.; Platonenko, Victor T.
2008-10-01
The concept of a high-repetition-rate, high-average power γ-source is based on Compton backscattering from the relativistic electron beam inside a picosecond CO2 laser cavity. Proof-of-principle experiments combined with comput
High-efficiency multiphoton boson sampling
NASA Astrophysics Data System (ADS)
Wang, Hui; He, Yu; Li, Yu-Huai; Su, Zu-En; Li, Bo; Huang, He-Liang; Ding, Xing; Chen, Ming-Cheng; Liu, Chang; Qin, Jian; Li, Jin-Peng; He, Yu-Ming; Schneider, Christian; Kamp, Martin; Peng, Cheng-Zhi; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei
2017-06-01
Boson sampling is considered as a strong candidate to demonstrate 'quantum computational supremacy' over classical computers. However, previous proof-of-principle experiments suffered from small photon number and low sampling rates owing to the inefficiencies of the single-photon sources and multiport optical interferometers. Here, we develop two central components for high-performance boson sampling: robust multiphoton interferometers with 99% transmission rate and actively demultiplexed single-photon sources based on a quantum dot-micropillar with simultaneously high efficiency, purity and indistinguishability. We implement and validate three-, four- and five-photon boson sampling, and achieve sampling rates of 4.96 kHz, 151 Hz and 4 Hz, respectively, which are over 24,000 times faster than previous experiments. Our architecture can be scaled up for a larger number of photons and with higher sampling rates to compete with classical computers, and might provide experimental evidence against the extended Church-Turing thesis.
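The computational heart of the hardness argument mentioned above is that each output probability in boson sampling is proportional to the squared modulus of the permanent of a submatrix of the interferometer unitary, and permanents are costly to compute classically. A naive (exponential-time) permanent by row expansion, applied to a toy submatrix, is sketched below; the matrix values are illustrative only.

# Naive permanent via expansion along the first row (like the determinant,
# but without alternating signs). Exponential time, fine for small matrices.
import numpy as np

def permanent(a):
    a = np.asarray(a)
    n = a.shape[0]
    if n == 1:
        return a[0, 0]
    return sum(a[0, j] * permanent(np.delete(a[1:], j, axis=1)) for j in range(n))

u_sub = np.array([[0.6, 0.8], [0.8, -0.6]])   # toy 2x2 submatrix (illustrative)
print(abs(permanent(u_sub)) ** 2)             # relative detection probability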
[Computers in biomedical research: I. Analysis of bioelectrical signals].
Vivaldi, E A; Maldonado, P
2001-08-01
A personal computer equipped with an analog-to-digital conversion card is able to input, store and display signals of biomedical interest. These signals can additionally be submitted to ad-hoc software for analysis and diagnosis. Data acquisition is based on the sampling of a signal at a given rate and amplitude resolution. The automation of signal processing conveys syntactic aspects (data transduction, conditioning and reduction); and semantic aspects (feature extraction to describe and characterize the signal and diagnostic classification). The analytical approach that is at the basis of computer programming allows for the successful resolution of apparently complex tasks. Two basic principles involved are the definition of simple fundamental functions that are then iterated and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects relevant elements for the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
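A toy sketch of the acquisition step described above: sampling an analog signal at a fixed rate and quantizing it to a given amplitude resolution. The sampling rate, bit depth, voltage range, and test signal are arbitrary illustrative values, not parameters from the article.

# Sample and quantize a test signal, then report the quantization error.
import numpy as np

fs, bits, vrange = 250.0, 12, 5.0                 # 250 Hz, 12-bit ADC, +/-5 V
t = np.arange(0.0, 2.0, 1.0 / fs)                 # 2 s of samples
analog = 1.5 * np.sin(2 * np.pi * 10 * t)         # a 10 Hz test signal (volts)
levels = 2 ** bits
digital = np.round((analog + vrange) / (2 * vrange) * (levels - 1)).astype(int)
reconstructed = digital / (levels - 1) * (2 * vrange) - vrange
print(np.max(np.abs(analog - reconstructed)))     # bounded by half a quantization step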
The Convallis Rule for Unsupervised Learning in Cortical Networks
Yger, Pierre; Harris, Kenneth D.
2013-01-01
The phenomenology and cellular mechanisms of cortical synaptic plasticity are becoming known in increasing detail, but the computational principles by which cortical plasticity enables the development of sensory representations are unclear. Here we describe a framework for cortical synaptic plasticity termed the “Convallis rule”, mathematically derived from a principle of unsupervised learning via constrained optimization. Implementation of the rule caused a recurrent cortex-like network of simulated spiking neurons to develop rate representations of real-world speech stimuli, enabling classification by a downstream linear decoder. Applied to spike patterns used in in vitro plasticity experiments, the rule reproduced multiple results including and beyond STDP. However STDP alone produced poorer learning performance. The mathematical form of the rule is consistent with a dual coincidence detector mechanism that has been suggested by experiments in several synaptic classes of juvenile neocortex. Based on this confluence of normative, phenomenological, and mechanistic evidence, we suggest that the rule may approximate a fundamental computational principle of the neocortex. PMID:24204224
The True Growth Rate and the Inflation Balancing Principle.
ERIC Educational Resources Information Center
Thompson, Robert C.
1983-01-01
The demise of mathematics of finance as a subject is discussed and a resurgence is seen as possible, but the traditional instructional presentation is seen as in need of modernization. Financial mathematics is referred to as a beautiful subject when inflation is incorporated, provided that calculators are used in computations. (Author/MP)
Efficient calculation of atomic rate coefficients in dense plasmas
NASA Astrophysics Data System (ADS)
Aslanyan, Valentin; Tallents, Greg J.
2017-03-01
Modelling electron statistics in a cold, dense plasma by the Fermi-Dirac distribution leads to complications in the calculations of atomic rate coefficients. The Pauli exclusion principle slows down the rate of collisions as electrons must find unoccupied quantum states and adds a further computational cost. Methods to calculate these coefficients by direct numerical integration with a high degree of parallelism are presented. This degree of optimization allows the effects of degeneracy to be incorporated into a time-dependent collisional-radiative model. Example results from such a model are presented.
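A schematic of the kind of numerical integration involved: a collisional rate coefficient evaluated over a Fermi-Dirac electron distribution with a Pauli-blocking factor on the final state. The cross-section, chemical potential, density-of-states weighting, and normalization below are simplifying assumptions for illustration, not the expressions used in the paper.

# Schematic Fermi-Dirac-averaged rate coefficient with Pauli blocking.
import numpy as np

ME, KB, EV = 9.109e-31, 1.381e-23, 1.602e-19

def fermi_dirac(E, mu, T):
    return 1.0 / (np.exp((E - mu) / (KB * T)) + 1.0)

def rate_coefficient(sigma, dE, mu, T, emax_ev=2000.0, n=200000):
    E = np.linspace(dE, emax_ev * EV, n)          # electron energies above threshold
    v = np.sqrt(2.0 * E / ME)                     # electron speed
    occupied = fermi_dirac(E, mu, T)              # initial state must be occupied
    blocking = 1.0 - fermi_dirac(E - dE, mu, T)   # final state must be empty (Pauli)
    g = np.sqrt(E)                                # free-electron density of states ~ sqrt(E)
    num = np.trapz(sigma(E) * v * occupied * blocking * g, E)
    den = np.trapz(occupied * g, E)               # normalize per free electron
    return num / den

sigma = lambda E: 1e-21 * np.ones_like(E)         # toy constant cross-section (m^2)
print(rate_coefficient(sigma, dE=10.0 * EV, mu=5.0 * EV, T=1.0e5))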
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-04-05
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.
2017-08-01
principles for effective Computer-Based Training (CBT) that can be applied broadly to Army courses to build and evaluate exemplar CBT for Army advanced... individual training courses. To assist cadre who do not have a dedicated instructional design team, the Computer-Based Training Principles Guide was... This document presents the resulting contents, organization, and presentation style of the Computer-Based Training Principles Guide and its companion User’s Guide.
Bao, Junwei Lucas; Zhang, Xin
2016-01-01
Bond dissociation is a fundamental chemical reaction, and the first principles modeling of the kinetics of dissociation reactions with a monotonically increasing potential energy along the dissociation coordinate presents a challenge not only for modern electronic structure methods but also for kinetics theory. In this work, we use multifaceted variable-reaction-coordinate variational transition-state theory (VRC-VTST) to compute the high-pressure limit dissociation rate constant of tetrafluoroethylene (C2F4), in which the potential energies are computed by direct dynamics with the M08-HX exchange correlation functional. To treat the pressure dependence of the unimolecular rate constants, we use the recently developed system-specific quantum Rice–Ramsperger–Kassel theory. The calculations are carried out by direct dynamics using an exchange correlation functional validated against calculations that go beyond coupled-cluster theory with single, double, and triple excitations. Our computed dissociation rate constants agree well with the recent experimental measurements. PMID:27834727
Bao, Junwei Lucas; Zhang, Xin; Truhlar, Donald G
2016-11-29
Bond dissociation is a fundamental chemical reaction, and the first principles modeling of the kinetics of dissociation reactions with a monotonically increasing potential energy along the dissociation coordinate presents a challenge not only for modern electronic structure methods but also for kinetics theory. In this work, we use multifaceted variable-reaction-coordinate variational transition-state theory (VRC-VTST) to compute the high-pressure limit dissociation rate constant of tetrafluoroethylene (C 2 F 4 ), in which the potential energies are computed by direct dynamics with the M08-HX exchange correlation functional. To treat the pressure dependence of the unimolecular rate constants, we use the recently developed system-specific quantum Rice-Ramsperger-Kassel theory. The calculations are carried out by direct dynamics using an exchange correlation functional validated against calculations that go beyond coupled-cluster theory with single, double, and triple excitations. Our computed dissociation rate constants agree well with the recent experimental measurements.
Architecture for an artificial immune system.
Hofmeyr, S A; Forrest, S
2000-01-01
An artificial immune system (ARTIS) is described which incorporates many properties of natural immune systems, including diversity, distributed computation, error tolerance, dynamic learning and adaptation, and self-monitoring. ARTIS is a general framework for a distributed adaptive system and could, in principle, be applied to many domains. In this paper, ARTIS is applied to computer security in the form of a network intrusion detection system called LISYS. LISYS is described and shown to be effective at detecting intrusions, while maintaining low false positive rates. Finally, similarities and differences between ARTIS and Holland's classifier systems are discussed.
Common computational properties found in natural sensory systems
NASA Astrophysics Data System (ADS)
Brooks, Geoffrey
2009-05-01
Throughout the animal kingdom there are many existing sensory systems with capabilities desired by the human designers of new sensory and computational systems. There are a few basic design principles constantly observed among these natural mechano-, chemo-, and photo-sensory systems, principles that have been proven by the test of time. Such principles include non-uniform sampling and processing, topological computing, contrast enhancement by localized signal inhibition, graded localized signal processing, spiked signal transmission, and coarse coding, which is the computational transformation of raw data using broadly overlapping filters. These principles are outlined here with references to natural biological sensory systems as well as successful biomimetic sensory systems exploiting these natural design concepts.
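One of the listed principles, contrast enhancement by localized signal inhibition, can be illustrated with a one-dimensional centre-surround (lateral inhibition) kernel; the input signal and kernel weights below are invented for the example.

# Contrast enhancement by lateral inhibition: centre excites, surround inhibits.
import numpy as np

signal = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)   # a step edge
kernel = np.array([-0.25, -0.25, 1.0, -0.25, -0.25])        # zero-sum centre-surround kernel
response = np.convolve(signal, kernel, mode="same")
print(response)   # flat regions are suppressed toward zero; the edge is accentuated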
Effectiveness of computer ergonomics interventions for an engineering company: a program evaluation.
Goodman, Glenn; Landis, James; George, Christina; McGuire, Sheila; Shorter, Crystal; Sieminski, Michelle; Wilson, Tamika
2005-01-01
Ergonomic principles at the computer workstation may reduce the occurrence of work related injuries commonly associated with intensive computer use. A program implemented in 2001 by an occupational therapist and a physical therapist utilized these preventative measures with education about ergonomics, individualized evaluations of computer workstations, and recommendations for ergonomic and environmental changes. This study examined program outcomes and perceived effectiveness based on review of documents, interviews, and surveys of the employees and the plant manager. The program was deemed successful as shown by 59% of all therapist recommendations and 74% of ergonomic recommendations being implemented by the company, with an 85% satisfaction rate for the ergonomic interventions and an overall employee satisfaction rate of 70%. Eighty-one percent of the physical problems reported by employees were resolved to their satisfaction one year later. Successful implementation of ergonomics programs depend upon effective communication and education of the consumers, and the support, cooperation and collaboration of management and employees.
Geolocation of LTE Subscriber Stations Based on the Timing Advance Ranging Parameter
2010-12-01
provides the maximum achievable data rates. The specifications for LTE include FDD and TDD in all of its descriptions since there is little to no... parameters used during LTE network entry are examined as they relate to calculating these distances. Computer simulation is used to demonstrate...
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
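For readers new to trellis decoding, a minimal add-compare-select Viterbi pass over a generic trellis is sketched below. Each trellis section maps (from_state, to_state) pairs to branch metrics (for example, Hamming or Euclidean distances to the received symbols); the toy trellis and metrics are illustrative and do not correspond to any particular block or convolutional code.

# Minimal Viterbi search: add branch metrics, compare candidates, select survivors.
def viterbi(sections, start_state=0):
    survivors = {start_state: (0.0, [start_state])}     # state -> (path metric, path)
    for branches in sections:
        new = {}
        for (s, t), metric in branches.items():
            if s not in survivors:
                continue
            cand = survivors[s][0] + metric             # add
            if t not in new or cand < new[t][0]:        # compare, select
                new[t] = (cand, survivors[s][1] + [t])
        survivors = new
    return min(survivors.values())                      # best final metric and its state path

# Two-state toy trellis with three sections
sections = [
    {(0, 0): 1.0, (0, 1): 0.0},
    {(0, 0): 0.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 0.0},
    {(0, 0): 0.0, (1, 0): 1.0},
]
print(viterbi(sections))   # minimum-metric path through the trellis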
Why is the correlation between gene importance and gene evolutionary rate so weak?
Wang, Zhi; Zhang, Jianzhi
2009-01-01
One of the few commonly believed principles of molecular evolution is that functionally more important genes (or DNA sequences) evolve more slowly than less important ones. This principle is widely used by molecular biologists in daily practice. However, recent genomic analysis of a diverse array of organisms found only weak, negative correlations between the evolutionary rate of a gene and its functional importance, typically measured under a single benign lab condition. A frequently suggested cause of the above finding is that gene importance determined in the lab differs from that in an organism's natural environment. Here, we test this hypothesis in yeast using gene importance values experimentally determined in 418 lab conditions or computationally predicted for 10,000 nutritional conditions. In no single condition or combination of conditions did we find a much stronger negative correlation, which is explainable by our subsequent finding that always-essential (enzyme) genes do not evolve significantly more slowly than sometimes-essential or always-nonessential ones. Furthermore, we verified that functional density, approximated by the fraction of amino acid sites within protein domains, is uncorrelated with gene importance. Thus, neither the lab-nature mismatch nor a potentially biased among-gene distribution of functional density explains the observed weakness of the correlation between gene importance and evolutionary rate. We conclude that the weakness is factual, rather than artifactual. In addition to being weakened by population genetic reasons, the correlation is likely to have been further weakened by the presence of multiple nontrivial rate determinants that are independent from gene importance. These findings notwithstanding, we show that the principle of slower evolution of more important genes does have some predictive power when genes with vastly different evolutionary rates are compared, explaining why the principle can be practically useful despite the weakness of the correlation.
Electrostatic design of protein-protein association rates.
Schreiber, Gideon; Shaul, Yossi; Gottschalk, Kay E
2006-01-01
De novo design and redesign of proteins and protein complexes have made promising progress in recent years. Here, we give an overview of how to use available computer-based tools to design proteins to bind faster and tighter to their protein-complex partner by electrostatic optimization between the two proteins. Electrostatic optimization is possible because of the simple relation between the Debye-Huckel energy of interaction between a pair of proteins and their rate of association. This can be used for rapid, structure-based calculations of the electrostatic attraction between the two proteins in the complex. Using these principles, we developed two computer programs that predict the change in k(on), and as such the affinity, on introducing charged mutations. The two programs have a web interface that is available at
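The "simple relation" alluded to is commonly written in roughly the following form (our notation and a commonly used presentation, not a verbatim quotation from this paper):

\ln k_{\mathrm{on}} \approx \ln k_{\mathrm{on}}^{0} \;-\; \frac{U_{\mathrm{DH}}}{k_{B}T}\left(\frac{1}{1+\kappa a}\right)

where U_DH is the Debye-Hückel electrostatic interaction energy of the pair, k_on^0 is the basal association rate in the absence of net electrostatic attraction, κ is the inverse Debye screening length, and a is a characteristic approach distance. Optimizing charged residues to make U_DH more favourable therefore increases the predicted k_on, which is the principle the two programs exploit.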
Integrating Computer Concepts into Principles of Accounting.
ERIC Educational Resources Information Center
Beck, Henry J.; Parrish, Roy James, Jr.
A package of instructional materials for an undergraduate principles of accounting course at Danville Community College was developed based upon the following assumptions: (1) the principles of accounting student does not need to be able to write computer programs; (2) computerized accounting concepts should be presented in this course; (3)…
Nontrivial Quantum Effects in Biology: A Skeptical Physicists' View
NASA Astrophysics Data System (ADS)
Wiseman, Howard; Eisert, Jens
The following sections are included: * Introduction * A Quantum Life Principle * A quantum chemistry principle? * The anthropic principle * Quantum Computing in the Brain * Nature did everything first? * Decoherence as the make or break issue * Quantum error correction * Uselessness of quantum algorithms for organisms * Quantum Computing in Genetics * Quantum search * Teleological aspects and the fast-track to life * Quantum Consciousness * Computability and free will * Time scales * Quantum Free Will * Predictability and free will * Determinism and free will * Acknowledgements * References
Physical Principle for Generation of Randomness
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
PubMed on Tap: discovering design principles for online information delivery to handheld computers.
Hauser, Susan E; Demner-Fushman, Dina; Ford, Glenn; Thoma, George R
2004-01-01
Online access to biomedical information from handheld computers will be a valuable adjunct to other popular medical applications if information delivery systems are designed with handheld computers in mind. The goal of this project is to discover design principles to facilitate practitioners' access to online medical information at the point-of-care. A prototype system was developed to serve as a testbed for this research. Using the testbed, an initial evaluation has yielded several user interface design principles. Continued research is expected to discover additional user interface design principles as well as guidelines for results organization and system performance
Designing User-Computer Dialogues: Basic Principles and Guidelines.
ERIC Educational Resources Information Center
Harrell, Thomas H.
This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…
Radiation-driven winds of hot stars. V - Wind models for central stars of planetary nebulae
NASA Technical Reports Server (NTRS)
Pauldrach, A.; Puls, J.; Kudritzki, R. P.; Mendez, R. H.; Heap, S. R.
1988-01-01
Wind models using the recent improvements of radiation driven wind theory by Pauldrach et al. (1986) and Pauldrach (1987) are presented for central stars of planetary nebulae. The models are computed along evolutionary tracks evolving with different stellar mass from the Asymptotic Giant Branch. We show that the calculated terminal wind velocities are in agreement with the observations and allow in principle an independent determination of stellar masses and radii. The computed mass-loss rates are in qualitative agreement with the occurrence of spectroscopic stellar wind features as a function of stellar effective temperature and gravity.
Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4
2017-12-20
by first-principle methods in the software package ACES by using large parallel computers, growing to the exascale. Subject terms: Computer... modeling, excited states, optical properties, structure, stability, activation barriers, first-principle methods, parallel computing. Progress with new density functional methods
NASA Astrophysics Data System (ADS)
Marinos, Alexandros; Briscoe, Gerard
Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
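The structuring principle the abstract describes for FORTRAN carries over directly to modern array libraries. The NumPy comparison below (our example, not from the report) shows the same computation written as a scalar loop and as a single vectorizable array expression.

# Scalar loop vs. array-oriented ("vectorizable") form of the same computation.
import numpy as np

n = 100_000
a, b = np.random.rand(n), np.random.rand(n)

# Scalar-style loop: one element at a time (slow, hard for hardware to vectorize)
c_loop = np.empty(n)
for i in range(n):
    c_loop[i] = 2.0 * a[i] + b[i]

# Array-style form: one operation over whole arrays
c_vec = 2.0 * a + b
assert np.allclose(c_loop, c_vec)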
Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.
Eddy, Sean R
2014-01-01
Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.
Principles of proteome allocation are revealed using proteomic data and genome-scale models
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; Ebrahim, Ali; Saunders, Michael A.; Palsson, Bernhard O.
2016-01-01
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. This flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models. PMID:27857205
Principles of proteome allocation are revealed using proteomic data and genome-scale models
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; ...
2016-11-18
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.
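A very coarse-grained sketch of what a sector allocation constraint looks like when added to a constraint-based optimization: fluxes are chosen to maximize a growth proxy while the proteome mass demanded by each sector stays within proteomics-derived bounds. The sector names, cost coefficients, and budgets below are invented for illustration and are far simpler than an actual ME model.

# Tiny linear program with proteome-sector budget constraints.
from scipy.optimize import linprog

# variables: v = [v_metabolic, v_ribosomal]  (arbitrary flux units)
growth = [-1.0, -1.0]                     # maximize v_m + v_r (linprog minimizes)
sector_cost = [[0.02, 0.00],              # proteome mass fraction per unit "metabolism" flux
               [0.00, 0.05]]              # proteome mass fraction per unit "translation" flux
sector_budget = [0.45, 0.30]              # proteomics-derived sector bounds
res = linprog(growth, A_ub=sector_cost, b_ub=sector_budget, bounds=[(0, None)] * 2)
print(res.x, -res.fun)                    # optimal fluxes and the growth proxy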
Energy and time determine scaling in biological and computer designs
Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-01-01
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524
Energy and time determine scaling in biological and computer designs.
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-08-19
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
ERIC Educational Resources Information Center
Jameson, A. Keith
Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this unit on Le Chatelier's principle includes objectives, prerequisites, pretest, instructions for executing the computer program, and…
CADD medicine: design is the potion that can cure my disease
NASA Astrophysics Data System (ADS)
Manas, Eric S.; Green, Darren V. S.
2017-03-01
The acronym "CADD" is often used interchangeably to refer to "Computer Aided Drug Discovery" and "Computer Aided Drug Design". While the former definition implies the use of a computer to impact one or more aspects of discovering a drug, in this paper we contend that computational chemists are most effective when they enable teams to apply true design principles as they strive to create medicines to treat human disease. We argue that teams must bring to bear multiple sub-disciplines of computational chemistry in an integrated manner in order to utilize these principles to address the multi-objective nature of the drug discovery problem. Impact, resourcing principles, and future directions for the field are also discussed, including areas of future opportunity as well as a cautionary note about hype and hubris.
Towards a unified theory for morphomechanics
Taber, Larry A.
2009-01-01
Mechanical forces are closely involved in the construction of an embryo. Experiments have suggested that mechanical feedback plays a role in regulating these forces, but the nature of this feedback is poorly understood. Here, we propose a general principle for the mechanics of morphogenesis, as governed by a pair of evolution equations based on feedback from tissue stress. In one equation, the rate of growth (or contraction) depends on the difference between the current tissue stress and a target (homeostatic) stress. In the other equation, the target stress changes at a rate that depends on the same stress difference. The parameters in these morphomechanical laws are assumed to depend on stress rate. Computational models are used to illustrate how these equations can capture a relatively wide range of behaviours observed in developing embryos, as well as show the limitations of this theory. Specific applications include growth of pressure vessels (e.g. the heart, arteries and brain), wound healing and sea urchin gastrulation. Understanding the fundamental principles of tissue construction can help engineers design new strategies for creating replacement tissues and organs in vitro. PMID:19657011
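The pair of evolution equations described in words above can be written schematically as follows (our notation; the paper's actual laws are tensorial and include stress-rate-dependent coefficients):

\dot{g} = a\,(\sigma - \sigma^{*}), \qquad \dot{\sigma}^{*} = b\,(\sigma - \sigma^{*})

where g is the growth (or contraction) variable, σ the current tissue stress, σ* the target (homeostatic) stress, and a, b feedback parameters whose signs and magnitudes set whether the tissue grows toward or away from the target state.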
Developing an Asteroid Rotational Theory
NASA Astrophysics Data System (ADS)
Geis, Gena; Williams, Miguel; Linder, Tyler; Pakey, Donald
2018-01-01
The goal of this project is to develop a theoretical asteroid rotational theory from first principles. Starting at first principles provides a firm foundation for computer simulations which can be used to analyze multiple variables at once such as size, rotation period, tensile strength, and density. The initial theory will be presented along with early models of applying the theory to the asteroid population. Early results confirm previous work by Pravec et al. (2002) that show the majority of the asteroids larger than 200m have negligible tensile strength and have spin rates close to their critical breakup point. Additionally, results show that an object with zero tensile strength has a maximum rotational rate determined by the object’s density, not size. Therefore, an iron asteroid with a density of 8000 kg/m^3 would have a minimum spin period of 1.16h if the only forces were gravitational and centrifugal. The short-term goal is to include material forces in the simulations to determine what tensile strength will allow the high spin rates of asteroids smaller than 150m.
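The density-only spin limit quoted above follows from balancing gravitational and centrifugal acceleration at the equator of a strengthless sphere:

\frac{GM}{R^{2}} = \omega^{2}R, \quad M = \tfrac{4}{3}\pi R^{3}\rho \;\Rightarrow\; P_{\min} = \frac{2\pi}{\omega} = \sqrt{\frac{3\pi}{G\rho}}

For ρ = 8000 kg/m³ this gives P_min = sqrt(3π / (6.674×10⁻¹¹ × 8000)) ≈ 4.2×10³ s ≈ 1.17 h, consistent with the roughly 1.16 h quoted above, and independent of the object's size.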
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state, we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network, metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.
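As a rough illustration of the Boltzmann-weighted mode usage invoked above, the toy sketch below assigns hypothetical entropy production rates to a handful of elementary modes, weights them with an assumed Boltzmann-type factor, and reports the resulting network entropy production and the Shannon entropy of mode usage. The weighting form and all numbers are illustrative assumptions, not quantities taken from the study.

```python
# Toy sketch of Boltzmann-weighted elementary-mode usage (all values illustrative).
# p_i ∝ exp(sigma_i / s) is an assumed Boltzmann-type weighting over hypothetical
# per-mode entropy production rates sigma_i; s sets the scale of the distribution.
import numpy as np

sigma = np.array([0.5, 1.0, 1.5, 2.0, 3.0])   # hypothetical per-mode entropy production rates
s = 1.0
weights = np.exp(sigma / s)
p = weights / weights.sum()                   # usage probabilities of the elementary modes

network_entropy_production = float(np.dot(p, sigma))   # expected entropy production rate
shannon_entropy = float(-np.sum(p * np.log(p)))        # Shannon entropy of mode usage
print(p, network_entropy_production, shannon_entropy)
```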
Why Is the Correlation between Gene Importance and Gene Evolutionary Rate So Weak?
Wang, Zhi; Zhang, Jianzhi
2009-01-01
One of the few commonly believed principles of molecular evolution is that functionally more important genes (or DNA sequences) evolve more slowly than less important ones. This principle is widely used by molecular biologists in daily practice. However, recent genomic analysis of a diverse array of organisms found only weak, negative correlations between the evolutionary rate of a gene and its functional importance, typically measured under a single benign lab condition. A frequently suggested cause of the above finding is that gene importance determined in the lab differs from that in an organism's natural environment. Here, we test this hypothesis in yeast using gene importance values experimentally determined in 418 lab conditions or computationally predicted for 10,000 nutritional conditions. In no single condition or combination of conditions did we find a much stronger negative correlation, which is explainable by our subsequent finding that always-essential (enzyme) genes do not evolve significantly more slowly than sometimes-essential or always-nonessential ones. Furthermore, we verified that functional density, approximated by the fraction of amino acid sites within protein domains, is uncorrelated with gene importance. Thus, neither the lab-nature mismatch nor a potentially biased among-gene distribution of functional density explains the observed weakness of the correlation between gene importance and evolutionary rate. We conclude that the weakness is factual, rather than artifactual. In addition to being weakened by population genetic reasons, the correlation is likely to have been further weakened by the presence of multiple nontrivial rate determinants that are independent from gene importance. These findings notwithstanding, we show that the principle of slower evolution of more important genes does have some predictive power when genes with vastly different evolutionary rates are compared, explaining why the principle can be practically useful despite the weakness of the correlation. PMID:19132081
Microwave/Sonic Apparatus Measures Flow and Density in Pipe
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Ngo, Phong; Carl, J. R.; Byerly, Kent A.
2004-01-01
An apparatus for measuring the rate of flow and the mass density of a liquid or slurry includes a special section of pipe instrumented with microwave and sonic sensors, and a computer that processes digitized readings taken by the sensors. The apparatus was conceived specifically for monitoring a flow of oil-well-drilling mud, but the basic principles of its design and operation are also applicable to monitoring flows of other liquids and slurries.
Quantum Gauss-Jordan Elimination and Simulation of Accounting Principles on Quantum Computers
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Van Minh, Nguyen
2017-06-01
The paper is devoted to a version of Quantum Gauss-Jordan Elimination and its applications. In the first part, we construct the Quantum Gauss-Jordan Elimination (QGJE) algorithm and estimate the complexity of computing the Reduced Row Echelon Form (RREF) of N × N matrices. The main result asserts that the QGJE computation time is of order 2^(N/2). The second part is devoted to a new idea: the simulation of accounting by quantum computing. We first express the actual accounting principles in purely mathematical language and then simulate them on quantum computers. We show that all accounting actions are exhausted by the described basic actions. The main problems of accounting reduce to systems of linear equations in the Leontief economic model. In this simulation, we use the constructed Quantum Gauss-Jordan Elimination to solve these problems, and the quantum computation is faster than its classical counterpart by a square-root factor.
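The quantum algorithm itself is not reproduced here; for orientation, the classical Gauss-Jordan elimination that produces the RREF the abstract refers to can be sketched as follows (a plain NumPy baseline, not the QGJE construction).

```python
# Classical Gauss-Jordan elimination to reduced row echelon form (RREF), shown only
# as the familiar classical counterpart of the QGJE algorithm discussed above.
import numpy as np

def rref(A, tol=1e-12):
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        pivot = r + int(np.argmax(np.abs(A[r:, c])))   # partial pivoting
        if abs(A[pivot, c]) < tol:
            continue                                    # no usable pivot in this column
        A[[r, pivot]] = A[[pivot, r]]
        A[r] /= A[r, c]                                 # normalize the pivot row
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]                  # eliminate the column elsewhere
        r += 1
    return A

print(rref(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])))
```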
Computational principles of working memory in sentence comprehension.
Lewis, Richard L; Vasishth, Shravan; Van Dyke, Julie A
2006-10-01
Understanding a sentence requires a working memory of the partial products of comprehension, so that linguistic relations between temporally distal parts of the sentence can be rapidly computed. We describe an emerging theoretical framework for this working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item (but not order) information subject to interference from similar items, and activation decay (forgetting over time). A computational model embodying these principles provides an explanation of the functional capacities and severe limitations of human processing, as well as accounts of reading times. The broad implication is that the detailed nature of cross-linguistic sentence processing emerges from the interaction of general principles of human memory with the specialized task of language comprehension.
Design principles for radiation-resistant solid solutions
NASA Astrophysics Data System (ADS)
Schuler, Thomas; Trinkle, Dallas R.; Bellon, Pascal; Averback, Robert
2017-05-01
We develop a multiscale approach to quantify the increase in the recombined fraction of point defects under irradiation resulting from dilute solute additions to a solid solution. This methodology provides design principles for radiation-resistant materials. Using an existing database of solute diffusivities, we identify Sb as one of the most efficient solutes for this purpose in a Cu matrix. We perform density-functional-theory calculations to obtain binding and migration energies of Sb atoms, vacancies, and self-interstitial atoms in various configurations. The computed data inform the self-consistent mean-field formalism used to calculate transport coefficients, allowing us to make quantitative predictions of the recombined fraction of point defects as a function of temperature and irradiation rate using homogeneous rate equations. We identify two different mechanisms by which solutes increase the recombined fraction of point defects: at low temperature, solutes slow down vacancies (a kinetic effect), while at high temperature, solutes stabilize vacancies in the solid solution (a thermodynamic effect). Extensions to other metallic matrices and solutes are discussed.
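The homogeneous rate equations mentioned above can be illustrated with a minimal vacancy-interstitial model: defects are generated at a rate K0, recombine mutually, and are lost to fixed sinks. The coefficients below are placeholders chosen only to show how a recombined fraction would be extracted; they are not values from the study.

```python
# Minimal homogeneous rate-equation sketch for point defects under irradiation:
#   dCv/dt = K0 - Kiv*Ci*Cv - Kvs*Cv
#   dCi/dt = K0 - Kiv*Ci*Cv - Kis*Ci
# All coefficients are illustrative placeholders, not values from the study.
import numpy as np
from scipy.integrate import solve_ivp

K0 = 1e-6               # defect generation rate
Kiv = 1e2               # vacancy-interstitial recombination coefficient
Kvs, Kis = 1e-3, 1e-1   # loss rates to fixed sinks

def rhs(t, y):
    cv, ci = y
    recombination = Kiv * ci * cv
    return [K0 - recombination - Kvs * cv, K0 - recombination - Kis * ci]

sol = solve_ivp(rhs, [0.0, 1e5], [0.0, 0.0], method="LSODA", rtol=1e-8, atol=1e-12)
cv, ci = sol.y[:, -1]
recombined_fraction = Kiv * ci * cv / K0   # fraction of generated defects that recombine
print(cv, ci, recombined_fraction)
```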
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
NASA Astrophysics Data System (ADS)
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed that enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are imposed as optimization constraints. The optimal enzyme rate constants computed in this way also yield the most uniform probability distribution over enzyme states in the steady state, corresponding to maximal Shannon information entropy. Stability analysis further demonstrates that maximal density of entropy production in this enzyme reaction requires a flexible enzyme structure that enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.
Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté
2015-12-24
Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits.
The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters.
Rempel, David; Camilleri, Matt J; Lee, David L
2015-10-01
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.
NASA Astrophysics Data System (ADS)
Furuya, Haruhisa; Hiratsuka, Mitsuyoshi
This article overviews the historical transition of legal protection of computer software contracts in the United States and presents how it should function under the Uniform Commercial Code and its amended Article 2B, the Uniform Computer Information Transactions Act, and the recently approved “Principles of the Law of Software Contracts”.
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Computational Fluid Dynamics Modeling of Nickel Hydrogen Batteries
NASA Technical Reports Server (NTRS)
Cullion, R.; Gu, W. B.; Wang, C. Y.; Timmerman, P.
2000-01-01
An electrochemical Ni-H2 battery model has been expanded to include thermal effects. A thermal energy conservation equation was derived from first principles. An electrochemical and thermal coupled model was created by the addition of this equation to an existing multiphase, electrochemical model. Charging at various rates was investigated and the results validated against experimental data. Reaction currents, pressure changes, temperature profiles, and concentration variations within the cell are predicted numerically and compared with available data and theory.
A computational model of selection by consequences: log survivor plots.
Kulubekova, Saule; McDowell, J J
2008-06-01
[McDowell, J.J, 2004. A computational model of selection by consequences. J. Exp. Anal. Behav. 81, 297-317] instantiated the principle of selection by consequences in a virtual organism with an evolving repertoire of possible behaviors undergoing selection, reproduction, and mutation over many generations. The process is based on the computational approach, which is non-deterministic and rules-based. The model proposes a causal account for operant behavior. McDowell found that the virtual organism consistently showed a hyperbolic relationship between response and reinforcement rates according to the quantitative law of effect. To continue validation of the computational model, the present study examined its behavior on the molecular level by comparing the virtual organism's IRT distributions in the form of log survivor plots to findings from live organisms. Log survivor plots did not show the "broken-stick" feature indicative of distinct bouts and pauses in responding, although the bend in slope of the plots became more defined at low reinforcement rates. The shape of the virtual organism's log survivor plots was more consistent with the data on reinforced responding in pigeons. These results suggest that log survivor plot patterns of the virtual organism were generally consistent with the findings from live organisms providing further support for the computational model of selection by consequences as a viable account of operant behavior.
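For readers unfamiliar with the analysis, a log survivor plot is simply the proportion of inter-response times (IRTs) longer than t, plotted on a logarithmic axis. The sketch below builds one from synthetic IRTs drawn from a two-component mixture that mimics bouts and pauses; the data are illustrative and are not output from the computational model.

```python
# Sketch of a log survivor plot built from inter-response times (IRTs). The synthetic
# IRTs mix short within-bout intervals and long pauses purely for illustration; a real
# analysis would use IRTs recorded from the model or from live organisms.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
irts = np.concatenate([rng.exponential(0.5, 800),    # short within-bout IRTs (s)
                       rng.exponential(8.0, 200)])   # long between-bout pauses (s)

t = np.sort(irts)
survivor = 1.0 - np.arange(1, t.size + 1) / t.size   # proportion of IRTs longer than t

plt.semilogy(t[:-1], survivor[:-1])                  # drop the final point where survivor = 0
plt.xlabel("inter-response time (s)")
plt.ylabel("proportion of IRTs > t")
plt.title("Log survivor plot: a 'broken stick' indicates bouts and pauses")
plt.show()
```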
NASA Astrophysics Data System (ADS)
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
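The energy-minimization idea can be illustrated in its simplest Newtonian special case: splitting a fixed total flow between two parallel tubes by minimizing total viscous dissipation recovers the Hagen-Poiseuille flow split obtained by equating pressure drops. The sketch below uses arbitrary geometry and viscosity and a generic optimizer; it is not the paper's algorithm suite or its non-Newtonian fluid models.

```python
# Newtonian special case of the energy-minimization idea: split a fixed total flow
# between two parallel tubes by minimizing total viscous dissipation, and compare
# with the analytical Hagen-Poiseuille split (equal pressure drops). Geometry and
# viscosity are arbitrary placeholders.
import numpy as np
from scipy.optimize import minimize

mu = 1e-3                              # viscosity (Pa.s)
L = np.array([1.0, 1.0])               # tube lengths (m)
R = np.array([1e-3, 2e-3])             # tube radii (m)
Rhyd = 8 * mu * L / (np.pi * R**4)     # hydraulic resistances, dP = Rhyd * Q
Q_total = 1e-6                         # total volumetric flow rate (m^3/s)

def dissipation(x):
    q = np.array([x[0], Q_total - x[0]])   # mass conservation built into the split
    return float(np.sum(Rhyd * q**2))      # total dissipation = sum of dP_i * Q_i

res = minimize(dissipation, x0=[Q_total / 2], bounds=[(0.0, Q_total)])
q1_energy = res.x[0]
q1_analytic = Q_total * (1 / Rhyd[0]) / (1 / Rhyd[0] + 1 / Rhyd[1])
print(q1_energy, q1_analytic)              # the two splits should agree closely
```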
1998-08-07
cognitive flexibility theory and generative learning theory, which focus primarily on the individual student's cognitive development, collaborative... develop "Handling Transfusion Hazards," a computer program based upon cognitive flexibility theory principles. The Program: Handling Transfusion Hazards... computer program was developed according to cognitive flexibility theory principles. A generative version was then developed by embedding…
A first-principles analytical theory for 2D magnetic reconnection in electron and Hall MHD.
NASA Astrophysics Data System (ADS)
Zocco, A.; Simakov, A. N.; Chacon, L.
2007-11-01
While the relevance of two-fluid effects in fast magnetic reconnection is well known (J. Birn et al., J. Geophys. Res., 106 (A3), pp. 3715-3719, 2001), a first-principles theory, akin to Sweet and Parker's in resistive MHD, has been elusive. Here, we present such a first-principles steady-state theory for electron MHD (L. Chacón, A. N. Simakov, A. Zocco, Phys. Rev. Lett., submitted) and its extension to Hall MHD (A. N. Simakov, L. Chacón, in preparation). The theory discretizes the extended MHD equations at the reconnection site, leading to a set of time-dependent ODEs. Their steady-state analysis provides predictions for the scaling of relevant quantities with the dissipation coefficients (e.g., resistivity and hyper-resistivity) and other relevant parameters. In particular, we will show that EMHD admits both elongated and open X-point configurations of the reconnection region, and that the reconnection rate Ez can be shown not to scale explicitly with the dissipation parameters. This analytic result confirms earlier computational work on the possibility of fast (dissipation-independent) magnetic reconnection in EMHD. We have extended the EMHD results to Hall MHD, and have found a general scaling law for the reconnection rate (and associated length scales) that bridges the gap between resistive and EMHD.
A Monte Carlo Simulation of Brownian Motion in the Freshman Laboratory
ERIC Educational Resources Information Center
Anger, C. D.; Prescott, J. R.
1970-01-01
Describes a "dry-lab" experiment for the college freshman laboratory, in which the essential features of Brownian motion are illustrated from basic principles using the Monte Carlo technique. Calculations are carried out by a computational scheme based on a computer language. Bibliography. (LC)
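A present-day version of such a dry-lab exercise might look like the following random-walk sketch, which checks the hallmark of Brownian motion that the mean squared displacement grows linearly with the number of steps; the step statistics are illustrative.

```python
# A minimal Monte Carlo random walk illustrating Brownian motion: the mean displacement
# stays near zero while the mean squared displacement grows linearly with step number.
import numpy as np

rng = np.random.default_rng(42)
n_particles, n_steps = 1000, 500
steps = rng.choice([-1, 1], size=(n_particles, n_steps))   # unit steps left or right
positions = steps.cumsum(axis=1)

msd = (positions**2).mean(axis=0)     # mean squared displacement vs. step number
print(positions[:, -1].mean())        # close to 0
print(msd[-1] / n_steps)              # close to 1, i.e. MSD grows in proportion to steps
```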
Using Multimedia for E-Learning
ERIC Educational Resources Information Center
Mayer, R. E.
2017-01-01
This paper reviews 12 research-based principles for how to design computer-based multimedia instructional materials to promote academic learning, starting with the multimedia principle (yielding a median effect size of d = 1.67 based on five experimental comparisons), which holds that people learn better from computer-based instruction containing…
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
Murphy, C L; McLaws, M
2000-04-01
To adopt an evidence-based approach, professionals must be able to access, identify, interpret, and critically appraise best evidence. Critical appraisal requires essential skills, such as computer literacy and an understanding of research principles. These skills also are required for professionals to contribute to evidence. In 1996, members of the Australian Infection Control Association were surveyed to establish a profile including the extent to which they were reading infection control publications, using specific documents for policy and guideline development, developing and undertaking research, publishing research, and using computers. The relationships between demographics, computer use, and research activity were examined. The response rate was 63.4% (630/993). The study group comprised mostly women (96.1%), and most (66.4%) were older than 40 years of age. Median infection control experience was 4 years (mean, 5.4 years; range, <12 months to 35 years). When developing guidelines and policies (92.7%; 584/630), infection control professionals reviewed State Health Department Infection Control Guidelines and Regulations. Research relating to infection control was undertaken by 21.5% (135/628) of the sample, and 27.6% (37/134) of this group published their research findings. Of the respondents (51.1%; 318/622) who used a computer to undertake infection control tasks, the majority (89.0%) used a personal computer for word processing. Regardless of infection control experience, Australian infection control professionals must be adequately prepared to contribute to, access, appraise, and where appropriate, apply best evidence to their practice. We suggest that computer literacy, an understanding of research principles, and familiarity with infection control literature are three essential skills that infection control professionals must possess and regularly exercise.
Quasispecies theory for evolution of modularity.
Park, Jeong-Man; Niestemski, Liang Ren; Deem, Michael W
2015-01-01
Biological systems are modular, and this modularity evolves over time and in different environments. A number of observations have been made of increased modularity in biological systems under increased environmental pressure. We here develop a quasispecies theory for the dynamics of modularity in populations of these systems. We show how the steady-state fitness in a randomly changing environment can be computed. We derive a fluctuation dissipation relation for the rate of change of modularity and use it to derive a relationship between rate of environmental changes and rate of growth of modularity. We also find a principle of least action for the evolved modularity at steady state. Finally, we compare our predictions to simulations of protein evolution and find them to be consistent.
1993-03-25
application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic has been incorporated... programming and Human-Computer Interface (HCI) design. Knowledge gained from each is applied to the design of a Form-based interface for database data
NASA Technical Reports Server (NTRS)
Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.
1987-01-01
This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.
ShunLi Shang; Louis G. Hector Jr.; Paul Saxe; Zi-Kui Liu; Robert J. Moon; Pablo D. Zavattieri
2014-01-01
Anisotropy and temperature dependence of structural, thermodynamic and elastic properties of crystalline cellulose Iβ were computed with first-principles density functional theory (DFT) and a semi-empirical correction for van der Waals interactions. Specifically, we report the computed temperature variation (up to 500...
ERIC Educational Resources Information Center
Orey, Michael A.; Nelson, Wayne A.
Arguing that the evolution of intelligent tutoring systems better reflects the recent theoretical developments of cognitive science than traditional computer-based instruction (CBI), this paper describes a general model for an intelligent tutoring system and suggests ways to improve CBI using design principles derived from research in cognitive…
An Undergraduate Course on Operating Systems Principles.
ERIC Educational Resources Information Center
National Academy of Engineering, Washington, DC. Commission on Education.
This report is from Task Force VIII of the COSINE Committee of the Commission on Education of the National Academy of Engineering. The task force was established to formulate subject matter for an elective undergraduate subject on computer operating systems principles for students whose major interest is in the engineering of computer systems and…
"Citizen Jane": Rethinking Design Principles for Closing the Gender Gap in Computing.
ERIC Educational Resources Information Center
Raphael, Chad
This paper identifies three rationales in the relevant literature for closing the gender gap in computing: economic, cultural and political. Each rationale implies a different set of indicators of present inequalities, disparate goals for creating equality, and distinct principles for software and web site design that aims to help girls overcome…
Extinction from a Rationalist Perspective
Gallistel, C. R.
2012-01-01
The merging of the computational theory of mind and evolutionary thinking leads to a kind of rationalism, in which enduring truths about the world have become implicit in the computations that enable the brain to cope with the experienced world. The dead reckoning computation, for example, is implemented within the brains of animals as one of the mechanisms that enables them to learn where they are (Gallistel, 1990, 1995). It integrates a velocity signal with respect to a time signal. Thus, the manner in which position and velocity relate to one another in the world is reflected in the manner in which signals representing those variables are processed in the brain. I use principles of information theory and Bayesian inference to derive from other simple principles explanations for: 1) the failure of partial reinforcement to increase reinforcements to acquisition; 2) the partial reinforcement extinction effect; 3) spontaneous recovery; 4) renewal; 5) reinstatement; 6) resurgence (aka facilitated reacquisition). Like the principle underlying dead-reckoning, these principles are grounded in analytic considerations. They are the kind of enduring truths about the world that are likely to have shaped the brain's computations. PMID:22391153
Lechner, William J; MacGlashan, James; Wray, Tyler B; Littman, Michael L
2017-01-01
Background: Computer-delivered interventions have been shown to be effective in reducing alcohol consumption in heavy drinking college students. However, these computer-delivered interventions rely on mouse, keyboard, or touchscreen responses for interactions between the users and the computer-delivered intervention. The principles of motivational interviewing suggest that in-person interventions may be effective, in part, because they encourage individuals to think through and speak aloud their motivations for changing a health behavior, which current computer-delivered interventions do not allow. Objective: The objective of this study was to take the initial steps toward development of a voice-based computer-delivered intervention that can ask open-ended questions and respond appropriately to users' verbal responses, more closely mirroring a human-delivered motivational intervention. Methods: We developed (1) a voice-based computer-delivered intervention that was run by a human controller and that allowed participants to speak their responses to scripted prompts delivered by speech generation software and (2) a text-based computer-delivered intervention that relied on the mouse, keyboard, and computer screen for all interactions. We randomized 60 heavy drinking college students to interact with the voice-based computer-delivered intervention and 30 to interact with the text-based computer-delivered intervention and compared their ratings of the systems as well as their motivation to change drinking and their drinking behavior at 1-month follow-up. Results: Participants reported that the voice-based computer-delivered intervention engaged positively with them in the session and delivered content in a manner consistent with motivational interviewing principles. At 1-month follow-up, participants in the voice-based computer-delivered intervention condition reported significant decreases in quantity, frequency, and problems associated with drinking, and increased perceived importance of changing drinking behaviors. In comparison to the text-based computer-delivered intervention condition, those assigned to voice-based computer-delivered intervention reported significantly fewer alcohol-related problems at the 1-month follow-up (incident rate ratio 0.60, 95% CI 0.44-0.83, P=.002). The conditions did not differ significantly on perceived importance of changing drinking or on measures of drinking quantity and frequency of heavy drinking. Conclusions: Results indicate that it is feasible to construct a series of open-ended questions and a bank of responses and follow-up prompts that can be used in a future fully automated voice-based computer-delivered intervention that may mirror more closely human-delivered motivational interventions to reduce drinking. Such efforts will require using advanced speech recognition capabilities and machine-learning approaches to train a program to mirror the decisions made by human controllers in the voice-based computer-delivered intervention used in this study. In addition, future studies should examine enhancements that can increase the perceived warmth and empathy of voice-based computer-delivered intervention, possibly through greater personalization, improvements in the speech generation software, and embodying the computer-delivered intervention in a physical form. PMID:28659259
1988-01-01
technique for characterizing reactive coatings... This review of research in the author's laboratory, which is set into a general context... obtained from the temperature dependence of the time to reach a specified viscosity approach the true activation energy for the chemical reactions... rate can be deduced in principle from the differences between the experimentally measured and the computed gelation and vitrification curves...
A Compressed Sensing Based Ultra-Wideband Communication System
2009-06-01
principle, most of the processing at the receiver can be moved to the transmitter, where energy consumption and computation are sufficient for many advanced... extended to continuous time signals. We use ∗ to denote the convolution process in a linear time-invariant (LTI) system. Assume that there is an analog... [Figure residue: a receiver block diagram whose recoverable labels include a UWB pulse generator, radio waves (5 GHz), filter, channel, low-rate A/D (125 MHz), and processing stages producing a sparse bit sequence, together with the sparse-recovery formulation θ̂ = arg min ‖θ‖₁ subject to y = ΦΨθ.]
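The sparse-recovery step suggested by that formulation can be sketched as a basis-pursuit linear program. The matrix names, sizes, and the identity sparsifying basis below are assumptions for illustration, not the report's actual front-end design.

```python
# Basis-pursuit sketch of the l1-minimization recovery suggested above:
# theta_hat = arg min ||theta||_1 subject to y = A @ theta, with A = Phi @ Psi.
# Sizes, the random Phi, and the identity Psi are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 32, 4                              # signal length, measurements, sparsity
Psi = np.eye(n)                                   # assume sparsity in the identity basis
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
A = Phi @ Psi

theta_true = np.zeros(n)
theta_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ theta_true                                # low-rate measurements

# LP form: minimize sum(u) with -u <= theta <= u and A @ theta = y; variables [theta, u].
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
theta_hat = res.x[:n]
print("max reconstruction error:", np.abs(theta_hat - theta_true).max())  # near zero on success
```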
Programmers, professors, and parasites: credit and co-authorship in computer science.
Solomon, Justin
2009-12-01
This article presents an in-depth analysis of past and present publishing practices in academic computer science to suggest the establishment of a more consistent publishing standard. Historical precedent for academic publishing in computer science is established through the study of anecdotes as well as statistics collected from databases of published computer science papers. After examining these facts alongside information about analogous publishing situations and standards in other scientific fields, the article concludes with a list of basic principles that should be adopted in any computer science publishing standard. These principles would contribute to the reliability and scientific nature of academic publications in computer science and would allow for more straightforward discourse in future publications.
Flowfield analysis for successive oblique shock wave-turbulent boundary layer interactions
NASA Technical Reports Server (NTRS)
Sun, C. C.; Childs, M. E.
1976-01-01
A computation procedure is described for predicting the flowfields which develop when successive interactions between oblique shock waves and a turbulent boundary layer occur. Such interactions may occur, for example, in engine inlets for supersonic aircraft. Computations are carried out for axisymmetric internal flows at M 3.82 and 2.82. The effect of boundary layer bleed is considered for the M 2.82 flow. A control volume analysis is used to predict changes in the flow field across the interactions. Two bleed flow models have been considered. A turbulent boundary layer program is used to compute changes in the boundary layer between the interactions. The results given are for flows with two shock wave interactions and for bleed at the second interaction site. In principle the method described may be extended to account for additional interactions. The predicted results are compared with measured results and are shown to be in good agreement when the bleed flow rate is low (on the order of 3% of the boundary layer mass flow), or when there is no bleed. As the bleed flow rate is increased, differences between the predicted and measured results become larger. Shortcomings of the bleed flow models at higher bleed flow rates are discussed.
[Computerization and robotics in medical practice].
Dervaderics, J
1997-10-26
The article outlines all the principles used in computing, including non-electrical and analog computers and artificial intelligence, and cites examples of each. The principles and medical utilization of virtual reality are also mentioned. Discussed are: surgical planning, image-guided surgery, robotic surgery, telepresence and telesurgery, and telemedicine implemented partially via the Internet.
Some Principles for the Human Use of Computers in Education.
ERIC Educational Resources Information Center
Dwyer, Thomas A.
Several principles for the effective use of computers in education are identified as a result of experiences with Project Solo, an experiment in education patterned on the dual-solo example of flight instruction in allowing the student to eventually exert more influence on his learning than his instructor. First, the essential social character of…
Designing Serious Game Interventions for Individuals with Autism.
Whyte, Elisabeth M; Smyth, Joshua M; Scherf, K Suzanne
2015-12-01
The design of "Serious games" that use game components (e.g., storyline, long-term goals, rewards) to create engaging learning experiences has increased in recent years. We examine the core principles of serious game design and the current use of these principles in computer-based interventions for individuals with autism. Participants who undergo these computer-based interventions often show little evidence of the ability to generalize such learning to novel, everyday social communicative interactions. This lack of generalized learning may result, in part, from the limited use of fundamental elements of serious game design that are known to maximize learning. We suggest that future computer-based interventions should consider the full range of serious game design principles that promote generalization of learning.
Berger, Robert F
2018-02-09
In the current decade, perovskite solar cell research has emerged as a remarkably active, promising, and rapidly developing field. Alongside breakthroughs in synthesis and device engineering, halide perovskite photovoltaic materials have been the subject of predictive and explanatory computational work. In this Minireview, we focus on a subset of this computation: density functional theory (DFT)-based work highlighting the ways in which the electronic structure and band gap of this class of materials can be tuned via changes in atomic structure. We distill this body of computational literature into a set of underlying design principles for the band gap engineering of these materials, and rationalize these principles from the viewpoint of band-edge orbital character. We hope that this perspective provides guidance and insight toward the rational design and continued improvement of perovskite photovoltaics. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Selection on Network Dynamics Drives Differential Rates of Protein Domain Evolution
Mannakee, Brian K.; Gutenkunst, Ryan N.
2016-01-01
The long-held principle that functionally important proteins evolve slowly has recently been challenged by studies in mice and yeast showing that the severity of a protein knockout only weakly predicts that protein’s rate of evolution. However, the relevance of these studies to evolutionary changes within proteins is unknown, because amino acid substitutions, unlike knockouts, often only slightly perturb protein activity. To quantify the phenotypic effect of small biochemical perturbations, we developed an approach to use computational systems biology models to measure the influence of individual reaction rate constants on network dynamics. We show that this dynamical influence is predictive of protein domain evolutionary rate within networks in vertebrates and yeast, even after controlling for expression level and breadth, network topology, and knockout effect. Thus, our results not only demonstrate the importance of protein domain function in determining evolutionary rate, but also the power of systems biology modeling to uncover unanticipated evolutionary forces. PMID:27380265
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Katz, Ira; Hofer, Richard R.; Goebel, Dan M.
2012-01-01
A proof-of-principle effort to demonstrate a technique by which erosion of the acceleration channel in Hall thrusters of the magnetic-layer type can be eliminated has been completed. The first principles of the technique, now known as "magnetic shielding," were derived based on the findings of numerical simulations in 2-D axisymmetric geometry. The simulations, in turn, guided the modification of an existing 6-kW laboratory Hall thruster. This magnetically shielded (MS) thruster was then built and tested. Because neither theory nor experiment alone can validate fully the first principles of the technique, the objective of the 2-yr effort was twofold: (1) to demonstrate in the laboratory that the erosion rates can be reduced by more than an order of magnitude, and (2) to demonstrate that the near-wall plasma properties can be altered according to the theoretical predictions. This paper concludes the demonstration of magnetic shielding by reporting on a wide range of comparisons between results from numerical simulations and laboratory diagnostics. Collectively, we find that the comparisons validate the theory. Near the walls of the MS thruster, theory and experiment agree: (1) the plasma potential has been sustained at values near the discharge voltage, and (2) the electron temperature has been lowered by at least 2.5-3 times compared to the unshielded (US) thruster. Also, based on carbon deposition measurements, the erosion rates at the inner and outer walls of the MS thruster are found to be lower by at least 2300 and 1875 times, respectively. Erosion was so low along these walls that the rates were below the resolution of the profilometer. Using a sputtering yield model with an energy threshold of 25 V, the simulations predict a reduction by a factor of 600 at the MS inner wall. At the outer wall, ion energies are computed to be below 25 V, in which case we set the erosion to zero in the simulations. When a 50-V threshold is used, the computed ion energies are below the threshold at both sides of the channel. Uncertainties, sensitivities, and differences between theory and experiment are also discussed.
Real time closed loop control of an Ar and Ar/O2 plasma in an ICP
NASA Astrophysics Data System (ADS)
Faulkner, R.; Soberón, F.; McCarter, A.; Gahan, D.; Karkari, S.; Milosavljevic, V.; Hayden, C.; Islyaikin, A.; Law, V. J.; Hopkins, M. B.; Keville, B.; Iordanov, P.; Doherty, S.; Ringwood, J. V.
2006-10-01
Real time closed loop control for plasma assisted semiconductor manufacturing has been the subject of academic research for over a decade. However, due to process complexity and the lack of suitable real time metrology, progress has been elusive and genuine real time, multi-input, multi-output (MIMO) control of a plasma assisted process has yet to be successfully implemented in an industrial setting. A "plasma parameter control strategy" is required to be adopted, whereby process recipes which are defined in terms of plasma properties such as critical species densities, as opposed to input variables such as rf power and gas flow rates, may be transferable between different chamber types. While PIC simulations and multidimensional fluid models have contributed considerably to the basic understanding of plasmas and the design of process equipment, such models require a large amount of processing time and are hence unsuitable for testing control algorithms. In contrast, linear dynamical empirical models, obtained through system identification techniques, are ideal in some respects for control design since their computational requirements are comparatively small and their structure facilitates the application of classical control design techniques. However, such models provide little process insight and are specific to an operating point of a particular machine. An ideal first-principles-based, control-oriented model would exhibit the simplicity and computational requirements of an empirical model and, in addition, despite sacrificing first-principles detail, capture enough of the essential physics and chemistry of the process in order to provide reasonably accurate qualitative predictions. This paper will discuss the development of such a first-principles-based, control-oriented model of a laboratory inductively coupled plasma chamber. The model consists of a global model of the chemical kinetics coupled to an analytical model of power deposition. Dynamics of actuators, including mass flow controllers and the exhaust throttle, are included, and sensor characteristics are also modelled. The application of this control-oriented model to achieve multivariable closed loop control of specific species (e.g., atomic oxygen) and ion density, using the actuators rf power, oxygen and argon flow rates, and pressure/exhaust flow rate, in an Ar/O2 ICP plasma will be presented.
NASA Astrophysics Data System (ADS)
Gabern, Frederic; Koon, Wang S.; Marsden, Jerrold E.; Ross, Shane D.
2005-11-01
The computation, starting from basic principles, of chemical reaction rates in realistic systems (with three or more degrees of freedom) has been a longstanding goal of the chemistry community. Our current work, which merges tube dynamics with Monte Carlo methods provides some key theoretical and computational tools for achieving this goal. We use basic tools of dynamical systems theory, merging the ideas of Koon et al. [W.S. Koon, M.W. Lo, J.E. Marsden, S.D. Ross, Heteroclinic connections between periodic orbits and resonance transitions in celestial mechanics, Chaos 10 (2000) 427-469.] and De Leon et al. [N. De Leon, M.A. Mehta, R.Q. Topper, Cylindrical manifolds in phase space as mediators of chemical reaction dynamics and kinetics. I. Theory, J. Chem. Phys. 94 (1991) 8310-8328.], particularly the use of invariant manifold tubes that mediate the reaction, into a tool for the computation of lifetime distributions and rates of chemical reactions and scattering phenomena, even in systems that exhibit non-statistical behavior. Previously, the main problem with the application of tube dynamics has been with the computation of volumes in phase spaces of high dimension. The present work provides a starting point for overcoming this hurdle with some new ideas and implements them numerically. Specifically, an algorithm that uses tube dynamics to provide the initial bounding box for a Monte Carlo volume determination is used. The combination of a fine scale method for determining the phase space structure (invariant manifold theory) with statistical methods for volume computations (Monte Carlo) is the main contribution of this paper. The methodology is applied here to a three degree of freedom model problem and may be useful for higher degree of freedom systems as well.
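The statistical half of that combination, hit-or-miss Monte Carlo volume estimation inside a bounding box, is easy to sketch. Here a 6-D unit ball stands in for the reactive phase-space region; in the actual method the membership test and the bounding box come from the invariant-manifold tubes, which are not reproduced here.

```python
# Hit-or-miss Monte Carlo volume estimation inside a bounding box, the statistical half
# of the tube-dynamics/Monte Carlo combination described above. A 6-D unit ball stands
# in for the reactive phase-space region; the real membership test and bounding box
# would come from the invariant-manifold tubes.
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 6, 500_000
box_low, box_high = -1.0, 1.0                      # bounding box (from tube dynamics)
box_volume = (box_high - box_low) ** dim

points = rng.uniform(box_low, box_high, size=(n_samples, dim))
inside = (points**2).sum(axis=1) <= 1.0            # membership test for the region
volume_estimate = box_volume * inside.mean()

exact = np.pi**3 / 6                               # volume of the 6-D unit ball
print(volume_estimate, exact)
```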
Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics
Coen-Cagli, Ruben; Dayan, Peter; Schwartz, Odelia
2012-01-01
Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal recognition in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches, and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays, and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lends statistical support to the theory that V1 computes visual salience. PMID:22396635
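For context, the canonical divisive normalization that the model generalizes can be written in a few lines: each unit's driven response is divided by a weighted sum of the pooled responses of its neighbours. The weights, exponent, and semisaturation constant below are illustrative choices, not the statistically derived quantities of the paper.

```python
# Generic divisive normalization, the canonical computation the model above generalizes:
# each unit's drive is divided by a weighted sum of the surrounding population's drives.
# Weights, exponent, and the semisaturation constant are illustrative choices.
import numpy as np

def divisive_normalization(drive, weights, sigma=0.1, n=2.0):
    """drive: filter responses; weights[i, j]: contribution of unit j to unit i's pool."""
    pool = weights @ (drive**n)
    return drive**n / (sigma**n + pool)

drive = np.array([0.2, 1.0, 0.8, 0.1])
weights = np.full((4, 4), 0.25)        # uniform normalization pool over four units
print(divisive_normalization(drive, weights))
```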
Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.
Brodish, D L
1998-01-01
The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.
Solanka, Lukas; van Rossum, Mark CW; Nolan, Matthew F
2015-01-01
Neural computations underlying cognitive functions require calibration of the strength of excitatory and inhibitory synaptic connections and are associated with modulation of gamma frequency oscillations in network activity. However, principles relating gamma oscillations, synaptic strength and circuit computations are unclear. We address this in attractor network models that account for grid firing and theta-nested gamma oscillations in the medial entorhinal cortex. We show that moderate intrinsic noise massively increases the range of synaptic strengths supporting gamma oscillations and grid computation. With moderate noise, variation in excitatory or inhibitory synaptic strength tunes the amplitude and frequency of gamma activity without disrupting grid firing. This beneficial role for noise results from disruption of epileptic-like network states. Thus, moderate noise promotes independent control of multiplexed firing rate- and gamma-based computational mechanisms. Our results have implications for tuning of normal circuit function and for disorders associated with changes in gamma oscillations and synaptic strength. DOI: http://dx.doi.org/10.7554/eLife.06444.001 PMID:26146940
NASA Astrophysics Data System (ADS)
Allen, John M.; Elbasiouny, Sherif M.
2018-06-01
Objective. Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. Approach. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Main results. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Significance. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.
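The effect of overlapping property ranges on recruitment order can be conveyed with a toy example: when the thresholds of two motoneuron "types" are drawn from overlapping ranges, sorting by threshold yields a recruitment sequence that mixes types rather than following a strict type-by-type order. The ranges and counts below are arbitrary and purely illustrative, not parameters from the model.

```python
# Toy illustration of recruitment order with overlapping property ranges: when the
# thresholds of two motoneuron "types" overlap, sorting by threshold yields a sequence
# that mixes types instead of a strict type-by-type (size-principle) order.
import numpy as np

rng = np.random.default_rng(3)
s_thresholds = rng.uniform(5.0, 20.0, 50)    # "smaller" units (nA), illustrative range
f_thresholds = rng.uniform(10.0, 30.0, 50)   # "larger" units, overlapping the range above

units = [("S", t) for t in s_thresholds] + [("F", t) for t in f_thresholds]
recruitment_order = sorted(units, key=lambda u: u[1])
print([label for label, _ in recruitment_order[:30]])   # S-dominated early, mixed later
```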
Materials Databases Infrastructure Constructed by First Principles Calculations: A Review
Lin, Lianshan
2015-10-13
First-principles calculations, especially those based on high-throughput density functional theory, have been widely accepted as major tools in atomic-scale materials design. Emerging supercomputers, together with powerful first-principles calculations, have accumulated hundreds of thousands of crystal and compound records. The exponential growth of computational materials information urges the development of materials databases, which must not only provide ample storage for the daily increasing data but also remain efficient in data storage, management, query, presentation and manipulation. This review covers the most cutting-edge materials databases in materials design and their hot applications, such as in fuel cells. By comparing the advantages and drawbacks of these high-throughput first-principles materials databases, an optimized computational framework can be identified to fit the needs of fuel cell applications. The further development of high-throughput DFT materials databases, which in essence accelerates materials innovation, is discussed in the summary as well.
ERIC Educational Resources Information Center
Qian, Yizhou; Hambrusch, Susanne; Yadav, Aman; Gretter, Sarah
2018-01-01
The new Advanced Placement (AP) Computer Science (CS) Principles course increases the need for quality CS teachers and thus the need for professional development (PD). This article presents the results of a 2-year study investigating how teachers teaching the AP CS Principles course for the first time used online PD material. Our results showed…
ERIC Educational Resources Information Center
Pruett, Sharon M.
2012-01-01
The objective of this study was to compare the relationships between the subtests of the Interactive Computer Interview System and the ETS "Praxis II" Principles of Learning and Teaching examination. In particular, this study compares scores on the ICIS instrument subtests to those gathered from the same classroom teachers on the…
ERIC Educational Resources Information Center
Hammonds, S. J.
1990-01-01
A technique for the numerical identification of bacteria using normalized likelihoods calculated from a probabilistic database is described, and the principles of the technique are explained. The listing of the computer program is included. Specimen results from the program, and examples of how they should be interpreted, are given. (KR)
Extinction from a rationalist perspective.
Gallistel, C R
2012-05-01
The merging of the computational theory of mind and evolutionary thinking leads to a kind of rationalism, in which enduring truths about the world have become implicit in the computations that enable the brain to cope with the experienced world. The dead reckoning computation, for example, is implemented within the brains of animals as one of the mechanisms that enables them to learn where they are (Gallistel, 1990, 1995). It integrates a velocity signal with respect to a time signal. Thus, the manner in which position and velocity relate to one another in the world is reflected in the manner in which signals representing those variables are processed in the brain. I use principles of information theory and Bayesian inference to derive from other simple principles explanations for: (1) the failure of partial reinforcement to increase reinforcements to acquisition; (2) the partial reinforcement extinction effect; (3) spontaneous recovery; (4) renewal; (5) reinstatement; (6) resurgence (aka facilitated reacquisition). Like the principle underlying dead-reckoning, these principles are grounded in analytic considerations. They are the kind of enduring truths about the world that are likely to have shaped the brain's computations. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hu, Jiangtao; Cao, Junxing; Wang, Huazhong; Wang, Xingjian; Jiang, Xudong
2017-12-01
First-arrival traveltime computation for quasi-P waves in transversely isotropic (TI) media is the key component of tomography and depth migration. It is appealing to use the fast marching method in isotropic media as it efficiently computes traveltime along an expanding wavefront. It uses the finite difference method to solve the eikonal equation. However, applying the fast marching method in anisotropic media faces challenges because the anisotropy introduces additional nonlinearity in the eikonal equation and solving this nonlinear eikonal equation with the finite difference method is challenging. To address this problem, we present a Fermat’s principle-based fast marching method to compute traveltime in two-dimensional TI media. This method is applicable in both vertical and tilted TI (VTI and TTI) media. It computes traveltime along an expanding wavefront using Fermat’s principle instead of the eikonal equation. Thus, it does not suffer from the nonlinearity of the eikonal equation in TI media. To compute traveltime using Fermat’s principle, the explicit expression of group velocity in TI media is required to describe the ray propagation. The moveout approximation is adopted to obtain the explicit expression of group velocity. Numerical examples on both VTI and TTI models show that the traveltime contour obtained by the proposed method matches well with the wavefront from the wave equation. This shows that the proposed method could be used in depth migration and tomography.
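A minimal way to see the Fermat's-principle idea is to compute first-arrival traveltimes on a grid by repeatedly taking, over an expanding front, the minimum of (neighbor traveltime + segment length / velocity). The sketch below does this for an isotropic 2-D model; the paper's TI case would replace the scalar velocity with a direction-dependent group velocity from the moveout approximation. The velocity model, grid spacing and source position are illustrative assumptions, not values from the paper.

```python
# Isotropic, Dijkstra-like sketch of Fermat's-principle traveltime computation.
import heapq
import numpy as np

def traveltime(velocity, src, h=1.0):
    """First-arrival traveltimes from `src` by expanding a wavefront: each node's
    time is the minimum over neighbors of (neighbor time + segment length / velocity),
    i.e. Fermat's principle applied on the graph of grid nodes."""
    nz, nx = velocity.shape
    t = np.full((nz, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    # 8-connected stencil: axial and diagonal ray segments
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1),
               (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while heap:
        t0, (i, j) = heapq.heappop(heap)
        if t0 > t[i, j]:
            continue
        for di, dj in offsets:
            ni, nj = i + di, j + dj
            if 0 <= ni < nz and 0 <= nj < nx:
                seg = h * np.hypot(di, dj)
                # slowness averaged over the two endpoints of the segment
                t_new = t0 + seg * 0.5 * (1.0 / velocity[i, j] + 1.0 / velocity[ni, nj])
                if t_new < t[ni, nj]:
                    t[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, (ni, nj)))
    return t

# usage: a two-layer velocity model with a surface source (assumed values)
v = np.full((101, 201), 2000.0)
v[50:, :] = 3500.0
times = traveltime(v, (0, 100), h=10.0)
print(times[-1, 100])  # traveltime to the bottom of the model beneath the source
```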
Diller, David J; Swanson, Jon; Bayden, Alexander S; Jarosinski, Mark; Audie, Joseph
2015-01-01
Peptides provide promising templates for developing drugs to occupy a middle space between small molecules and antibodies and for targeting 'undruggable' intracellular protein-protein interactions. Importantly, rational or in cerebro design, especially when coupled with validated in silico tools, can be used to efficiently explore chemical space and identify islands of 'drug-like' peptides to satisfy diverse drug discovery program objectives. Here, we consider the underlying principles of and recent advances in rational, computer-enabled peptide drug design. In particular, we consider the impact of basic physicochemical properties, potency and ADME/Tox opportunities and challenges, and recently developed computational tools for enabling rational peptide drug design. Key principles and practices are spotlighted by recent case studies. We close with a hypothetical future case study.
Design and Properties Prediction of AMCO3F by First-Principles Calculations.
Tian, Meng; Gao, Yurui; Ouyang, Chuying; Wang, Zhaoxiang; Chen, Liquan
2017-04-19
Computer simulation accelerates the rate of identification and application of new materials. To search for new materials to meet the increasing demands of secondary batteries with higher energy density, the properties of some transition-metal fluorocarbonates ([CO3F]3-) were simulated in this work as cathode materials for Li- and Na-ion batteries based on first-principles calculations. These materials were designed by substituting the K+ ions in KCuCO3F with Li+ or Na+ ions and the Cu2+ ions with transition-metal ions such as Fe2+, Co2+, Ni2+, and Mn2+ ions, respectively. The phase stability, electronic conductivity, ionic diffusion, and electrochemical potential of these materials were calculated by first-principles calculations. After taking comprehensive consideration of the kinetic and thermodynamic properties, LiCoCO3F and LiFeCO3F are believed to be promising novel cathode materials among all of the calculated AMCO3F compounds (A = Li and Na; M = Fe, Mn, Co, and Ni). These results will help the design and discovery of new materials for secondary batteries.
Yao, Min; Tu, Wenlong; Chen, Xi; Zhan, Chang-Guo
2013-01-01
It has been difficult to directly measure the spontaneous hydrolysis rate of urea and, thus, 1,1,3,3-tetramethylurea (Me4U) was used as a model to determine the “experimental” rate constant for urea hydrolysis. The use of Me4U was based on an assumption that the rate of urea hydrolysis should be 2.8 times that of Me4U hydrolysis because the rate of acetamide hydrolysis is 2.8 times that of N,N-dimethyl-acetamide hydrolysis. The present first-principles electronic-structure calculations on the competing non-enzymatic hydrolysis pathways have demonstrated that the dominant pathway is the neutral hydrolysis via the CN addition for both urea (when pH < ~11.6) and Me4U (regardless of pH), unlike the non-enzymatic hydrolysis of amides where alkaline hydrolysis is dominant. Based on the computational data, the substituent shift of free energy barrier calculated for the neutral hydrolysis is remarkably different from that for the alkaline hydrolysis, and the rate constant for the urea hydrolysis should be ~1.3×10^9-fold lower than that (4.2×10^-12 s^-1) measured for the Me4U hydrolysis. As a result, the rate enhancement and catalytic proficiency of urease should be 1.2×10^25 and 3×10^27 M^-1, respectively, suggesting that urease surpasses proteases and all other enzymes in its power to enhance the rate of reaction. All of the computational results are consistent with available experimental data for Me4U, suggesting that the computational prediction for urea is reliable. PMID:24097048
Modelling of DNA-protein recognition
NASA Technical Reports Server (NTRS)
Rein, R.; Garduno, R.; Colombano, S.; Nir, S.; Haydock, K.; Macelroy, R. D.
1980-01-01
Computer model-building procedures using stereochemical principles together with theoretical energy calculations appear to be, at this stage, the most promising route toward the elucidation of DNA-protein binding schemes and recognition principles. A review of models and bonding principles is conducted and approaches to modeling are considered, taking into account possible di-hydrogen-bonding schemes between a peptide and a base (or a base pair) of a double-stranded nucleic acid in the major groove, aspects of computer graphic modeling, and a search for isogeometric helices. The energetics of recognition complexes is discussed and several models for peptide DNA recognition are presented.
Kahler, Christopher W; Lechner, William J; MacGlashan, James; Wray, Tyler B; Littman, Michael L
2017-06-28
Computer-delivered interventions have been shown to be effective in reducing alcohol consumption in heavy drinking college students. However, these computer-delivered interventions rely on mouse, keyboard, or touchscreen responses for interactions between the users and the computer-delivered intervention. The principles of motivational interviewing suggest that in-person interventions may be effective, in part, because they encourage individuals to think through and speak aloud their motivations for changing a health behavior, which current computer-delivered interventions do not allow. The objective of this study was to take the initial steps toward development of a voice-based computer-delivered intervention that can ask open-ended questions and respond appropriately to users' verbal responses, more closely mirroring a human-delivered motivational intervention. We developed (1) a voice-based computer-delivered intervention that was run by a human controller and that allowed participants to speak their responses to scripted prompts delivered by speech generation software and (2) a text-based computer-delivered intervention that relied on the mouse, keyboard, and computer screen for all interactions. We randomized 60 heavy drinking college students to interact with the voice-based computer-delivered intervention and 30 to interact with the text-based computer-delivered intervention and compared their ratings of the systems as well as their motivation to change drinking and their drinking behavior at 1-month follow-up. Participants reported that the voice-based computer-delivered intervention engaged positively with them in the session and delivered content in a manner consistent with motivational interviewing principles. At 1-month follow-up, participants in the voice-based computer-delivered intervention condition reported significant decreases in quantity, frequency, and problems associated with drinking, and increased perceived importance of changing drinking behaviors. In comparison to the text-based computer-delivered intervention condition, those assigned to voice-based computer-delivered intervention reported significantly fewer alcohol-related problems at the 1-month follow-up (incident rate ratio 0.60, 95% CI 0.44-0.83, P=.002). The conditions did not differ significantly on perceived importance of changing drinking or on measures of drinking quantity and frequency of heavy drinking. Results indicate that it is feasible to construct a series of open-ended questions and a bank of responses and follow-up prompts that can be used in a future fully automated voice-based computer-delivered intervention that may mirror more closely human-delivered motivational interventions to reduce drinking. Such efforts will require using advanced speech recognition capabilities and machine-learning approaches to train a program to mirror the decisions made by human controllers in the voice-based computer-delivered intervention used in this study. In addition, future studies should examine enhancements that can increase the perceived warmth and empathy of voice-based computer-delivered intervention, possibly through greater personalization, improvements in the speech generation software, and embodying the computer-delivered intervention in a physical form. ©Christopher W Kahler, William J Lechner, James MacGlashan, Tyler B Wray, Michael L Littman. Originally published in JMIR Mental Health (http://mental.jmir.org), 28.06.2017.
Computational Foundations of Natural Intelligence
van Gerven, Marcel
2017-01-01
New developments in AI and neuroscience are revitalizing the quest to understand natural intelligence, offering insight into how to equip machines with human-like capabilities. This paper reviews some of the computational principles relevant for understanding natural intelligence and, ultimately, achieving strong AI. After reviewing basic principles, a variety of computational modeling approaches is discussed. Subsequently, I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. This paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence. PMID:29375355
A binomial stochastic kinetic approach to the Michaelis-Menten mechanism
NASA Astrophysics Data System (ADS)
Lente, Gábor
2013-05-01
This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables of calculating probability values using binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
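One plausible minimal reading of the guiding-variable idea is sketched below: the deterministic Michaelis-Menten rate equations are solved first, and the deterministic conversion fraction at each time is then used as the parameter of a binomial distribution over product-molecule counts. The rate constants, molecule numbers and the specific binomial mapping are assumptions of this illustration, not the Letter's exact formulas.

```python
# Deterministic ODE solution used as the guiding variable of a binomial distribution.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import binom

k1, km1, k2 = 1.0e6, 1.0, 10.0      # M^-1 s^-1, s^-1, s^-1 (assumed values)
E0, S0 = 1.0e-9, 1.0e-7             # initial enzyme / substrate concentrations (M)
N0 = 100                            # number of substrate molecules tracked stochastically

def mm_rhs(t, y):
    s, c, p = y                     # substrate, complex, product concentrations
    e = E0 - c                      # free enzyme from conservation
    return [-k1 * e * s + km1 * c,
            k1 * e * s - (km1 + k2) * c,
            k2 * c]

sol = solve_ivp(mm_rhs, (0.0, 50.0), [S0, 0.0, 0.0], dense_output=True)

def product_count_pmf(t):
    """P(k product molecules at time t): a binomial distribution whose single
    parameter is the deterministic conversion fraction at that time."""
    s, c, p = sol.sol(t)
    frac = p / S0                   # deterministic fraction converted (guiding variable)
    return binom.pmf(np.arange(N0 + 1), N0, frac)

pmf = product_count_pmf(10.0)
print("mean product count at t = 10 s:", np.dot(np.arange(N0 + 1), pmf))
```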
Ring-array processor distribution topology for optical interconnects
NASA Technical Reports Server (NTRS)
Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.
1992-01-01
The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.
The Transfer of Abstract Principles Governing Complex Adaptive Systems
ERIC Educational Resources Information Center
Goldstone, Robert L.; Sakamoto, Yasuaki
2003-01-01
Four experiments explored participants' understanding of the abstract principles governing computer simulations of complex adaptive systems. Experiments 1, 2, and 3 showed better transfer of abstract principles across simulations that were relatively dissimilar, and that this effect was due to participants who performed relatively poorly on the…
Tamura, Koichiro; Tao, Qiqing; Kumar, Sudhir
2018-01-01
Abstract RelTime estimates divergence times by relaxing the assumption of a strict molecular clock in a phylogeny. It shows excellent performance in estimating divergence times for both simulated and empirical molecular sequence data sets in which evolutionary rates varied extensively throughout the tree. RelTime is computationally efficient and scales well with increasing size of data sets. Until now, however, RelTime has not had a formal mathematical foundation. Here, we show that the basis of the RelTime approach is a relative rate framework (RRF) that combines comparisons of evolutionary rates in sister lineages with the principle of minimum rate change between evolutionary lineages and their respective descendants. We present analytical solutions for estimating relative lineage rates and divergence times under RRF. We also discuss the relationship of RRF with other approaches, including the Bayesian framework. We conclude that RelTime will be useful for phylogenies with branch lengths derived not only from molecular data, but also morphological and biochemical traits. PMID:29893954
Principles for the wise use of computers by children.
Straker, L; Pollock, C; Maslen, B
2009-11-01
Computer use by children at home and school is now common in many countries. Child computer exposure varies with the type of computer technology available and the child's age, gender and social group. This paper reviews the current exposure data and the evidence for positive and negative effects of computer use by children. Potential positive effects of computer use by children include enhanced cognitive development and school achievement, reduced barriers to social interaction, enhanced fine motor skills and visual processing and effective rehabilitation. Potential negative effects include threats to child safety, inappropriate content, exposure to violence, bullying, Internet 'addiction', displacement of moderate/vigorous physical activity, exposure to junk food advertising, sleep displacement, vision problems and musculoskeletal problems. The case for child specific evidence-based guidelines for wise use of computers is presented based on children using computers differently to adults, being physically, cognitively and socially different to adults, being in a state of change and development and the potential to impact on later adult risk. Progress towards child-specific guidelines is reported. Finally, a set of guideline principles is presented as the basis for more detailed guidelines on the physical, cognitive and social impact of computer use by children. The principles cover computer literacy, technology safety, child safety and privacy and appropriate social, cognitive and physical development. The majority of children in affluent communities now have substantial exposure to computers. This is likely to have significant effects on child physical, cognitive and social development. Ergonomics can provide and promote guidelines for wise use of computers by children and by doing so promote the positive effects and reduce the negative effects of computer-child, and subsequent computer-adult, interaction.
ERIC Educational Resources Information Center
Dewdney, A. K.
1989-01-01
Reviews the performance of computer programs for writing poetry and prose, including MARK V. SHANEY, MELL, POETRY GENERATOR, THUNDER THOUGHT, and ORPHEUS. Discusses the writing principles of the programs. Provides additional information on computer magnification techniques. (YP)
Computer-based teaching module design: principles derived from learning theories.
Lau, K H Vincent
2014-03-01
The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repeating), and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to take advantage of this unique teaching format as it gains increasing importance in medical education. © 2014 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Prosise, Jeff
This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…
Causal learning with local computations.
Fernbach, Philip M; Sloman, Steven A
2009-05-01
The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require relatively small amounts of data, and need not respect normative prescriptions, because inferences that are principled locally may violate those principles when combined. Over a series of 3 experiments, the authors found (a) systematic inferences from small amounts of data; (b) systematic inference of extraneous causal links; (c) influence of data presentation order on inferences; and (d) error reduction through pretraining. Without pretraining, a model based on local computations fitted data better than a Bayesian structural inference model. The data suggest that local computations serve as a heuristic for learning causal structure. Copyright 2009 APA, all rights reserved.
Wan, Quan; Galli, Giulia
2015-12-11
We present a first-principles framework to compute sum-frequency generation (SFG) vibrational spectra of semiconductors and insulators. The method is based on density functional theory and the use of maximally localized Wannier functions to compute the response to electric fields, and it includes the effect of electric field gradients at surfaces. In addition, it includes quadrupole contributions to SFG spectra, thus enabling the verification of the dipole approximation, whose validity determines the surface specificity of SFG spectroscopy. We compute the SFG spectra of ice Ih basal surfaces and identify which spectral components are affected by bulk contributions. Our results are in good agreement with experiments at low temperature.
Dynamical relations for left ventricular ejection - Flow rate, momentum, force and impulse
NASA Technical Reports Server (NTRS)
Back, L. H.; Selzer, R. H.; Gordon, D. G.; Ledbetter, D. C.; Crawford, D. W.
1984-01-01
An investigation was carried out to quantitatively evaluate left ventricular volume flow rate, momentum, force and impulse derived from application of conservation principles for mass and momentum of blood within the ventricle during the ejection phase. An automated digital image processing system was developed and applied to left ventricular angiograms which are computer processed and analyzed frame by frame to determine the dynamical relations by numerical methods. The initial experience with force and impulse has indicated that neither quantity seemed to be a sensitive indicator of coronary artery disease as evaluated by qualitative angiography for the particular patient group studied. Utilization of the dynamical relations in evaluating human left ventricular performance requires improved means of measurement and interpretation of clinical studies.
NASA Astrophysics Data System (ADS)
Haastrup, Sten; Latini, Simone; Bolotin, Kirill; Thygesen, Kristian S.
2016-07-01
Efficient conversion of photons into electrical current in two-dimensional semiconductors requires, as a first step, the dissociation of the strongly bound excitons into free electrons and holes. Here we calculate the dissociation rates and energy shift of excitons in monolayer MoS2 as a function of an applied in-plane electric field. The dissociation rates are obtained as the inverse lifetime of the resonant states of a two-dimensional hydrogenic Hamiltonian which describes the exciton within the Mott-Wannier model. The resonances are computed using complex scaling, and the effective masses and screened electron-hole interaction defining the hydrogenic Hamiltonian are computed from first principles. For field strengths above 0.1 V/nm the dissociation lifetime is shorter than 1 ps, which is below the lifetime associated with competing decay mechanisms. Interestingly, encapsulation of the MoS2 layer in just two layers of hexagonal boron nitride (hBN) enhances the dissociation rate by around one order of magnitude due to the increased screening. This shows that dielectric engineering is an effective way to control exciton lifetimes in two-dimensional materials.
Using the Computer in Special Vocational Programs. Inservice Activities.
ERIC Educational Resources Information Center
Lane, Kenneth; Ward, Raymond
This inservice manual is intended to assist vocational education teachers in using the techniques of computer-assisted instruction in special vocational education programs. Addressed in the individual units are the following topics: the basic principles of computer-assisted instruction (TRS-80 computers and typing on a computer keyboard); money…
NASA Astrophysics Data System (ADS)
Escalada, Lawrence Todd
Quantum physics is not traditionally introduced in high school physics courses because of the level of abstraction and mathematical formalism associated with the subject. As part of the Visual Quantum Mechanics project, activity-based instructional units have been developed that introduce quantum principles to students who have limited backgrounds in physics and mathematics. This study investigates the applicability of one unit, Solids & Light, that introduces quantum principles within the context of learning about light emitting diodes. An observation protocol, attitude surveys, and questionnaires were used to examine the implementation of materials and student-teacher interactions in various secondary physics classrooms. Aspects of Solids & Light including the use of hands-on activities, interactive computer programs, inexpensive materials, and the focus on conceptual understanding were very applicable in the various physics classrooms observed. Both teachers and students gave these instructional strategies favorable ratings in motivating students to make observations and to learn. These ratings were not significantly affected by gender or students' attitudes towards physics or computers. Solids & Light was applicable in terms of content and teaching style for some teachers. However, a mismatch of teaching styles between some instructors and the unit posed some problems in determining applicability. Observations indicated that some instructors were not able to utilize the exploratory instructional strategy of Solids & Light. Thus, Solids & Light must include additional support necessary to make the instructor comfortable with the subject matter and pedagogical style. With these revisions, Solids & Light will have all the key components to make its implementation in a high school physics classroom a successful one.
Computational Complexity and Human Decision-Making.
Bossaerts, Peter; Murawski, Carsten
2017-12-01
The rationality principle postulates that decision-makers always choose the best action available to them. It underlies most modern theories of decision-making. The principle does not take into account the difficulty of finding the best option. Here, we propose that computational complexity theory (CCT) provides a framework for defining and quantifying the difficulty of decisions. We review evidence showing that human decision-making is affected by computational complexity. Building on this evidence, we argue that most models of decision-making, and metacognition, are intractable from a computational perspective. To be plausible, future theories of decision-making will need to take into account both the resources required for implementing the computations implied by the theory, and the resource constraints imposed on the decision-maker by biology. Copyright © 2017 Elsevier Ltd. All rights reserved.
A computable expression of closure to efficient causation.
Mossio, Matteo; Longo, Giuseppe; Stewart, John
2009-04-07
In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.
NASA Astrophysics Data System (ADS)
Schrage, J.; Soenmez, Y.; Happel, T.; Gubler, U.; Lukowicz, P.; Mrozynski, G.
2006-02-01
The trend is to apply optical interconnection technology at increasingly shorter distances, moving from long-haul, metro-access and intersystem links toward intrasystem links. Intrasystem interconnects such as data busses between microprocessors and memory blocks are still based on copper interconnects today. This causes a bottleneck in computer systems, since the achievable bandwidth of electrical interconnects is limited by the underlying physical properties. Approaches to solve this problem by embedding optical multimode polymer waveguides into the board (electro-optical circuit board technology, EOCB) have been reported earlier. The feasibility in principle of optical interconnection technology in chip-to-chip applications has been validated in a number of projects. For cost reasons, waveguides with large cross sections are used in order to relax alignment requirements and to allow automatic placement and assembly without any active alignment of components. On the other hand, the bandwidth of these highly multimodal waveguides is restricted by mode dispersion. The advance of WDM technology towards intrasystem applications will provide the high bandwidth required for future high-performance computer systems: assuming, for example, 8 wavelength channels at 12 Gbps (SDR) each, optical on-board interconnects can achieve data rates an order of magnitude higher than those of electrical interconnects over distances typical of today's computer boards and backplanes. The data rate doubles if DDR signaling is applied to the optical signals as well. In this paper we discuss an approach for a hybrid integrated optoelectronic WDM package which might enable the application of WDM technology to EOCB.
Matsuoka, Yu; Shimizu, Kazuyuki
2013-10-20
It is quite important to understand the basic principle embedded in the main metabolism for the interpretation of fermentation data. For this, it may be useful to understand the regulation mechanism based on a systems biology approach. In the present study, we considered perturbation analysis together with computer simulation based on models which include the effects of global regulators on pathway activation in the main metabolism of Escherichia coli. The main focus is on acetate overflow metabolism and the co-fermentation of multiple carbon sources. The perturbation analysis was first made to understand the nature of the feed-forward loop formed by the activation of Pyk by FDP (F1,6BP), and the feed-back loop formed by the inhibition of Pfk by PEP in glycolysis. These loops, together with the effect of the transcription factor Cra caused by the FDP level, affected glycolytic activity. The PTS (phosphotransferase system) acts as a feed-back system by repressing the glucose uptake rate as the glucose uptake rate increases. It was also shown that an increased PTS flux (or glucose consumption rate) causes the PEP/PYR ratio to decrease, and EIIA-P, Cya, and cAMP-Crp to decrease, with cAMP-Crp in turn repressing the TCA cycle so that more acetate is formed. This was further verified by detailed computer simulation. In the case of multiple carbon sources such as glucose and xylose, computer simulation showed sequential utilization of the carbon sources for the wild type, whereas co-consumption of multiple carbon sources at slow consumption rates was observed for the ptsG mutant; this was verified by experiments. Moreover, the effect of a specific gene knockout such as Δpyk on the metabolic characteristics was also investigated by computer simulation. Copyright © 2013 Elsevier B.V. All rights reserved.
Straker, L; Maslen, B; Burgess-Limerick, R; Johnson, P; Dennerlein, J
2010-04-01
Computer use by children is common and there is concern over the potential impact of this exposure on child physical development. Recently principles for child-specific evidence-based guidelines for wise use of computers have been published and these included one concerning the facilitation of appropriate physical development. This paper reviews the evidence and presents detailed guidelines for this principle. The guidelines include encouraging a mix of sedentary and whole body movement tasks, encouraging reasonable postures during computing tasks through workstation, chair, desk, display and input device selection and adjustment and special issues regarding notebook computer use and carriage, computing skills and responding to discomfort. The evidence limitations highlight opportunities for future research. The guidelines themselves can inform parents and teachers, equipment designers and suppliers and form the basis of content for teaching children the wise use of computers. STATEMENT OF RELEVANCE: Many children use computers and computer-use habits formed in childhood may track into adulthood. Therefore child-computer interaction needs to be carefully managed. These guidelines inform those responsible for children to assist in the wise use of computers.
Cogbill, Thomas H; Ziegelbein, Kurt J
2011-02-01
The basic principles underlying computed tomography, magnetic resonance, and ultrasound are reviewed to promote better understanding of the properties and appropriate applications of these 3 common imaging modalities. A glossary of frequently used terms for each technique is appended for convenience. Risks to patient safety including contrast-induced nephropathy, radiation-induced malignancy, and nephrogenic systemic fibrosis are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1992-01-01
This presentation is designed to relate some of the experiences of the Scientific Computing Division at NCAR dealing with the 'data problem'. A brief history and a development of some basic Mass Storage System (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. There is discussion of future MSS needs for future computing environments.
Post-Fisherian Experimentation: From Physical to Virtual
Jeff Wu, C. F.
2014-04-24
Fisher's pioneering work in design of experiments has inspired further work with broader applications, especially in industrial experimentation. Three topics in physical experiments are discussed: the principles of effect hierarchy, sparsity, and heredity for factorial designs; a new method called CME for de-aliasing aliased effects; and robust parameter design. The recent emergence of virtual experiments on a computer is reviewed. Here, some major challenges in computer experiments, which must go beyond Fisherian principles, are outlined.
A survey of parametrized variational principles and applications to computational mechanics
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1993-01-01
This survey paper describes recent developments in the area of parametrized variational principles (PVP's) and selected applications to finite-element computational mechanics. A PVP is a variational principle containing free parameters that have no effect on the Euler-Lagrange equations. The theory of single-field PVP's based on gauge functions (also known as null Lagrangians) is a subset of the inverse problem of variational calculus that has limited value. On the other hand, multifield PVP's are more interesting from theoretical and practical standpoints. Following a tutorial introduction, the paper describes the recent construction of multifield PVP's in several areas of elasticity and electromagnetics. It then discusses three applications to finite-element computational mechanics: the derivation of high-performance finite elements, the development of element-level error indicators, and the construction of finite element templates. The paper concludes with an overview of open research areas.
Semiotics, Information Science, Documents and Computers.
ERIC Educational Resources Information Center
Warner, Julian
1990-01-01
Discusses the relationship and value of semiotics to the established domains of information science. Highlights include documentation; computer operations; the language of computing; automata theory; linguistics; speech and writing; and the written language as a unifying principle for the document and the computer. (93 references) (LRW)
1990-01-01
… least-squares sense by adding a penalty term proportional to the square of the divergence to the variational principle. At the start of this project … principle required for stable solutions of the electromagnetic field: it must be possible to express the basis functions used in the finite element method as … principle to derive several different methods for computing stable solutions to electromagnetic field problems. To understand the above principle, notice that …
On Babinet's principle and diffraction associated with an arbitrary particle.
Sun, Bingqiang; Yang, Ping; Kattawar, George W; Mishchenko, Michael I
2017-12-01
Babinet's principle is widely used to compute the diffraction by a particle. However, the diffraction by a 3-D object is not totally the same as that simulated with Babinet's principle. This Letter uses a surface integral equation to exactly formulate the diffraction by an arbitrary particle and illustrate the condition for the applicability of Babinet's principle. The present results may serve to close the debate on the diffraction formalism.
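The content of Babinet's principle in the far-field limit can be illustrated numerically: an aperture and its complementary screen produce identical diffraction intensities everywhere except in the forward direction. The scalar, FFT-based Fraunhofer treatment below is an assumption of this sketch and is much cruder than the surface-integral formulation used in the Letter; the screen geometry is also an illustrative choice.

```python
# Numerical illustration of Babinet's principle in the Fraunhofer (far-field) limit.
import numpy as np

n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x**2 + y**2 < 20**2).astype(float)   # circular opening in an opaque screen
complement = 1.0 - aperture                      # opaque disc in a transparent screen

def far_field(screen):
    """Fraunhofer diffraction pattern (intensity) of a transmission screen."""
    field = np.fft.fftshift(np.fft.fft2(screen))
    return np.abs(field) ** 2

I_a = far_field(aperture)
I_c = far_field(complement)

# Compare intensities off the optical axis (exclude the central, forward-beam pixel)
mask = np.ones_like(I_a, dtype=bool)
mask[n // 2, n // 2] = False
print("max off-axis relative difference:",
      np.max(np.abs(I_a[mask] - I_c[mask]) / I_a[mask].max()))
```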
Aeroelastic analysis of bridge girder section using computer modeling
DOT National Transportation Integrated Search
2001-05-01
This report describes the numerical simulation of wind flow around bridges using the Finite Element Method (FEM) and the principles of Computational Fluid Dynamics (CFD) and Computational Structural Dynamics (CSD). Since the suspension bridges are p...
Simulating Drosophila Genetics with the Computer.
ERIC Educational Resources Information Center
Small, James W., Jr.; Edwards, Kathryn L.
1979-01-01
Presents some techniques developed to help improve student understanding of Mendelian principles through the use of a computer simulation model of the genetic system of the fruit fly. Includes discussion and evaluation of this computer assisted program. (MA)
The base rate principle and the fairness principle in social judgment
Cao, Jack; Banaji, Mahzarin R.
2016-01-01
Meet Jonathan and Elizabeth. One person is a doctor and the other is a nurse. Who is the doctor? When nothing else is known, the base rate principle favors Jonathan to be the doctor and the fairness principle favors both individuals equally. However, when individuating facts reveal who is actually the doctor, base rates and fairness become irrelevant, as the facts make the correct answer clear. In three experiments, explicit and implicit beliefs were measured before and after individuating facts were learned. These facts were either stereotypic (e.g., Jonathan is the doctor, Elizabeth is the nurse) or counterstereotypic (e.g., Elizabeth is the doctor, Jonathan is the nurse). Results showed that before individuating facts were learned, explicit beliefs followed the fairness principle, whereas implicit beliefs followed the base rate principle. After individuating facts were learned, explicit beliefs correctly aligned with stereotypic and counterstereotypic facts. Implicit beliefs, however, were immune to counterstereotypic facts and continued to follow the base rate principle. Having established the robustness and generality of these results, a fourth experiment verified that gender stereotypes played a causal role: when both individuals were male, explicit and implicit beliefs alike correctly converged with individuating facts. Taken together, these experiments demonstrate that explicit beliefs uphold fairness and incorporate obvious and relevant facts, but implicit beliefs uphold base rates and appear relatively impervious to counterstereotypic facts. PMID:27325760
The base rate principle and the fairness principle in social judgment.
Cao, Jack; Banaji, Mahzarin R
2016-07-05
Meet Jonathan and Elizabeth. One person is a doctor and the other is a nurse. Who is the doctor? When nothing else is known, the base rate principle favors Jonathan to be the doctor and the fairness principle favors both individuals equally. However, when individuating facts reveal who is actually the doctor, base rates and fairness become irrelevant, as the facts make the correct answer clear. In three experiments, explicit and implicit beliefs were measured before and after individuating facts were learned. These facts were either stereotypic (e.g., Jonathan is the doctor, Elizabeth is the nurse) or counterstereotypic (e.g., Elizabeth is the doctor, Jonathan is the nurse). Results showed that before individuating facts were learned, explicit beliefs followed the fairness principle, whereas implicit beliefs followed the base rate principle. After individuating facts were learned, explicit beliefs correctly aligned with stereotypic and counterstereotypic facts. Implicit beliefs, however, were immune to counterstereotypic facts and continued to follow the base rate principle. Having established the robustness and generality of these results, a fourth experiment verified that gender stereotypes played a causal role: when both individuals were male, explicit and implicit beliefs alike correctly converged with individuating facts. Taken together, these experiments demonstrate that explicit beliefs uphold fairness and incorporate obvious and relevant facts, but implicit beliefs uphold base rates and appear relatively impervious to counterstereotypic facts.
SEED: A Suite of Instructional Laboratories for Computer Security Education
ERIC Educational Resources Information Center
Du, Wenliang; Wang, Ronghua
2008-01-01
The security and assurance of our computing infrastructure has become a national priority. To address this priority, higher education has gradually incorporated the principles of computer and information security into the mainstream undergraduate and graduate computer science curricula. To achieve effective education, learning security principles…
Methods and principles for determining task dependent interface content
NASA Technical Reports Server (NTRS)
Shalin, Valerie L.; Geddes, Norman D.; Mikesell, Brian G.
1992-01-01
Computer generated information displays provide a promising technology for offsetting the increasing complexity of the National Airspace System. To realize this promise, however, we must extend and adapt the domain-dependent knowledge that informally guides the design of traditional dedicated displays. In our view, the successful exploitation of computer generated displays revolves around the idea of information management, that is, the identification, organization, and presentation of relevant and timely information in a complex task environment. The program of research that is described leads to methods and principles for information management in the domain of commercial aviation. The multi-year objective of the proposed program of research is to develop methods and principles for determining task dependent interface content.
Physical properties of biological entities: an introduction to the ontology of physics for biology.
Cook, Daniel L; Bookstein, Fred L; Gennari, John H
2011-01-01
As biomedical investigators strive to integrate data and analyses across spatiotemporal scales and biomedical domains, they have recognized the benefits of formalizing languages and terminologies via computational ontologies. Although ontologies for biological entities (molecules, cells, organs) are well-established, there are no principled ontologies of physical properties (energies, volumes, flow rates) of those entities. In this paper, we introduce the Ontology of Physics for Biology (OPB), a reference ontology of classical physics designed for annotating biophysical content of growing repositories of biomedical datasets and analytical models. The OPB's semantic framework, traceable to James Clerk Maxwell, encompasses modern theories of system dynamics and thermodynamics, and is implemented as a computational ontology that references available upper ontologies. In this paper we focus on the OPB classes that are designed for annotating physical properties encoded in biomedical datasets and computational models, and we discuss how the OPB framework will facilitate biomedical knowledge integration. © 2011 Cook et al.
Simulating Human Cognition in the Domain of Air Traffic Control
NASA Technical Reports Server (NTRS)
Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.
ERIC Educational Resources Information Center
Clark, Joy L.; Hegji, Charles E.
1997-01-01
Notes that using spreadsheets to teach microeconomics principles enables learning by doing in the exploration of basic concepts. Introduction of increasingly complex topics leads to exploration of theory and managerial decision making. (SK)
Endocrine Disruptor Screening Program (EDSP) Universe of Chemicals and General Validation Principles
This document was developed by the EPA to provide guidance to staff and managers regarding the EDSP universe of chemicals and general validation principles for consideration of computational toxicology tools for chemical prioritization.
Dynamics of non-holonomic systems with stochastic transport
NASA Astrophysics Data System (ADS)
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
A feedback model of figure-ground assignment.
Domijan, Drazen; Setić, Mia
2008-05-30
A computational model is proposed in order to explain how bottom-up and top-down signals are combined into a unified perception of figure and background. The model is based on the interaction between the ventral and the dorsal stream. The dorsal stream computes saliency based on boundary signals provided by the simple and the complex cortical cells. Output from the dorsal stream is projected to the surface network which serves as a blackboard on which the surface representation is formed. The surface network is a recurrent network which segregates different surfaces by assigning different firing rates to them. The figure is labeled by the maximal firing rate. Computer simulations showed that the model correctly assigns figural status to the surface with a smaller size, a greater contrast, convexity, surroundedness, horizontal-vertical orientation and a higher spatial frequency content. The simple gradient of activity in the dorsal stream enables the simulation of the new principles of the lower region and the top-bottom polarity. The model also explains how the exogenous attention and the endogenous attention may reverse the figural assignment. Due to the local excitation in the surface network, neural activity at the cued region will spread over the whole surface representation. Therefore, the model implements the object-based attentional selection.
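A toy sketch of the surface-network competition described above: two surface nodes receive saliency input (standing in for the dorsal-stream output) and interact through self-excitation and mutual inhibition, so the more salient surface settles at the higher firing rate and is labeled figure. The rate equation and all parameter values are assumptions of this illustration, not the published model.

```python
# Toy winner-take-all surface network: higher saliency -> higher rate -> figure.
import numpy as np

def simulate(saliency, w_self=0.4, w_inh=1.2, tau=10.0, dt=0.1, steps=2000):
    r = np.zeros_like(saliency, dtype=float)               # firing rates of surface nodes
    for _ in range(steps):
        net = saliency + w_self * r - w_inh * (r.sum() - r)  # self-excitation, mutual inhibition
        drive = np.maximum(net, 0.0)                        # rectification
        r += dt / tau * (-r + drive)                        # leaky rate dynamics
    return r

# surface 0: small, high-contrast region; surface 1: background (assumed saliencies)
saliency = np.array([1.0, 0.6])
rates = simulate(saliency)
print("firing rates:", rates, "-> figure is surface", int(np.argmax(rates)))
```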
No-signaling quantum key distribution: solution by linear programming
NASA Astrophysics Data System (ADS)
Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan
2015-02-01
We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and the given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.
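The linear program described above can be sketched, under stated assumptions, as follows: joint probabilities p(a,b,e|x,y) with binary settings and outcomes are constrained by no-signaling conditions and by a given Alice-Bob distribution (here an isotropic box with a hypothetical CHSH value), and Eve's probability of guessing Bob's bit at one pair of settings is maximized. The choice of settings, the observed distribution and the CHSH value are illustrative assumptions, not the paper's exact scenario.

```python
# Bound on Eve's guessing probability via linear programming over no-signaling boxes.
import itertools
import numpy as np
from scipy.optimize import linprog

def idx(a, b, e, x, y):
    # flatten the 5 binary indices of p(a, b, e | x, y) into 0..31
    return (((a * 2 + b) * 2 + e) * 2 + x) * 2 + y

S = 2.5                        # hypothetical observed CHSH value (assumption)
E = S / 4.0                    # correlation of the corresponding isotropic box
def p_obs(a, b, x, y):         # observed Alice-Bob marginal distribution
    return (1.0 + ((-1) ** ((a + b + x * y) % 2)) * E) / 4.0

A_eq, b_eq = [], []

# Consistency with the observed Alice-Bob statistics: sum_e p(a,b,e|x,y) = p_obs
for a, b, x, y in itertools.product(range(2), repeat=4):
    row = np.zeros(32)
    for e in range(2):
        row[idx(a, b, e, x, y)] = 1.0
    A_eq.append(row); b_eq.append(p_obs(a, b, x, y))

# No-signaling: the (a, e) marginal must not depend on Bob's setting y
for a, e, x in itertools.product(range(2), repeat=3):
    row = np.zeros(32)
    for b in range(2):
        row[idx(a, b, e, x, 0)] += 1.0
        row[idx(a, b, e, x, 1)] -= 1.0
    A_eq.append(row); b_eq.append(0.0)

# No-signaling: the (b, e) marginal must not depend on Alice's setting x
for b, e, y in itertools.product(range(2), repeat=3):
    row = np.zeros(32)
    for a in range(2):
        row[idx(a, b, e, 0, y)] += 1.0
        row[idx(a, b, e, 1, y)] -= 1.0
    A_eq.append(row); b_eq.append(0.0)

# Objective: maximize Eve's probability of guessing Bob's bit at settings x=0, y=0
c = np.zeros(32)
for a, b in itertools.product(range(2), repeat=2):
    c[idx(a, b, b, 0, 0)] = -1.0   # linprog minimizes, so negate

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=[(0, 1)] * 32)
print("Eve's maximal guessing probability:", -res.fun)
```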
NASA Astrophysics Data System (ADS)
Natale, Joseph; Hentschel, George
Firing-rate networks offer a coarse model of signal propagation in the brain. Here we analyze sparse, 2D planar firing-rate networks with no synapses beyond a certain cutoff distance. Additionally, we impose Dale's Principle to ensure that each neuron makes only excitatory or only inhibitory outgoing connections. Using spectral methods, we find that the number of neurons participating in excitations of the network becomes insignificant whenever the connectivity cutoff is tuned to a value near or below the average interneuron separation. Further, neural activations exceeding a certain threshold stay confined to a small region of space. This behavior is an instance of Anderson localization, a disorder-induced phase transition by which an information channel is rendered unable to transmit signals. We discuss several potential implications of localization for both local and long-range computation in the brain. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.
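One way to quantify this kind of localization, sketched below under stated assumptions: scatter neurons in a plane, connect pairs within the cutoff distance, give each neuron a fixed excitatory or inhibitory output sign (Dale's principle), and compute the participation ratio of the connectivity matrix's eigenvectors; small values indicate modes confined to a few neurons. The neuron count, cutoff, weights and 80/20 excitatory/inhibitory split are illustrative, not taken from the study.

```python
# Participation ratio of eigenvectors of a distance-cutoff, Dale-compliant network.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff = 400, 0.06                      # neurons, connection cutoff (unit box)
pos = rng.random((n, 2))
sign = np.where(rng.random(n) < 0.8, 1.0, -1.0)   # excitatory (+) or inhibitory (-) neurons

d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
W = (d < cutoff).astype(float) * sign[None, :]    # column j carries neuron j's output sign (Dale)
np.fill_diagonal(W, 0.0)

eigvals, eigvecs = np.linalg.eig(W)
v = np.abs(eigvecs) ** 2
v /= v.sum(axis=0)
participation = 1.0 / np.sum(v ** 2, axis=0)      # ~n for extended modes, ~1 for localized ones
print("median participation ratio: %.1f neurons out of %d" % (np.median(participation), n))
```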
Toward a reaction rate model of condensed-phase RDX decomposition under high temperatures
NASA Astrophysics Data System (ADS)
Schweigert, Igor
2014-03-01
Shock ignition of energetic molecular solids is driven by microstructural heterogeneities, at which even moderate stresses can result in sufficiently high temperatures to initiate material decomposition and the release of the chemical energy. Mesoscale modeling of these ``hot spots'' requires a chemical reaction rate model that describes the energy release with a sub-microsecond resolution and under a wide range of temperatures. No such model is available even for well-studied energetic materials such as RDX. In this presentation, I will describe an ongoing effort to develop a reaction rate model of condensed-phase RDX decomposition under high temperatures using first-principles molecular dynamics, transition-state theory, and reaction network analysis. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DOD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions.
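A minimal sketch of the transition-state-theory ingredient mentioned above is the Eyring rate constant k(T) = (k_B T / h) exp(-ΔG‡ / RT), evaluated over the high-temperature range of interest. The activation free energy used below is a placeholder for illustration, not an RDX value from this work.

```python
# Eyring (canonical TST) rate constant for a unimolecular decomposition step.
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34     # Planck constant, J*s
R = 8.314462618        # gas constant, J/(mol*K)

def eyring_rate(T, dG_act):
    """TST rate constant (1/s) for an activation free energy dG_act in J/mol."""
    return (kB * T / h) * np.exp(-dG_act / (R * T))

dG = 160e3             # hypothetical activation free energy, J/mol (placeholder)
for T in (800.0, 1200.0, 1600.0, 2000.0):
    print(f"T = {T:6.0f} K  ->  k = {eyring_rate(T, dG):.3e} 1/s")
```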
Toward a reaction rate model of condensed-phase RDX decomposition under high temperatures
NASA Astrophysics Data System (ADS)
Schweigert, Igor
2015-06-01
Shock ignition of energetic molecular solids is driven by microstructural heterogeneities, at which even moderate stresses can result in sufficiently high temperatures to initiate material decomposition and chemical energy release. Mesoscale modeling of these ``hot spots'' requires a reaction rate model that describes the energy release with a sub-microsecond resolution and under a wide range of temperatures. No such model is available even for well-studied energetic materials such as RDX. In this presentation, I will describe an ongoing effort to develop a reaction rate model of condensed-phase RDX decomposition under high temperatures using first-principles molecular dynamics, transition-state theory, and reaction network analysis. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DoD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions.
Modeling macro- and microstructures of Gas-Metal-Arc Welded HSLA-100 steel
NASA Astrophysics Data System (ADS)
Yang, Z.; Debroy, T.
1999-06-01
Fluid flow and heat transfer during gas-metal-arc welding (GMAW) of HSLA-100 steel were studied using a transient, three-dimensional, turbulent heat transfer and fluid flow model. The temperature and velocity fields, cooling rates, and shape and size of the fusion and heat-affected zones (HAZs) were calculated. A continuous-cooling-transformation (CCT) diagram was computed to aid in the understanding of the observed weld metal microstructure. The computed results demonstrate that the dissipation of heat and momentum in the weld pool is significantly aided by turbulence, thus suggesting that previous modeling results based on laminar flow need to be re-examined. A comparison of the calculated fusion and HAZ geometries with their corresponding measured values showed good agreement. Furthermore, “finger” penetration, a unique geometric characteristic of gas-metal-arc weld pools, could be satisfactorily predicted from the model. The ability to predict these geometric variables and the agreement between the calculated and the measured cooling rates indicate the appropriateness of using a turbulence model for accurate calculations. The microstructure of the weld metal consisted mainly of acicular ferrite with small amounts of bainite. At high heat inputs, small amounts of allotriomorphic and Widmanstätten ferrite were also observed. The observed microstructures are consistent with those expected from the computed CCT diagram and the cooling rates. The results presented here demonstrate significant promise for understanding both macro- and microstructures of steel welds from the combination of the fundamental principles from both transport phenomena and phase transformation theory.
A compact model for electroosmotic flows in microfluidic devices
NASA Astrophysics Data System (ADS)
Qiao, R.; Aluru, N. R.
2002-09-01
A compact model to compute flow rate and pressure in microfluidic devices is presented. The microfluidic flow can be driven by either an applied electric field or a combined electric field and pressure gradient. A step change in the ζ-potential on a channel wall is treated by a pressure source in the compact model. The pressure source is obtained from the pressure Poisson equation and conservation of mass principle. In the proposed compact model, the complex fluidic network is simplified by an electrical circuit. The compact model can predict the flow rate, pressure distribution and other basic characteristics in microfluidic channels quickly with good accuracy when compared to detailed numerical simulation. Using the compact model, fluidic mixing and dispersion control are studied in a complex microfluidic network.
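A toy of the circuit analogy described above (an editor's illustration with assumed property values, not the paper's model): two circular channels in series with a step change in ζ-potential at the junction; enforcing conservation of mass yields the induced "pressure source" and the net flow rate.

```python
# Editor's toy of the circuit analogy (illustrative property values, not the paper's
# model): two circular channels in series with a step change in zeta potential.
# The mismatch in electroosmotic flow at the junction induces an internal pressure
# (the "pressure source"), found by enforcing conservation of mass, Q1 = Q2.
import math

eps = 7.08e-10        # permittivity of water, F/m
mu  = 1.0e-3          # viscosity, Pa s
E   = 1.0e4           # applied axial electric field, V/m
r   = 5.0e-6          # channel radius, m
L1, L2 = 1.0e-3, 1.0e-3      # channel lengths, m
z1, z2 = -0.10, -0.05        # zeta potentials, V (step change at the junction)

A_cs = math.pi * r ** 2
R1 = 8 * mu * L1 / (math.pi * r ** 4)        # hydraulic (Poiseuille) resistances
R2 = 8 * mu * L2 / (math.pi * r ** 4)
Qeo1 = -eps * z1 * E / mu * A_cs             # Helmholtz-Smoluchowski EO flow rates
Qeo2 = -eps * z2 * E / mu * A_cs

# Reservoirs at P = 0; junction pressure Pj from Qeo1 - Pj/R1 = Qeo2 + Pj/R2.
Pj = (Qeo1 - Qeo2) / (1.0 / R1 + 1.0 / R2)
Q = Qeo1 - Pj / R1
print(f"induced junction pressure: {Pj:.3e} Pa, net flow rate: {Q:.3e} m^3/s")
```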
38 CFR 4.55 - Principles of combined ratings for muscle injuries.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ratings for muscle injuries. 4.55 Section 4.55 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Principles of combined ratings for muscle injuries. (a) A muscle injury rating will not be combined with a peripheral nerve paralysis rating of the same body part, unless the injuries affect entirely different...
38 CFR 4.55 - Principles of combined ratings for muscle injuries.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ratings for muscle injuries. 4.55 Section 4.55 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Principles of combined ratings for muscle injuries. (a) A muscle injury rating will not be combined with a peripheral nerve paralysis rating of the same body part, unless the injuries affect entirely different...
38 CFR 4.55 - Principles of combined ratings for muscle injuries.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ratings for muscle injuries. 4.55 Section 4.55 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Principles of combined ratings for muscle injuries. (a) A muscle injury rating will not be combined with a peripheral nerve paralysis rating of the same body part, unless the injuries affect entirely different...
38 CFR 4.55 - Principles of combined ratings for muscle injuries.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ratings for muscle injuries. 4.55 Section 4.55 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Principles of combined ratings for muscle injuries. (a) A muscle injury rating will not be combined with a peripheral nerve paralysis rating of the same body part, unless the injuries affect entirely different...
38 CFR 4.55 - Principles of combined ratings for muscle injuries.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ratings for muscle injuries. 4.55 Section 4.55 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Principles of combined ratings for muscle injuries. (a) A muscle injury rating will not be combined with a peripheral nerve paralysis rating of the same body part, unless the injuries affect entirely different...
Case Study: Audio-Guided Learning, with Computer Graphics.
ERIC Educational Resources Information Center
Koumi, Jack; Daniels, Judith
1994-01-01
Describes teaching packages which involve the use of audiotape recordings with personal computers in Open University (United Kingdom) mathematics courses. Topics addressed include software development; computer graphics; pedagogic principles for distance education; feedback, including course evaluations and student surveys; and future plans.…
The Evolution of Instructional Design Principles for Intelligent Computer-Assisted Instruction.
ERIC Educational Resources Information Center
Dede, Christopher; Swigger, Kathleen
1988-01-01
Discusses and compares the design and development of computer assisted instruction (CAI) and intelligent computer assisted instruction (ICAI). Topics discussed include instructional systems design (ISD), artificial intelligence, authoring languages, intelligent tutoring systems (ITS), qualitative models, and emerging issues in instructional…
Leopold, Luna Bergere; Thomas, William L.
1956-01-01
When the vegetal cover is removed from a land surface, the rate of removal of the soil material, at least initially, increases rapidly. So well known is this principle that it hardly needs restatement. If attention is focused on any individual drainage basin in its natural state, large or small, and inquiry is made as to the rate of denudation, a quantitative answer is not easily obtained. The possible error in any computation of rate of sediment production from any given drainage basin is considerable. Significant variations are found in sediment yields from closely adjacent watersheds which appear to be generally similar. To make a quantitative evaluation of the change in the rate of denudation when the natural vegetation is disturbed is, therefore, even more difficult. Considering the fact that "soil conservation" has been promoted to the status of a science, our lack of ability to answer what is apparently so simple a question may seem surprising. Let us look at some of the reasons.
Entanglement-Based Machine Learning on a Quantum Computer
NASA Astrophysics Data System (ADS)
Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.
2015-03-01
Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv:1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.
Design Principles for Computer-Assisted Instruction in Histology Education: An Exploratory Study
ERIC Educational Resources Information Center
Deniz, Hasan; Cakir, Hasan
2006-01-01
The purpose of this paper is to describe the development process and the key components of a computer-assisted histology material. Computer-assisted histology material is designed to supplement traditional histology education in a large Midwestern university. Usability information of the computer-assisted instruction (CAI) material was obtained…
Aids to Computer-Based Multimedia Learning.
ERIC Educational Resources Information Center
Mayer, Richard E.; Moreno, Roxana
2002-01-01
Presents a cognitive theory of multimedia learning that draws on dual coding theory, cognitive load theory, and constructivist learning theory and derives some principles of instructional design for fostering multimedia learning. These include principles of multiple representation, contiguity, coherence, modality, and redundancy. (SLD)
Teaching "Filing Rules"--Via Computer-Aided Instruction.
ERIC Educational Resources Information Center
Agneberg, Craig
A computer software package has been developed to teach and test students on the Rules for Alphabetical Filing of the Association of Records Managers and Administrators (ARMA). The following computer assisted instruction principles were used in developing the program: gaining attention, stating objectives, providing direction, reviewing…
Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.
ERIC Educational Resources Information Center
Scheeline, Alexander; Mork, Brian J.
1988-01-01
Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)
Accomplishment Summary 1968-1969. Biological Computer Laboratory.
ERIC Educational Resources Information Center
Von Foerster, Heinz; And Others
This report summarizes theoretical, applied, and experimental studies in the areas of computational principles in complex intelligent systems, cybernetics, multivalued logic, and the mechanization of cognitive processes. This work is summarized under the following topic headings: properties of complex dynamic systems; computers and the language…
Computational complexity of the landscape II-Cosmological considerations
NASA Astrophysics Data System (ADS)
Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire
2018-05-01
We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.
Optimized Materials From First Principles Simulations: Are We There Yet?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galli, G; Gygi, F
2005-07-26
In the past thirty years, the use of scientific computing has become pervasive in all disciplines: collection and interpretation of most experimental data is carried out using computers, and physical models in computable form, with various degrees of complexity and sophistication, are utilized in all fields of science. However, full prediction of physical and chemical phenomena based on the basic laws of Nature, using computer simulations, is a revolution still in the making, and it involves some formidable theoretical and computational challenges. We illustrate the progress and successes obtained in recent years in predicting fundamental properties of materials in condensed phases and at the nanoscale, using ab-initio, quantum simulations. We also discuss open issues related to the validation of the approximate, first principles theories used in large scale simulations, and the resulting complex interplay between computation and experiment. Finally, we describe some applications, with focus on nanostructures and liquids, both at ambient and under extreme conditions.
Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle
Yang, Zhi-Cheng; Rahmani, Armin; Shabani, Alireza; ...
2017-05-18
We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.
Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhi-Cheng; Rahmani, Armin; Shabani, Alireza
We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.
Exner, Kai S; Anton, Josef; Jacob, Timo; Over, Herbert
2016-06-20
Current progress in modern electrocatalysis research is spurred by theory, frequently based on ab initio thermodynamics, where the stable reaction intermediates at the electrode surface are identified, while the actual energy barriers are ignored. This approach is popular because it provides a simple tool for screening promising electrode materials. However, thermodynamics alone may be misleading when assessing the catalytic activity of an electrochemical reaction, as we exemplify with the chlorine evolution reaction (CER) over a RuO2 (110) model electrode. The full procedure is introduced, starting from the stable reaction intermediates, computing the energy barriers, and finally performing microkinetic simulations, all performed under the influence of the solvent and the electrode potential. Full kinetics from first principles allows the rate-determining step in the CER to be identified and the experimentally observed change in the Tafel slope to be explained. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
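As a hedged illustration of why full kinetics can change the picture, the sketch below runs a generic two-step microkinetic model (assumed rate constants and a generic adsorbate, not the paper's CER mechanism or parameters) in which the apparent Tafel slope changes as the adsorbate coverage saturates with increasing overpotential.

```python
# Editor's illustration with a generic two-step scheme and assumed rate constants
# (not the CER mechanism or the parameters of the paper): a reversible
# electrochemical adsorption step followed by a rate-limiting second electron
# transfer, each with Butler-Volmer potential dependence.
import numpy as np

F, R, T = 96485.0, 8.314, 298.15
f = F / (R * T)

def turnover(eta, a=0.5):
    k1f = 1.0e-4 * np.exp(a * f * eta)            # * + A -> A_ad + e-  (forward)
    k1b = 1.0e+0 * np.exp(-(1 - a) * f * eta)     # reverse of step 1
    k2  = 1.0e-6 * np.exp(a * f * eta)            # A_ad -> product + e- + *  (rds)
    theta = k1f / (k1f + k1b + k2)                # steady-state coverage of A_ad
    return k2 * theta                             # rate of the overall reaction

eta = np.linspace(0.05, 0.45, 400)
rate = turnover(eta)
tafel = np.gradient(eta, np.log10(rate)) * 1e3    # Tafel slope in mV per decade
print(f"Tafel slope at eta = 0.05 V: {tafel[0]:.0f} mV/dec  (low coverage)")
print(f"Tafel slope at eta = 0.45 V: {tafel[-1]:.0f} mV/dec (saturated coverage)")
```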
Computer simulation as a teaching aid in pharmacy management--Part 1: Principles of accounting.
Morrison, D J
1987-06-01
The need for pharmacists to develop management expertise through participation in formal courses is now widely acknowledged. Many schools of pharmacy lay the foundations for future management training by providing introductory courses as an integral or elective part of the undergraduate syllabus. The benefit of such courses may, however, be limited by the lack of opportunity for the student to apply the concepts and procedures in a practical working environment. Computer simulations provide a means to overcome this problem, particularly in the field of resource management. In this, the first of two articles, the use of a computer model to demonstrate basic accounting principles is described.
Gussy, M G; Knevel, R J M; Sigurdson, V; Karlberg, G
2006-08-01
Globalization and concurrent development in computer and communication technology has increased interest in collaborative online teaching and learning for students in higher education institutions. Many institutions and teachers have introduced computer-supported programmes in areas including dental hygiene. The potential for the use of this technology is exciting; however, its introduction should be careful and considered. We suggest that educators wanting to introduce computer-supported programmes make explicit their pedagogical principles and then select technologies that support and exploit these principles. This paper describes this process as it was applied to the development of an international web-based collaborative learning programme for dental hygiene students.
NASA Astrophysics Data System (ADS)
Eisenbach, Markus
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Water System Adaptation To Hydrological Changes: Module 7, Adaptation Principles and Considerations
This course will introduce students to the fundamental principles of water system adaptation to hydrological changes, with emphasis on data analysis and interpretation, technical planning, and computational modeling. Starting with real-world scenarios and adaptation needs, the co...
ERIC Educational Resources Information Center
Sniecinski, Jozef
This paper reviews efforts which have been made to improve the effectiveness of teaching through the development of principles of programed teaching and the construction of teaching machines, concluding that a combination of computer technology and programed teaching principles offers an efficient approach to improving teaching. Three different…
Cognitive architectures and language acquisition: a case study in pronoun comprehension.
VAN Rij, Jacolien; VAN Rijn, Hedderik; Hendriks, Petra
2010-06-01
In this paper we discuss a computational cognitive model of children's poor performance on pronoun interpretation (the so-called Delay of Principle B Effect, or DPBE). This cognitive model is based on a theoretical account that attributes the DPBE to children's inability as hearers to also take into account the speaker's perspective. The cognitive model predicts that child hearers are unable to do so because their speed of linguistic processing is too limited to perform this second step in interpretation. We tested this hypothesis empirically in a psycholinguistic study, in which we slowed down the speech rate to give children more time for interpretation, and in a computational simulation study. The results of the two studies confirm the predictions of our model. Moreover, these studies show that embedding a theory of linguistic competence in a cognitive architecture allows for the generation of detailed and testable predictions with respect to linguistic performance.
In situ single-atom array synthesis using dynamic holographic optical tweezers
Kim, Hyosub; Lee, Woojun; Lee, Han-gyeol; Jo, Hanlae; Song, Yunheung; Ahn, Jaewook
2016-01-01
Establishing a reliable method to form scalable neutral-atom platforms is an essential cornerstone for quantum computation, quantum simulation and quantum many-body physics. Here we demonstrate a real-time transport of single atoms using holographic microtraps controlled by a liquid-crystal spatial light modulator. For this, an analytical design approach to flicker-free microtrap movement is devised and cold rubidium atoms are simultaneously rearranged with 2N motional degrees of freedom, representing unprecedented space controllability. We also accomplish an in situ feedback control for single-atom rearrangements with the high success rate of 99% for up to 10 μm translation. We hope this proof-of-principle demonstration of high-fidelity atom-array preparations will be useful for deterministic loading of N single atoms, especially on arbitrary lattice locations, and also for real-time qubit shuttling in high-dimensional quantum computing architectures. PMID:27796372
Wang, Zhen; Antoniou, Dimitri; Schwartz, Steven D.; Schramm, Vern L.
2016-01-01
Escherichia coli dihydrofolate reductase (ecDHFR) is used to study fundamental principles of enzyme catalysis. It remains controversial whether fast protein motions are coupled to the hydride transfer catalyzed by ecDHFR. Previous studies with heavy ecDHFR proteins labeled with 13C, 15N, and nonexchangeable 2H reported enzyme mass-dependent hydride transfer kinetics for ecDHFR. Here, we report refined experimental and computational studies to establish that hydride transfer is independent of protein mass. Instead, we found the rate constant for substrate dissociation to be faster for heavy DHFR. Previously reported kinetic differences between light and heavy DHFRs likely arise from kinetic steps other than the chemical step. This study confirms that fast (femtosecond to picosecond) protein motions in ecDHFR are not coupled to hydride transfer and provides an integrative computational and experimental approach to resolve fast dynamics coupled to chemical steps in enzyme catalysis. PMID:26652185
12 CFR 615.5206 - Permanent capital ratio computation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Permanent capital ratio computation. 615.5206... capital ratio computation. (a) The institution's permanent capital ratio is determined on the basis of the financial statements of the institution prepared in accordance with generally accepted accounting principles...
Web-Based Learning in the Computer-Aided Design Curriculum.
ERIC Educational Resources Information Center
Sung, Wen-Tsai; Ou, S. C.
2002-01-01
Applies principles of constructivism and virtual reality (VR) to computer-aided design (CAD) curriculum, particularly engineering, by integrating network, VR and CAD technologies into a Web-based learning environment that expands traditional two-dimensional computer graphics into a three-dimensional real-time simulation that enhances user…
Implementing Computer Algebra Enabled Questions for the Assessment and Learning of Mathematics
ERIC Educational Resources Information Center
Sangwin, Christopher J.; Naismith, Laura
2008-01-01
We present principles for the design of an online system to support computer algebra enabled questions for use within the teaching and learning of mathematics in higher education. The introduction of a computer algebra system (CAS) into a computer aided assessment (CAA) system affords sophisticated response processing of student provided answers.…
NASA Astrophysics Data System (ADS)
Anderson, John R.; Boyle, C. Franklin; Reiser, Brian J.
1985-04-01
Cognitive psychology, artificial intelligence, and computer technology have advanced to the point where it is feasible to build computer systems that are as effective as intelligent human tutors. Computer tutors based on a set of pedagogical principles derived from the ACT theory of cognition have been developed for teaching students to do proofs in geometry and to write computer programs in the language LISP.
Anderson, J R; Boyle, C F; Reiser, B J
1985-04-26
Cognitive psychology, artificial intelligence, and computer technology have advanced to the point where it is feasible to build computer systems that are as effective as intelligent human tutors. Computer tutors based on a set of pedagogical principles derived from the ACT theory of cognition have been developed for teaching students to do proofs in geometry and to write computer programs in the language LISP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ariano, Giacomo Mauro
2010-05-04
I will argue that the proposal of establishing operational foundations of Quantum Theory should have top priority, and that Lucien Hardy's program on Quantum Gravity should be paralleled by an analogous program on Quantum Field Theory (QFT), which needs to be reformulated, notwithstanding its experimental success. In this paper, after reviewing recently suggested operational 'principles of the quantumness', I address the problem of whether Quantum Theory and Special Relativity are unrelated theories, or instead, if the one implies the other. I show how Special Relativity can indeed be derived from causality of Quantum Theory, within the computational paradigm 'the universe is a huge quantum computer', reformulating QFT as a Quantum-Computational Field Theory (QCFT). In QCFT Special Relativity emerges from the fabric of the computational network, which also naturally embeds gauge invariance. In this scheme even the quantization rule and the Planck constant can in principle be derived as emergent from the underlying causal tapestry of space-time. In this way Quantum Theory remains the only theory operating the huge computer of the universe. Is the computational paradigm only a speculative tautology (theory as simulation of reality), or does it have a scientific value? The answer will come from Occam's razor, depending on the mathematical simplicity of QCFT. Here I will just start scratching the surface of QCFT, analyzing simple field theories, including Dirac's. The number of problems and unmotivated recipes that plague QFT strongly motivates us to undertake the QCFT project, since QCFT makes all such problems manifest, and forces a re-foundation of QFT.
ERIC Educational Resources Information Center
Daellenbach, Lawrence A.; And Others
The purpose of this study was to determine the effect of computer assisted instruction (CAI) on the cognitive and affective development of college students enrolled in a principles of macroeconomics course. The hypotheses of the experiment were stated as follows: In relation to the traditional principles course, the experimental treatment will…
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.; Farmer, R.
1985-01-01
An analysis for modeling damping seals for use in Space Shuttle main engine turbomachinery is being produced. Development of a computational fluid mechanics code for turbulent, incompressible flow is required.
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
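The original BASIC listing is not reproduced here; a minimal modern sketch of the same idea (details assumed by the editor) traces rays through a spherical water drop with Snell's law and one internal reflection, and the Monte Carlo histogram of deflection angles piles up near the familiar 42°.

```python
# Editor's sketch in the spirit of the activity (assumed details; the original BASIC
# listing is not reproduced): rays hit a spherical water drop at random impact
# parameters; two refractions (Snell's law) plus one internal reflection give the
# primary-rainbow deflection, which concentrates near the familiar ~42 degrees.
import numpy as np

n_water = 1.333
rng = np.random.default_rng(1)

b = rng.random(100_000)                       # impact parameter / drop radius
i = np.arcsin(b)                              # angle of incidence
r = np.arcsin(np.sin(i) / n_water)            # refraction angle inside the drop

D = 2 * (i - r) + (np.pi - 2 * r)             # total deviation after one reflection
angle = np.degrees(np.pi - D)                 # elevation above the antisolar point

hist, edges = np.histogram(angle, bins=180, range=(0.0, 90.0))
peak = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
print(f"most probable rainbow angle: about {peak:.1f} degrees")
```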
ERIC Educational Resources Information Center
Dewhurst, D. G.; And Others
1989-01-01
An interactive computer-assisted learning program written for the BBC microcomputer to teach the basic principles of genetic engineering is described. Discussed are the hardware requirements software, use of the program, and assessment. (Author/CW)
Scale Up in Education. Volume 1: Ideas in Principle
ERIC Educational Resources Information Center
Schneider, Barbara Ed.; McDonald, Sarah-Kathryn Ed.
2006-01-01
"Scale Up in Education, Volume 1: Ideas in Principle" examines the challenges of "scaling up" from a multidisciplinary perspective. It brings together contributions from disciplines that routinely take promising innovations to scale, including medicine, business, engineering, computing, and education. Together the contributors explore appropriate…
First Principles Optical Absorption Spectra of Organic Molecules Adsorbed on Titania Nanoparticles
NASA Astrophysics Data System (ADS)
Baishya, Kopinjol; Ogut, Serdar; Mete, Ersen; Gulseren, Oguz; Ellialtioglu, Sinasi
2012-02-01
We present results from first principles computations on passivated rutile TiO2 nanoparticles in both free-standing and dye-sensitized configurations to investigate the size dependence of their optical absorption spectra. The computations are performed using time-dependent density functional theory (TDDFT) as well as GW-Bethe-Salpeter-Equation (GWBSE) methods and compared with each other. We interpret the first principles spectra for free-standing TiO2 nanoparticles within the framework of the classical Mie-Gans theory using the bulk dielectric function of TiO2. We investigate the effects of the titania support on the absorption spectra of a particular set of perylene-diimide (PDI) derived dye molecules, namely brominated PDI (Br2C24H8N2O4) and its glycine and aspartine derivatives.
The growth of language: Universal Grammar, experience, and principles of computation.
Yang, Charles; Crain, Stephen; Berwick, Robert C; Chomsky, Noam; Bolhuis, Johan J
2017-10-01
Human infants develop language remarkably rapidly and without overt instruction. We argue that the distinctive ontogenesis of child language arises from the interplay of three factors: domain-specific principles of language (Universal Grammar), external experience, and properties of non-linguistic domains of cognition including general learning mechanisms and principles of efficient computation. We review developmental evidence that children make use of hierarchically composed structures ('Merge') from the earliest stages and at all levels of linguistic organization. At the same time, longitudinal trajectories of development show sensitivity to the quantity of specific patterns in the input, which suggests the use of probabilistic processes as well as inductive learning mechanisms that are suitable for the psychological constraints on language acquisition. By considering the place of language in human biology and evolution, we propose an approach that integrates principles from Universal Grammar and constraints from other domains of cognition. We outline some initial results of this approach as well as challenges for future research. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1991-01-01
A summary and viewgraphs of a discussion presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop are included. Some of the experiences of the Scientific Computing Division at the National Center for Atmospheric Research (NCAR) in dealing with the 'data problem' are discussed. A brief history and a development of some basic mass storage system (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. MSS needs for future computing environments are discussed.
Bioinspired principles for large-scale networked sensor systems: an overview.
Jacobsen, Rune Hylsberg; Zhang, Qi; Toftegaard, Thomas Skjødeberg
2011-01-01
Biology has often been used as a source of inspiration in computer science and engineering. Bioinspired principles have found their way into network node design and research due to the appealing analogies between biological systems and large networks of small sensors. This paper provides an overview of bioinspired principles and methods such as swarm intelligence, natural time synchronization, artificial immune system and intercellular information exchange applicable for sensor network design. Bioinspired principles and methods are discussed in the context of routing, clustering, time synchronization, optimal node deployment, localization and security and privacy.
The virtual-casing principle and Helmholtz's theorem
Hanson, J. D.
2015-09-10
The virtual-casing principle is used in plasma physics to convert a Biot–Savart integration over a current distribution into a surface integral over a surface that encloses the current. In many circumstances, use of virtual casing can significantly speed up the computation of magnetic fields. In this paper, a virtual-casing principle is derived for a general vector field with arbitrary divergence and curl. This form of the virtual-casing principle is thus applicable to both magnetostatic fields and electrostatic fields. The result is then related to Helmholtz's theorem.
The virtual-casing principle and Helmholtz’s theorem
Hanson, J. D.
2015-09-10
The virtual-casing principle is used in plasma physics to convert a Biot–Savart integration over a current distribution into a surface integral over a surface that encloses the current. In many circumstances, use of virtual casing can significantly speed up the computation of magnetic fields. In this paper, a virtual-casing principle is derived for a general vector field with arbitrary divergence and curl. This form of the virtual-casing principle is thus applicable to both magnetostatic fields and electrostatic fields. The result is then related to Helmholtz’s theorem.
Genetic Algorithms and Nucleation in HIV-AIDS transition.
NASA Astrophysics Data System (ADS)
Barranon, Armando
2003-03-01
HIV to AIDS transition has been modeled via a genetic algorithm that uses the boom-boom principle and where population evolution is simulated with a cellular automaton based on the SIR model. HIV to AIDS transition is signaled by nucleation of infected cells, and low probabilities of infection are obtained for different mutation rates, in agreement with clinical results. A power law is obtained with a critical exponent close to the critical exponents of cubic and spherical percolation, colossal magnetoresistance, the Ising model, and the liquid-gas phase transition in heavy ion collisions. Computations were carried out at the UAM-A Supercomputing Lab and the author acknowledges financial support from the Division of CBI at UAM-A.
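A rough sketch of the population layer only (assumed grid rules and parameters, not the authors' model; the genetic-algorithm layer that tunes the rates is omitted): a probabilistic SIR cellular automaton in which infection spreads to susceptible neighbors and infected cells recover at a fixed rate.

```python
# Editor's rough sketch of the population layer only (assumed grid rules and
# parameters; the genetic-algorithm layer that tunes the rates is omitted):
# a probabilistic SIR cellular automaton on a periodic grid.
import numpy as np

rng = np.random.default_rng(0)
S, I, R = 0, 1, 2
N, p_inf, p_rec, steps = 100, 0.2, 0.05, 200

grid = np.full((N, N), S)
grid[N // 2, N // 2] = I                      # single seeded infection

for _ in range(steps):
    infected = (grid == I).astype(np.int8)
    # Number of infected 4-neighbors of every cell (periodic boundaries).
    nbrs = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0)
            + np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
    catch = (grid == S) & (rng.random((N, N)) < 1 - (1 - p_inf) ** nbrs)
    recover = (grid == I) & (rng.random((N, N)) < p_rec)
    grid[catch] = I
    grid[recover] = R

print("final S/I/R fractions:",
      [round(float((grid == k).mean()), 3) for k in (S, I, R)])
```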
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation. Copyright © 2015 the authors 0270-6474/15/3510112-23$15.00/0.
The Use of Audio in Computer-Based Instruction.
ERIC Educational Resources Information Center
Koroghlanian, Carol M.; Sullivan, Howard J.
This study investigated the effects of audio and text density on the achievement, time-in-program, and attitudes of 134 undergraduates. Data concerning the subjects' preexisting computer skills and experience, as well as demographic information, were also collected. The instruction in visual design principles was delivered by computer and included…
Graphical User Interface Programming in Introductory Computer Science.
ERIC Educational Resources Information Center
Skolnick, Michael M.; Spooner, David L.
Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…
Using the Computer in Evolution Studies
ERIC Educational Resources Information Center
Mariner, James L.
1973-01-01
Describes a high school biology exercise in which a computer greatly reduces time spent on calculations. Genetic equilibrium demonstrated by the Hardy-Weinberg principle and the subsequent effects of violating any of its premises are more readily understood when frequencies of alleles through many generations are calculated by the computer. (JR)
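A small sketch of the kind of calculation the exercise describes (illustrative values; the original classroom program is not reproduced): allele frequencies stay fixed under Hardy-Weinberg assumptions and shift once one premise is violated, here by adding selection against the aa genotype.

```python
# Editor's sketch (illustrative values; not the original classroom program):
# track allele frequency across generations under Hardy-Weinberg assumptions,
# then with selection against the homozygous recessive genotype.
def next_generation(p, s=0.0):
    """Frequency of allele A after one generation.

    p: frequency of allele A (q = 1 - p for allele a)
    s: selection coefficient against the aa genotype (s = 0 -> pure Hardy-Weinberg)
    """
    q = 1.0 - p
    mean_fitness = p * p + 2 * p * q + (1.0 - s) * q * q
    return (p * p + p * q) / mean_fitness     # AA contributes p^2, Aa contributes pq

p0 = 0.6
for label, s in (("no selection", 0.0), ("selection s = 0.3 against aa", 0.3)):
    p = p0
    for _ in range(20):
        p = next_generation(p, s)
    print(f"{label}: p after 20 generations = {p:.3f}")
```

With s = 0 the printed frequency stays at 0.6, the Hardy-Weinberg equilibrium; with selection it drifts upward generation by generation.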
ERIC Educational Resources Information Center
May, Donald M.; And Others
The minicomputer-based Computerized Diagnostic and Decision Training (CDDT) system described combines the principles of artificial intelligence, decision theory, and adaptive computer assisted instruction for training in electronic troubleshooting. The system incorporates an adaptive computer program which learns the student's diagnostic and…
Advanced CNC Programming (EZ-CAM). 439-366.
ERIC Educational Resources Information Center
Casey, Joe
This document contains two units for an advanced course in computer numerical control (CNC) for computer-aided manufacturing. It is intended to familiarize students with the principles and techniques necessary to create proper CNC programs using computer software. Each unit consists of an introduction, instructional objectives, learning materials,…
Commercial absorption chiller models for evaluation of control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koeppel, E.A.; Klein, S.A.; Mitchell, J.W.
1995-08-01
A steady-state computer simulation model of a direct fired double-effect water-lithium bromide absorption chiller in the parallel-flow configuration was developed from first principles. Unknown model parameters such as heat transfer coefficients were determined by matching the model's calculated state points and coefficient of performance (COP) against nominal full-load operating data and COPs obtained from a manufacturer's catalog. The model compares favorably with the manufacturer's performance ratings for varying water circuit (chilled and cooling) temperatures at full load conditions and for chiller part-load performance. The model was used (1) to investigate the effect of varying the water circuit flow rates with the chiller load and (2) to optimize chiller part-load performance with respect to the distribution and flow of the weak solution.
Combinatorial Histone Acetylation Patterns Are Generated by Motif-Specific Reactions.
Blasi, Thomas; Feller, Christian; Feigelman, Justin; Hasenauer, Jan; Imhof, Axel; Theis, Fabian J; Becker, Peter B; Marr, Carsten
2016-01-27
Post-translational modifications (PTMs) are pivotal to cellular information processing, but how combinatorial PTM patterns ("motifs") are set remains elusive. We develop a computational framework, which we provide as open source code, to investigate the design principles generating the combinatorial acetylation patterns on histone H4 in Drosophila melanogaster. We find that models assuming purely unspecific or lysine site-specific acetylation rates were insufficient to explain the experimentally determined motif abundances. Rather, these abundances were best described by an ensemble of models with acetylation rates that were specific to motifs. The model ensemble converged upon four acetylation pathways; we validated three of these using independent data from a systematic enzyme depletion study. Our findings suggest that histone acetylation patterns originate through specific pathways involving motif-specific acetylation activity. Copyright © 2016 Elsevier Inc. All rights reserved.
Engineering models for catastrophe risk and their application to insurance
NASA Astrophysics Data System (ADS)
Dong, Weimin
2002-06-01
Internationally, earthquake insurance, like all other insurance (fire, auto), has historically used an actuarial approach, that is, premium rates were determined from historical loss experience. Because earthquakes are rare events with severe consequences, poorly founded premium rates and a limited understanding of the scale of potential losses left many insurance companies insolvent after the Northridge earthquake in 1994. Along with recent advances in earth science, computer science, and engineering, computerized loss estimation methodologies based on first principles have been developed to the point that losses from destructive earthquakes can be quantified with reasonable accuracy using scientific modeling techniques. This paper introduces how engineering models can help quantify earthquake risk and how the insurance industry can use this information to manage its risk in the United States and abroad.
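A hedged illustration with entirely made-up numbers: the basic output of an engineering cat model that replaces a purely actuarial rate is an event-loss table; from annual occurrence rates and modeled losses one obtains the average annual loss, a starting point for a technical premium.

```python
# Editor's illustration with entirely made-up numbers: a miniature event-loss table
# of the kind an engineering cat model produces, reduced to an average annual loss
# (AAL) and a loaded technical premium.
events = [               # (annual occurrence rate, modeled loss to the portfolio, $)
    (0.100,   5_000_000),
    (0.020,  50_000_000),
    (0.005, 300_000_000),
]

aal = sum(rate * loss for rate, loss in events)   # expected loss per year
premium = 1.5 * aal      # illustrative loading for expenses, volatility, and profit
print(f"average annual loss: ${aal:,.0f}; loaded technical premium: ${premium:,.0f}")
```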
34 CFR 76.560 - General indirect cost rates; exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... rates; exceptions. (a) The differences between direct and indirect costs and the principles for... in the cost principles for— (1) Institutions of higher education, at 34 CFR 74.27; (2) Hospitals, at...
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
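A small illustration (an editor's sketch, not from the paper): simulating a Pólya urn, a simple history-dependent, self-reinforcing process, and computing the ordinary Shannon form H(p) = -∑_i p_i log p_i of its empirical color frequencies; the paper's point is that for such processes the three entropy concepts no longer all reduce to this form, which the sketch itself does not attempt to reproduce.

```python
# Editor's sketch (not from the paper): simulate a two-color Polya urn, a simple
# history-dependent self-reinforcing process, and report the Shannon form
# H(p) = -sum_i p_i log p_i of its empirical color frequencies.
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([1.0, 1.0])          # one ball of each of two colors
draws = []
for _ in range(10_000):
    p = counts / counts.sum()
    color = rng.choice(2, p=p)         # draw a ball at random
    counts[color] += 1.0               # replace it and add another of the same color
    draws.append(color)

freq = np.bincount(draws, minlength=2) / len(draws)
H = -(freq[freq > 0] * np.log(freq[freq > 0])).sum()
print("empirical color frequencies:", freq.round(3), " Shannon entropy:", round(H, 3))
```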
ERIC Educational Resources Information Center
Jackett, Dwane
1990-01-01
Described is a science activity which illustrates the principle of uncertainty using a computer simulation of bacterial reproduction. Procedures and results are discussed. Several illustrations of results are provided. The availability of a computer program is noted. (CW)
Business Mathematics Curriculum Guide.
ERIC Educational Resources Information Center
Ebersole, Benjamin P., Comp.; And Others
This revised course in business mathematics emphasizes computations needed for problem solving, but greater attention is focused on mathematical principles that were developed in previous grades. In addition, the course aims to develop further an understanding of business principles and practices which can be used in gainful employment and in the…
Westbury, Chris; Keith, Jeff; Briesemeister, Benny B; Hofmann, Markus J; Jacobs, Arthur M
2015-01-01
Ever since Aristotle discussed the issue in Book II of his Rhetoric, humans have attempted to identify a set of "basic emotion labels". In this paper we propose an algorithmic method for evaluating sets of basic emotion labels that relies upon computed co-occurrence distances between words in a 12.7-billion-word corpus of unselected text from USENET discussion groups. Our method uses the relationship between human arousal and valence ratings collected for a large list of words, and the co-occurrence similarity between each word and emotion labels. We assess how well the words in each of 12 emotion label sets, proposed by various researchers over the past 118 years, predict the arousal and valence ratings on a test and validation dataset, each consisting of over 5970 items. We also assess how well these emotion labels predict lexical decision residuals (LDRTs), after co-varying out the effects attributable to basic lexical predictors. We then demonstrate a generalization of our method to determine the most predictive "basic" emotion labels from among all of the putative models of basic emotion that we considered. As well as contributing empirical data towards the development of a more rigorous definition of basic emotions, our method makes it possible to derive principled computational estimates of emotionality, specifically of arousal and valence, for all words in the language.
Design for the Maintainer: Projecting Maintenance Performance from Design Characteristics.
1981-07-01
of Kahneman and Tversky (Tversky & Kahneman, 1974; Kahneman & Tversky, 1979). They have observed some general principles to which human decision makers tend to adhere. The first of these is the "representativeness heuristic". According to this principle, the question, "will event A be generated by process B?", will be decided affirmatively to the extent that the event A resembles process B. According to this principle, if failure in a computer
Modeling macro- and microstructures of gas-metal-arc welded HSLA-100 steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Z.; Debroy, T.
1999-06-01
Fluid flow and heat transfer during gas-metal-arc welding (GMAW) of HSLA-100 steel were studied using a transient, three-dimensional, turbulent heat transfer and fluid flow model. The temperature and velocity fields, cooling rates, and shape and size of the fusion and heat-affected zones (HAZs) were calculated. A continuous-cooling-transformation (CCT) diagram was computed to aid in the understanding of the observed weld metal microstructure. The computed results demonstrate that the dissipation of heat and momentum in the weld pool is significantly aided by turbulence, thus suggesting that previous modeling results based on laminar flow need to be re-examined. A comparison of the calculated fusion and HAZ geometries with their corresponding measured values showed good agreement. Furthermore, finger penetration, a unique geometric characteristic of gas-metal-arc weld pools, could be satisfactorily predicted from the model. The ability to predict these geometric variables and the agreement between the calculated and the measured cooling rates indicate the appropriateness of using a turbulence model for accurate calculations. The microstructure of the weld metal consisted mainly of acicular ferrite with small amounts of bainite. At high heat inputs, small amounts of allotriomorphic and Widmanstätten ferrite were also observed. The observed microstructures are consistent with those expected from the computed CCT diagram and the cooling rates. The results presented here demonstrate significant promise for understanding both macro- and microstructures of steel welds from the combination of the fundamental principles from both transport phenomena and phase transformation theory.
Intelligence-Augmented Rat Cyborgs in Maze Solving.
Yu, Yipeng; Pan, Gang; Gong, Yongyue; Xu, Kedi; Zheng, Nenggan; Hua, Weidong; Zheng, Xiaoxiang; Wu, Zhaohui
2016-01-01
Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.
Intelligence-Augmented Rat Cyborgs in Maze Solving
Yu, Yipeng; Pan, Gang; Gong, Yongyue; Xu, Kedi; Zheng, Nenggan; Hua, Weidong; Zheng, Xiaoxiang; Wu, Zhaohui
2016-01-01
Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains. PMID:26859299
Computational Modeling of Fluid–Structure–Acoustics Interaction during Voice Production
Jiang, Weili; Zheng, Xudong; Xue, Qian
2017-01-01
The paper presented a three-dimensional, first-principles-based fluid–structure–acoustics interaction computer model of voice production, which employed more realistic human laryngeal and vocal tract geometries. Self-sustained vibrations, the important convergent–divergent vibration pattern of the vocal folds, and entrainment of the two dominant vibratory modes were captured. Voice quality-associated parameters including the frequency, open quotient, skewness quotient, and flow rate of the glottal flow waveform were found to be well within the normal physiological ranges. The analogy between the vocal tract and a quarter-wave resonator was demonstrated. The acoustic perturbed flux and pressure inside the glottis were found to be of the same order as their incompressible counterparts, suggesting strong source–filter interactions during voice production. Such a high-fidelity computational model will be useful for investigating a variety of pathological conditions that involve complex vibrations, such as vocal fold paralysis, vocal nodules, and vocal polyps. The model is also an important step toward a patient-specific surgical planning tool that can serve as a no-risk trial-and-error platform for different procedures, such as injection of biomaterials and thyroplastic medialization. PMID:28243588
A Strategic Approach to Network Defense: Framing the Cloud
2011-03-10
accepted network defensive principles, to reduce risks associated with emerging virtualization capabilities and the scalability of cloud computing. This expanded defensive framework can assist enterprise networking and cloud computing architects in designing more secure systems.
ERIC Educational Resources Information Center
School Science Review, 1984
1984-01-01
Discusses: (1) Brewster's angle in the elementary laboratory; (2) color mixing by computer; (3) computer iteration at A-level; (4) a simple probe for pressure measurement; (5) the measurement of distance using a laser; and (6) an activity on Archimede's principle. (JN)
Design Principles of Next-Generation Digital Gaming for Education.
ERIC Educational Resources Information Center
Squire, Kurt; Jenkins, Henry; Holland, Walter; Miller, Heather; O'Driscoll, Alice; Tan, Katie Philip; Todd, Katie.
2003-01-01
Discusses the rapid growth of digital games, describes research at MIT that is exploring the potential of digital games for supporting learning, and offers hypotheses about the design of next-generation educational video and computer games. Highlights include simulations and games; and design principles, including context and using information to…
Software Engineering Principles for Courseware Development.
ERIC Educational Resources Information Center
Magel, Kenneth
1980-01-01
Courseware (computer based curriculum materials) development should follow the lessons learned by software engineers. The most important of 28 principles of software development presented here include a stress on human readability, the importance of early planning and analysis, the need for independent evaluation, and the need to be flexible.…
The Application of Artificial Intelligence Principles to Teaching and Training
ERIC Educational Resources Information Center
Shaw, Keith
2008-01-01
This paper compares and contrasts the use of AI principles in industrial training with more normal computer-based training (CBT) approaches. A number of applications of CBT are illustrated (for example simulations, tutorial presentations, fault diagnosis, management games, industrial relations exercises) and compared with an alternative approach…
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Bernoulli's Principle: Science as a Human Endeavor
ERIC Educational Resources Information Center
McCarthy, Deborah
2008-01-01
What do the ideas of Daniel Bernoulli--an 18th-century Swiss mathematician, physicist, natural scientist, and professor--and your students' next landing of the space shuttle via computer simulation have in common? Because of his contribution, referred to in physical science as Bernoulli's principle, modern flight is possible. The mini learning-cycle…
Writing Better Software for Economics Principles Textbooks.
ERIC Educational Resources Information Center
Walbert, Mark S.
1989-01-01
Examines computer software currently available with most introductory economics textbooks. Compares what is available with what should be available in order to meet the goal of effectively using the microcomputer to teach economic principles. Recommends 14 specific pedagogical changes that should be made in order to improve current designs. (LS)
This course will introduce students to the fundamental principles of water system adaptation to hydrological changes, with emphasis on data analysis and interpretation, technical planning, and computational modeling. Starting with real-world scenarios and adaptation needs, the co...
Bridging Levels of Analysis: Learning, Information Theory, and the Lexicon
ERIC Educational Resources Information Center
Dye, Melody
2017-01-01
While information theory is typically considered in the context of modern computing and engineering, its core mathematical principles provide a potentially useful lens through which to consider human language. Like the artificial communication systems such principles were invented to describe, natural languages involve a sender and receiver, a…
The Readability of Principles of Macroeconomics Textbooks
ERIC Educational Resources Information Center
Tinkler, Sarah; Woods, James
2013-01-01
The authors evaluated principles of macroeconomics textbooks for readability using Coh-Metrix, a computational linguistics tool. Additionally, they conducted an experiment on Amazon's Mechanical Turk Web site in which participants ranked the readability of text samples. There was a wide range of scores on readability indexes both among…
Understanding Phylogenies in Biology: The Influence of a Gestalt Perceptual Principle
ERIC Educational Resources Information Center
Novick, Laura R.; Catley, Kefyn M.
2007-01-01
Cladograms, hierarchical diagrams depicting evolutionary histories among (groups of) species, are commonly drawn in 2 informationally equivalent formats--tree and ladder. The authors hypothesize that these formats are not computationally equivalent because the Gestalt principle of good continuation obscures the hierarchical structure of ladders.…
Teaching Practices in Principles of Economics Courses at Michigan Community Colleges.
ERIC Educational Resources Information Center
Utech, Claudia J.; Mosti, Patricia A.
1995-01-01
Presents findings from a study of teaching practices in Principles of Economics courses at Michigan's 29 community colleges. Describes course prerequisites; textbooks used; lecture supplements; and the use of experiential learning tools, such as computers and field trips. Presents three recommendations for improving student preparation in…
The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms
NASA Astrophysics Data System (ADS)
Raghavan, Prabhakar
By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.
Kannan, Srimathi; Schulz, Amy; Israel, Barbara; Ayra, Indira; Weir, Sheryl; Dvonch, Timothy J.; Rowe, Zachary; Miller, Patricia; Benjamin, Alison
2008-01-01
Background Computer tailoring and personalizing recommendations for dietary health-promoting behaviors are in accordance with community-based participatory research (CBPR) principles, which emphasize research that benefits the participants and community involved. Objective To describe the CBPR process utilized to computer-generate and disseminate personalized nutrition feedback reports (NFRs) for Detroit Healthy Environments Partnership (HEP) study participants. Methods The CBPR process included discussion and feedback from HEP partners on several draft personalized reports. The nutrition feedback process included defining the feedback objectives; prioritizing the nutrients; customizing the report design; reviewing and revising the NFR template and readability; producing and disseminating the report; and participant follow-up. Lessons Learned Application of CBPR principles in designing the NFR resulted in a reader-friendly product with useful recommendations to promote heart health. Conclusions A CBPR process can enhance computer tailoring of personalized NFRs to address racial and socioeconomic disparities in cardiovascular disease (CVD). PMID:19337572
Universal Linear Motor Driven Leg Press Dynamometer and Concept of Serial Stretch Loading.
Hamar, Dušan
2015-08-24
The paper deals with the background and principles of a universal linear motor driven leg press dynamometer and the concept of serial stretch loading. The device is based on two computer-controlled linear motors mounted to horizontal rails. As the motors can maintain either a constant resistance force in a selected position or a constant velocity in both directions, the system allows simulation of any mode of muscle contraction. In addition, it can also generate defined serial stretch stimuli in the form of repeated force peaks. This is achieved by short segments of reversed velocity (in the concentric phase) or acceleration (in the eccentric phase). Such stimuli, generated at a rate of 10 Hz, have proven to be a more efficient means of improving the rate of force development. This capability not only affects performance in many sports, but also plays a substantial role in the prevention of falls and their consequences. The universal linear motor driven and computer-controlled dynamometer, with its unique feature of generating serial stretch stimuli, seems to be an efficient and useful tool for enhancing strength training effects on neuromuscular function not only in athletes, but also in the senior population and in rehabilitation patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.
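To make the idea of a sector allocation constraint concrete, here is a minimal, hypothetical sketch in Python: a toy flux-balance problem in which one linear inequality caps the proteome fraction available to the fluxes, in the spirit of the coarse-grained constraints described above. The stoichiometry, protein costs, and budget are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (v1) makes intermediate M, growth (v2) consumes M.
# Variables v = [v1, v2]; objective: maximize growth rate v2.
c = np.array([0.0, -1.0])                 # linprog minimizes, so negate growth

A_eq = np.array([[1.0, -1.0]])            # mass balance on M: v1 - v2 = 0
b_eq = np.array([0.0])

# Proteome sector constraint: (protein cost per unit flux) . v <= sector budget.
protein_cost = np.array([0.02, 0.05])     # invented costs, g protein per unit flux
phi_max = 0.30                            # invented proteome mass-fraction budget
A_ub = protein_cost.reshape(1, -1)
b_ub = np.array([phi_max])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])
print("fluxes:", res.x, "growth rate:", -res.fun)
```

Tightening or loosening phi_max shifts the optimal growth rate, which is the qualitative behavior the sector constraints impose on the ME model.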
Como-Lesko, N; Primavera, L H; Szeszko, P R
1994-08-01
This study investigated high school students' marijuana usage patterns in relation to their harmfulness ratings of 15 licit and illicit drugs, perceived negative consequences from using marijuana, and types of defense mechanisms employed. Subjects were classified into one of five pattern-of-use groups based on marijuana usage: principled nonusers, nonusers, light users, moderate users, and heavy users. Principled nonusers (individuals who have never used marijuana and would not do so if it were legalized) rated marijuana, hashish, cocaine, and alcohol as significantly more harmful than heavy users did. A cluster analysis of the drugs' harmfulness ratings best fit a three-cluster solution; the clusters were named medicinal drugs, recreational drugs, and hard drugs. In general, principled nonusers rated negative consequences from using marijuana as significantly more likely to occur than did other groups. Principled nonusers and heavy users utilized reversal from the Defense Mechanism Inventory, which includes repression and denial, significantly more than nonusers, indicating some trait common to the two extreme pattern-of-use groups.
First principle chemical kinetics in zeolites: the methanol-to-olefin process as a case study.
Van Speybroeck, Veronique; De Wispelaere, Kristof; Van der Mynsbrugge, Jeroen; Vandichel, Matthias; Hemelsoet, Karen; Waroquier, Michel
2014-11-07
To optimally design next generation catalysts a thorough understanding of the chemical phenomena at the molecular scale is a prerequisite. Apart from qualitative knowledge on the reaction mechanism, it is also essential to be able to predict accurate rate constants. Molecular modeling has become a ubiquitous tool within the field of heterogeneous catalysis. Herein, we review current computational procedures to determine chemical kinetics from first principles, thus by using no experimental input and by modeling the catalyst and reacting species at the molecular level. Therefore, we use the methanol-to-olefin (MTO) process as a case study to illustrate the various theoretical concepts. This process is a showcase example where rational design of the catalyst was for a long time performed on the basis of trial and error, due to insufficient knowledge of the mechanism. For theoreticians the MTO process is particularly challenging as the catalyst has an inherent supramolecular nature, for which not only the Brønsted acidic site is important but also organic species, trapped in the zeolite pores, must be essentially present during active catalyst operation. All these aspects give rise to specific challenges for theoretical modeling. It is shown that present computational techniques have matured to a level where accurate enthalpy barriers and rate constants can be predicted for reactions occurring at a single active site. The comparison with experimental data such as apparent kinetic data for well-defined elementary reactions has become feasible as current computational techniques also allow predicting adsorption enthalpies with reasonable accuracy. Real catalysts are truly heterogeneous in a space- and time-like manner. Future theory developments should focus on extending our view towards phenomena occurring at longer length and time scales and integrating information from various scales towards a unified understanding of the catalyst. Within this respect molecular dynamics methods complemented with additional techniques to simulate rare events are now gradually making their entrance within zeolite catalysis. Recent applications have already given a flavor of the benefit of such techniques to simulate chemical reactions in complex molecular environments.
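As a concrete illustration of turning a computed barrier into a rate constant, the sketch below evaluates the standard Eyring transition-state-theory expression in Python. The barrier value and temperature are arbitrary placeholders for illustration, not results from the MTO study.

```python
import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J s
R  = 8.314462618     # gas constant, J/(mol K)

def eyring_rate(delta_G_act_kJmol, T):
    """Transition-state-theory rate constant k = (kB*T/h) * exp(-dG_act / (R*T))."""
    dG = delta_G_act_kJmol * 1e3           # convert kJ/mol to J/mol
    return kB * T / h * np.exp(-dG / (R * T))

# Placeholder free-energy barrier (kJ/mol) and temperature (K), for illustration only.
print(f"k = {eyring_rate(120.0, 673.0):.3e} s^-1")
```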
NASA Astrophysics Data System (ADS)
Kurtén, Theo; Ortega, Ismael; Kupiainen, Oona; Olenius, Tinja; Loukonen, Ville; Reiman, Heidi; McGrath, Matthew; Vehkamäki, Hanna
2013-04-01
Despite the importance of atmospheric particle formation for both climate and air quality, both experiments and non-empirical models using e.g. sulfuric acid, ammonia and water as condensing vapors have so far been unable to reproduce atmospheric observations using realistic trace gas concentrations. Recent experimental and theoretical evidence has shown that this mystery is likely resolved by amines. Combining first-principles evaporation rates for sulfuric acid - dimethylamine clusters with cluster kinetic modeling, we show that even sub-ppt concentrations of amines, together with atmospherically realistic concentrations of sulfuric acid, result in formation rates close to those observed in the atmosphere. Our simulated cluster formation rates are also close to, though somewhat larger than, those measured at the CLOUD experiment in CERN for both sulfuric acid - ammonia and sulfuric acid - dimethylamine systems. A sensitivity analysis indicates that the remaining discrepancy for the sulfuric acid - amine particle formation rates is likely caused by steric hindrances to cluster formation (due to alkyl groups of the amine molecules) rather than by significant errors in the evaporation rates. First-principles molecular dynamic and reaction kinetic modeling shed further light on the microscopic physics and chemistry of sulfuric acid - amine clusters. For example, while the number and type of hydrogen bonds in the clusters typically reach their equilibrium values on a picosecond timescale, and the overall bonding patterns predicted by traditional "static" quantum chemical calculations seem to be stable, the individual atoms participating in the hydrogen bonds continuously change at atmospherically realistic temperatures. From a chemical reactivity perspective, we have also discovered a surprising phenomenon: clustering with sulfuric acid molecules slightly increases the activation energy required for the abstraction of alkyl hydrogens from amine molecules. This implies that the oxidation rate of amines by OH and possibly other oxidants may be decreased by clustering, thus prolonging the chemical lifetime of amines in the air.
ERIC Educational Resources Information Center
Wilkerson-Jerde, Michelle Hoda
2014-01-01
There are increasing calls to prepare K-12 students to use computational tools and principles when exploring scientific or mathematical phenomena. The purpose of this paper is to explore whether and how constructionist computer-supported collaborative environments can explicitly engage students in this practice. The Categorizer is a…
Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters
ERIC Educational Resources Information Center
Younge, Andrew J.
2016-01-01
With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…
How Science Students Can Learn about Unobservable Phenomena Using Computer-Based Analogies
ERIC Educational Resources Information Center
Trey, L.; Khan, S.
2008-01-01
A novel instructional computer simulation that incorporates a dynamic analogy to represent Le Chatelier's Principle was designed to investigate the contribution of this feature to students' understanding. Two groups of 12th grade Chemistry students (n=15) interacted with the computer simulation during the study. Both groups did the same…
Some Analogies between Computer Programming and the Composing Process.
ERIC Educational Resources Information Center
Skulicz, Matthew
Since there are similarities between the process of writing computer programs and the process of writing successful expository prose, a student's knowledge of computer programing can contribute to the understanding of some principles of composition. The establishment of a clear objective is the first priority of both the writer and the programer,…
Computational techniques in tribology and material science at the atomic level
NASA Technical Reports Server (NTRS)
Ferrante, J.; Bozzolo, G. H.
1992-01-01
Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.
ERIC Educational Resources Information Center
DeVillar, Robert A.; Faltis, Christian J.
This book offers an alternative conceptual framework for effectively incorporating computer use within the heterogeneous classroom. The framework integrates Vygotskian social-learning theory with Allport's contact theory and the principles of cooperative learning. In Part 1 an essential element is identified for each of these areas. These are, in…
ERIC Educational Resources Information Center
Basile, Anthony; D'Aquila, Jill M.
2002-01-01
Accounting students received either traditional instruction (n=46) or used computer-mediated communication and WebCT course management software. There were no significant differences in attitudes about the course. However, computer users were more positive about course delivery and course management tools. (Contains 17 references.) (SK)
ERIC Educational Resources Information Center
da Silveira, Pedro Rodrigo Castro
2014-01-01
This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…
Integrating Data Base into the Elementary School Science Program.
ERIC Educational Resources Information Center
Schlenker, Richard M.
This document describes seven science activities that combine scientific principles and computers. The objectives for the activities are to show students how the computer can be used as a tool to store and arrange scientific data, provide students with experience using the computer as a tool to manage scientific data, and provide students with…
Modular modelling with Physiome standards
Nickerson, David P.; Nielsen, Poul M. F.; Hunter, Peter J.
2016-01-01
Key points: The complexity of computational models is increasing, supported by research in modelling tools and frameworks, but relatively little thought has gone into design principles for complex models. We propose a set of design principles for complex model construction with the Physiome standard modelling protocol CellML. By following the principles, models are generated that are extensible and are themselves suitable for reuse in larger models of increasing complexity. We illustrate these principles with examples including an architectural prototype linking, for the first time, electrophysiology, thermodynamically compliant metabolism, signal transduction, gene regulation and synthetic biology. The design principles complement other Physiome research projects, facilitating the application of virtual experiment protocols and model analysis techniques to assist the modelling community in creating libraries of composable, characterised and simulatable quantitative descriptions of physiology. Abstract: The ability to produce and customise complex computational models has great potential to have a positive impact on human health. As the field develops towards whole‐cell models and linking such models in multi‐scale frameworks to encompass tissue, organ, or organism levels, reuse of previous modelling efforts will become increasingly necessary. Any modelling group wishing to reuse existing computational models as modules for their own work faces many challenges in the context of construction, storage, retrieval, documentation and analysis of such modules. Physiome standards, frameworks and tools seek to address several of these challenges, especially for models expressed in the modular protocol CellML. Aside from providing a general ability to produce modules, there has been relatively little research work on architectural principles of CellML models that will enable reuse at larger scales. To complement and support the existing tools and frameworks, we develop a set of principles to address this consideration. The principles are illustrated with examples that couple electrophysiology, signalling, metabolism, gene regulation and synthetic biology, together forming an architectural prototype for whole‐cell modelling (including human intervention) in CellML. Such models illustrate how testable units of quantitative biophysical simulation can be constructed. Finally, future relationships between modular models so constructed and Physiome frameworks and tools are discussed, with particular reference to how such frameworks and tools can in turn be extended to complement and gain more benefit from the results of applying the principles. PMID:27353233
Accuracy of CBCT for volumetric measurement of simulated periapical lesions.
Ahlowalia, M S; Patel, S; Anwar, H M S; Cama, G; Austin, R S; Wilson, R; Mannocci, F
2013-06-01
To compare the accuracy of cone beam computed tomography (CBCT) and micro-computed tomography (μCT) when measuring the volume of bone cavities. Ten irregularly shaped cavities of varying dimensions were created in bovine bone specimens using a rotary diamond bur. The samples were then scanned using the Accuitomo 3D CBCT scanner. The scanned information was converted to the Digital Imaging and Communications in Medicine (DICOM) format ready for analysis. Once formatted, 10 trained and calibrated examiners segmented the scans and measured the volumes of the lesions. Intra/interexaminer agreement was assessed by each examiner re-segmenting each scan after a 2-week interval. Micro-CT scans were analysed by a single examiner. To achieve a physical reading of the artificially created cavities, replicas were created using dimensionally stable silicone impression material. After measuring the mass of each impression sample, the volume was calculated by dividing the mass of each sample by the density of the set impression material. Further corroboration of these measurements was obtained by employing Archimedes' principle to measure the volume of each impression sample. Intraclass correlation was used to assess agreement. Both CBCT (mean volume: 175.9 mm3) and μCT (mean volume: 163.1 mm3) showed a high degree of agreement (intraclass correlation coefficient >0.9) when compared to both weighed and 'Archimedes' principle' measurements (mean volume: 177.7 and 182.6 mm3, respectively). Cone beam computed tomography is an accurate means of measuring the volume of artificially created bone cavities in an ex vivo model. This may provide a valuable tool for monitoring the healing rate of apical periodontitis; further investigations are warranted. © 2012 International Endodontic Journal. Published by Blackwell Publishing Ltd.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., Specification Test Procedures for Monitoring Systems for Effluent Stream Gas Volumetric Flow Rate E Appendix E... Stream Gas Volumetric Flow Rate 1. Principle and applicability. 1.1 Principle. Effluent stream gas... method is applicable to subparts which require continuous gas volumetric flow rate measurement...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Specification Test Procedures for Monitoring Systems for Effluent Stream Gas Volumetric Flow Rate E Appendix E... Stream Gas Volumetric Flow Rate 1. Principle and applicability. 1.1 Principle. Effluent stream gas... method is applicable to subparts which require continuous gas volumetric flow rate measurement...
47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
....1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An... operator will bear the burden of proof to demonstrate, using the principles set forth below, that the...; (2) Packaging, including marketing and other fees; (3) Talent fees; and (4) A reasonable overhead...
Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.
Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias
2018-05-01
In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
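A minimal sketch of the kind of evaluation such a benchmark relies on is shown below: binning a ground-truth spike train and correlating it with an inferred rate trace. The bin width, synthetic data, and scoring details are assumptions for illustration, not the challenge's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.04, 60.0                          # 40 ms bins over 60 s (assumed)
n_bins = int(T / dt)

true_rate = 2.0 + 1.5 * np.sin(np.linspace(0, 20, n_bins))   # synthetic firing rate (Hz)
true_counts = rng.poisson(true_rate * dt)                     # ground-truth spikes per bin

# Stand-in for an inference algorithm's output: the true rate plus noise.
inferred = true_rate * dt + rng.normal(0.0, 0.05, n_bins)

# Pearson correlation between the inferred trace and the binned ground truth.
score = np.corrcoef(inferred, true_counts)[0, 1]
print(f"correlation score: {score:.3f}")
```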
The Research of the Parallel Computing Development from the Angle of Cloud Computing
NASA Astrophysics Data System (ADS)
Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun
2017-10-01
Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing has brought parallel computing into people’s lives. First, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Second, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce, respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
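To make the contrast between the models concrete, here is a minimal MapReduce-style word count in Python, with the map phase farmed out to a process pool and a simple reduce step. It is an illustrative sketch of the programming model only, not a distributed implementation.

```python
from multiprocessing import Pool
from collections import Counter
from functools import reduce

def map_phase(chunk):
    # map: each worker counts words in its own chunk of text
    return Counter(chunk.split())

def reduce_phase(acc, partial):
    # reduce: merge partial counts into a single result
    acc.update(partial)
    return acc

if __name__ == "__main__":
    chunks = ["the quick brown fox", "jumps over the lazy dog", "the fox sleeps"]
    with Pool(processes=2) as pool:
        partials = pool.map(map_phase, chunks)   # parallel map phase
    totals = reduce(reduce_phase, partials, Counter())
    print(totals.most_common(3))
```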
Bioinspired Principles for Large-Scale Networked Sensor Systems: An Overview
Jacobsen, Rune Hylsberg; Zhang, Qi; Toftegaard, Thomas Skjødeberg
2011-01-01
Biology has often been used as a source of inspiration in computer science and engineering. Bioinspired principles have found their way into network node design and research due to the appealing analogies between biological systems and large networks of small sensors. This paper provides an overview of bioinspired principles and methods such as swarm intelligence, natural time synchronization, artificial immune system and intercellular information exchange applicable for sensor network design. Bioinspired principles and methods are discussed in the context of routing, clustering, time synchronization, optimal node deployment, localization and security and privacy. PMID:22163841
Quantum corrections to newtonian potential and generalized uncertainty principle
NASA Astrophysics Data System (ADS)
Scardigli, Fabio; Lambiase, Gaetano; Vagenas, Elias
2017-08-01
We use the leading quantum corrections to the newtonian potential to compute the deformation parameter of the generalized uncertainty principle. By assuming only General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum, our calculation gives, to first order, a specific numerical result. We briefly discuss the physical meaning of this value and compare it with the previously obtained bounds on the generalized uncertainty principle deformation parameter.
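For orientation, a commonly used form of the generalized uncertainty principle with deformation parameter β is sketched below; this is the generic textbook form, not necessarily the exact convention adopted by the authors.

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left[\,1 + \beta \left(\frac{\Delta p}{m_{\mathrm{Pl}}\, c}\right)^{2}\right],
\qquad
m_{\mathrm{Pl}} = \sqrt{\frac{\hbar c}{G}} .
```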
Space Ultrareliable Modular Computer (SUMC) instruction simulator
NASA Technical Reports Server (NTRS)
Curran, R. T.
1972-01-01
The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.
Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system
NASA Technical Reports Server (NTRS)
1974-01-01
A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
Computer Science Research Funding: How Much Is Too Little?
2009-06-01
Bioinformatics; parallel computing; computational biology; principles of programming; computational neuroscience; real-time and embedded systems; scientific...National Security Agency (NSA) • Missile Defense Agency (MDA) and others. The various research programs have been coordinated through the DDR&E...DOD funding included only DARPA and OSD programs. FY07 and FY08 PBR funding included DARPA, NSA, some of the Services’ basic and applied research
Beyond Moore’s technologies: operation principles of a superconductor alternative
Klenov, Nikolay V; Bakurskiy, Sergey V; Kupriyanov, Mikhail Yu; Gudkov, Alexander L; Sidorenko, Anatoli S
2017-01-01
The predictions of Moore’s law are considered by experts to be valid until 2020, giving rise to “post-Moore’s” technologies afterwards. Energy efficiency is one of the major challenges in high-performance computing that must be addressed. Superconductor digital technology is a promising post-Moore’s alternative for the development of supercomputers. In this paper, we consider the operation principles of energy-efficient superconductor logic and memory circuits, with a short retrospective review of their evolution. We analyze their shortcomings with respect to computer circuit design. Possible ways of further research are outlined. PMID:29354341
2015-04-27
MODELING OF C-S-H Material chemistry level modeling following the principles and techniques commonly grouped under Computational Material Science is...Henmi, C. and Kusachi, I. Monoclinic tobermorite from Fuka, Bitchu-cho, Okayama Prefecture. Japan J. Min. Petr. Econ. Geol. (1989) 84:374-379. [22...31] Liu, Y. et al. First principles study of the stability and mechanical properties of MC (M=Ti, V, Zr, Nb, Hf and Ta) compounds. Journal of Alloys and Compounds. (2014) 582:500-504.
First-principles study of the infrared spectra of the ice Ih (0001) surface
Pham, T. Anh; Huang, P.; Schwegler, E.; ...
2012-08-22
Here, we present a study of the infrared (IR) spectra of the (0001) deuterated ice surface based on first-principles molecular dynamics simulations. The computed spectra show good agreement with available experimental IR measurements. We identified the bonding configurations associated with specific features in the spectra, allowing us to provide a detailed interpretation of the IR signals. We computed the spectra of several proton-ordered and disordered models of the (0001) surface of ice, and we found that IR spectra do not appear to be a sensitive probe of the microscopic arrangement of protons at ice surfaces.
A Human Factors Framework for Payload Display Design
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Hutchinson, Sonya L.
1998-01-01
During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.
ERIC Educational Resources Information Center
School Science Review, 1985
1985-01-01
Presents 23 experiments, demonstrations, activities, and computer programs in biology, chemistry, and physics. Topics include lead in petrol, production of organic chemicals, reduction of water, enthalpy, X-ray diffraction model, nuclear magnetic resonance spectroscopy, computer simulation for additive mixing of colors, Archimedes' principle, and…
Programmable and autonomous computing machine made of biomolecules
Benenson, Yaakov; Paz-Elizur, Tamar; Adar, Rivka; Keinan, Ehud; Livneh, Zvi; Shapiro, Ehud
2013-01-01
Devices that convert information from one form into another according to a definite procedure are known as automata. One such hypothetical device is the universal Turing machine [1], which stimulated work leading to the development of modern computers. The Turing machine and its special cases [2], including finite automata [3], operate by scanning a data tape, whose striking analogy to information-encoding biopolymers inspired several designs for molecular DNA computers [4-8]. Laboratory-scale computing using DNA and human-assisted protocols has been demonstrated [9-15], but the realization of computing devices operating autonomously on the molecular scale remains rare [16-20]. Here we describe a programmable finite automaton comprising DNA and DNA-manipulating enzymes that solves computational problems autonomously. The automaton’s hardware consists of a restriction nuclease and ligase, the software and input are encoded by double-stranded DNA, and programming amounts to choosing appropriate software molecules. Upon mixing solutions containing these components, the automaton processes the input molecule via a cascade of restriction, hybridization and ligation cycles, producing a detectable output molecule that encodes the automaton’s final state, and thus the computational result. In our implementation 10^12 automata sharing the same software run independently and in parallel on inputs (which could, in principle, be distinct) in 120 μl solution at room temperature at a combined rate of 10^9 transitions per second with a transition fidelity greater than 99.8%, consuming less than 10^-10 W. PMID:11719800
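The notion of a two-state finite automaton scanning an input tape is easy to mirror in software. The sketch below runs an abstract automaton over a symbol string; the states, alphabet, and acceptance rule are invented for illustration and have nothing to do with the molecular encoding itself.

```python
def run_automaton(transitions, start, accepting, tape):
    """Scan the tape symbol by symbol, following the transition table."""
    state = start
    for symbol in tape:
        state = transitions[(state, symbol)]
    return state, state in accepting

# Example: a two-state automaton that accepts strings with an even number of 'b's.
transitions = {("S0", "a"): "S0", ("S0", "b"): "S1",
               ("S1", "a"): "S1", ("S1", "b"): "S0"}
print(run_automaton(transitions, "S0", {"S0"}, "abba"))   # ('S0', True)
```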
The Handicap Principle for Trust in Computer Security, the Semantic Web and Social Networking
NASA Astrophysics Data System (ADS)
Ma, Zhanshan (Sam); Krings, Axel W.; Hung, Chih-Cheng
Communication is a fundamental function of life, and it exists in almost all living things: from single-cell bacteria to human beings. Communication, together with competition and cooperation, are three fundamental processes in nature. Computer scientists are familiar with the study of competition or 'struggle for life' through Darwin's evolutionary theory, or even evolutionary computing. They may be equally familiar with the study of cooperation or altruism through the Prisoner's Dilemma (PD) game. However, they are likely to be less familiar with the theory of animal communication. The objective of this article is three-fold: (i) To suggest that the study of animal communication, especially the honesty (reliability) of animal communication, in which some significant advances in behavioral biology have been achieved in the last three decades, may be on the verge of spawning important cross-disciplinary research similar to that generated by the study of cooperation with the PD game. One of the far-reaching advances in the field is marked by the publication of "The Handicap Principle: a Missing Piece of Darwin's Puzzle" by Zahavi (1997). The 'Handicap' principle [34][35], which states that communication signals must be costly in some proper way to be reliable (honest), is best elucidated with evolutionary games, e.g., the Sir Philip Sidney (SPS) game [23]. Accordingly, we suggest that the Handicap principle may serve as a fundamental paradigm for trust research in computer science. (ii) To suggest to computer scientists that their expertise in modeling computer networks may help behavioral biologists in their study of the reliability of animal communication networks. This is largely due to the historical reason that, until the last decade, animal communication was studied with the dyadic paradigm (sender-receiver) rather than with the network paradigm. (iii) To pose several open questions, the answers to which may bring fresh insights to trust research in computer science, especially secure and resilient computing, the semantic web, and social networking. One important thread unifying the three aspects is evolutionary game theory modeling or its extensions with survival analysis and agreement algorithms [19][20], which offer powerful game models for describing time-, space-, and covariate-dependent frailty (uncertainty and vulnerability) and deception (honesty).
Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang
2013-01-01
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941
Principled negotiation and distributed optimization for advanced air traffic management
NASA Astrophysics Data System (ADS)
Wangermann, John Paul
Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents that have different, competing interests but have a shared interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated leading to iterative optimization of the system. Principled negotiation is well-suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check the proposal maintains required aircraft separation. If it does, the proposal is either accepted or passed to agents whose trajectories change as part of the proposal for approval. Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring aircraft safety. This thesis describes how an aircraft/airspace system using principled negotiation operates, and reports simulation results on the concept. The results show safety is maintained while aircraft have freedom to optimize their operations.
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
Monitoring system of multiple fire fighting based on computer vision
NASA Astrophysics Data System (ADS)
Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke
2010-10-01
With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can adjust to the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment, and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system runs well in tests, with high reliability, low cost, and easy node expansion, giving it a bright prospect for application and popularization.
NASA Astrophysics Data System (ADS)
Cannon, William R.; Baker, Scott E.
2017-10-01
Comprehensive and predictive simulation of coupled reaction networks has long been a goal of biology and other fields. Currently, metabolic network models that utilize enzyme mass action kinetics have predictive power but are limited in scope and application by the fact that the determination of enzyme rate constants is laborious and low throughput. We present a statistical thermodynamic formulation of the law of mass action for coupled reactions at both steady states and non-stationary states. The formulation uses chemical potentials instead of rate constants. When used to model deterministic systems, the method corresponds to a rescaling of the time-dependent reactions in such a way that steady states can be reached on the same time scale but with significantly fewer computational steps. The relationships between reaction affinities, free energy changes and generalized detailed balance are central to the discussion. The significance for applications in systems biology is discussed, as is the concept and assumption of maximum entropy production rate as a biological principle that links thermodynamics to natural selection.
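For contrast with the chemical-potential formulation described above, the sketch below integrates the conventional rate-constant form of mass action for a reversible A ⇌ B reaction until it approaches steady state. The rate constants and initial condition are arbitrary placeholders; this illustrates the formulation the paper reformulates, not the paper's own method.

```python
import numpy as np
from scipy.integrate import solve_ivp

kf, kr = 2.0, 0.5            # forward/reverse rate constants (arbitrary)

def mass_action(t, y):
    A, B = y
    r = kf * A - kr * B      # net conventional mass-action rate
    return [-r, r]

sol = solve_ivp(mass_action, (0.0, 10.0), [1.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 6))
print(sol.y[:, -1])          # approaches the equilibrium ratio B/A = kf/kr = 4
```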
An Emerging Technology Curriculum. Education for Technology Employment Project. Final Report.
ERIC Educational Resources Information Center
Harms, Dan; And Others
Individualized, competency-based curriculum materials were developed for a course on Principles of Technology, Units 1-6. New and updated curriculum materials in Drafting and Electronics and the Principles of Technology units were pilot tested in area vocational center settings in Illinois. A computer maintenance program was also developed but not…
Applying Minimal Manual Principles for Documentation of Graphical User Interfaces.
ERIC Educational Resources Information Center
Nowaczyk, Ronald H.; James, E. Christopher
1993-01-01
Investigates the need to include computer screens in documentation for software using a graphical user interface. Describes the uses and purposes of "minimal manuals" and their principles. Studies student reaction to their use of one of three on-screen manuals: screens, icon, and button. Finds some benefit for including icon and button…
Designing Serious Game Interventions for Individuals with Autism
ERIC Educational Resources Information Center
Whyte, Elisabeth M.; Smyth, Joshua M.; Scherf, K. Suzanne
2015-01-01
The design of "Serious games" that use game components (e.g., storyline, long-term goals, rewards) to create engaging learning experiences has increased in recent years. We examine of the core principles of serious game design and examine the current use of these principles in computer-based interventions for individuals with autism.…
An Educational Development Tool Based on Principles of Formal Ontology
ERIC Educational Resources Information Center
Guzzi, Rodolfo; Scarpanti, Stefano; Ballista, Giovanni; Di Nicolantonio, Walter
2005-01-01
Computer science provides with virtual laboratories, places where one can merge real experiments with the formalism of algorithms and mathematics and where, with the advent of multimedia, sounds and movies can also be added. In this paper we present a method, based on principles of formal ontology, allowing one to develop interactive educational…
ERIC Educational Resources Information Center
Jager, Sake, Ed.; Bradley, Linda, Ed.; Meima, Estelle J., Ed.; Thouësny, Sylvie, Ed.
2014-01-01
The theme of EUROCALL 2014 was "CALL Design: Principles and Practice," which attracted approximately 280 practitioners, researchers and students from computer-assisted language learning (CALL) and related disciplines of more than 40 different nationalities. Over 170 presentations were delivered on topics related to this overarching…
Dispersion in Spherical Water Drops.
ERIC Educational Resources Information Center
Eliason, John C., Jr.
1989-01-01
Discusses a laboratory exercise simulating the paths of light rays through spherical water drops by applying principles of ray optics and geometry. Describes four parts: determining the output angles, computer simulation, explorations, model testing, and solutions. Provides a computer program and some diagrams. (YP)
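A short sketch of the kind of computation such an exercise involves: tracing the total deviation of a ray through a spherical drop with one internal reflection and locating the minimum-deviation (rainbow) angle. The refractive index and sampling are assumptions for illustration; this is not the article's own program.

```python
import numpy as np

n = 1.333                                            # refractive index of water (assumed)
theta_i = np.radians(np.linspace(0.1, 89.9, 2000))   # incidence angles
theta_r = np.arcsin(np.sin(theta_i) / n)             # Snell's law at entry

# Total deviation for one internal reflection (primary bow):
# refraction in + internal reflection + refraction out.
D = np.pi + 2 * theta_i - 4 * theta_r

i_min = np.argmin(D)
print("minimum deviation:", round(np.degrees(D[i_min]), 1), "deg")   # about 138 deg
print("rainbow angle:", round(180 - np.degrees(D[i_min]), 1), "deg") # about 42 deg
```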
ERIC Educational Resources Information Center
Cárdenas-Claros, Mónica Stella
2015-01-01
This paper reports on the findings of two qualitative exploratory studies that sought to investigate design features of help options in computer-based L2 listening materials. Informed by principles of participatory design, language learners, software designers, language teachers, and a computer programmer worked collaboratively in a series of…
Principles of Tablet Computing for Educators
ERIC Educational Resources Information Center
Katzan, Harry, Jr.
2015-01-01
In the study of modern technology for the 21st century, one of the most popular subjects is tablet computing. Tablet computers are now used in business, government, education, and the personal lives of practically everyone--at least, it seems that way. As of October 2013, Apple has sold 170 million iPads. The success of tablets is enormous and has…
Cognitive Computational Neuroscience: A New Conference for an Emerging Discipline.
Naselaris, Thomas; Bassett, Danielle S; Fletcher, Alyson K; Kording, Konrad; Kriegeskorte, Nikolaus; Nienborg, Hendrikje; Poldrack, Russell A; Shohamy, Daphna; Kay, Kendrick
2018-05-01
Understanding the computational principles that underlie complex behavior is a central goal in cognitive science, artificial intelligence, and neuroscience. In an attempt to unify these disconnected communities, we created a new conference called Cognitive Computational Neuroscience (CCN). The inaugural meeting revealed considerable enthusiasm but significant obstacles remain. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Good, Jonathon; Keenan, Sarah; Mishra, Punya
2016-01-01
The popular press is rife with examples of how students in the United States and around the globe are learning to program, make, and tinker. The Hour of Code, maker-education, and similar efforts are advocating that more students be exposed to principles found within computer science. We propose an expansion beyond simply teaching computational…
NASA Astrophysics Data System (ADS)
Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.
2016-04-01
Kinetic models based on first principles are becoming common place in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.
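A minimal sketch of why correlations matter when propagating energy uncertainties: below, two activation energies are perturbed with and without correlation, and the spread of the resulting rate ratio (which determines which step looks rate-controlling) is compared. All numbers are invented for illustration and are not taken from the study.

```python
import numpy as np

kB_T = 8.617e-5 * 873              # eV, assumed temperature
Ea_mean = np.array([0.9, 1.1])     # assumed activation energies of two steps (eV)
sigma = 0.2                        # assumed DFT-level uncertainty (eV)
rho = 0.8                          # assumed correlation between the two errors

cov_corr = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
cov_uncorr = sigma**2 * np.eye(2)

rng = np.random.default_rng(0)

def sampled_rates(cov, n=10000):
    Ea = rng.multivariate_normal(Ea_mean, cov, size=n)
    return np.exp(-Ea / kB_T)                  # Arrhenius-like rate factors

for label, cov in [("correlated", cov_corr), ("uncorrelated", cov_uncorr)]:
    k = sampled_rates(cov)
    ratio = k[:, 0] / k[:, 1]                  # relative rate of the two steps
    print(label, "spread of log10(k1/k2):", round(np.std(np.log10(ratio)), 2))
```

Correlated errors partially cancel in the ratio, so the correlated case shows a much smaller spread, which is the qualitative point about misidentifying influential parameters.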
NASA Astrophysics Data System (ADS)
Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Chen, Zhuo; Wang, Jin-liang
2018-05-01
Based on the principle of multiphase equilibrium, a mathematical model of the copper flash converting process was established by the equilibrium constant method, and a computational system was developed with the use of the MetCal software platform. The mathematical model was validated by comparing simulated outputs, industrial data, and published data. To obtain high-quality blister copper, a low copper content in slag, and an increased impurity removal rate, the model was then applied to investigate the effects of the operational parameters [oxygen/feed ratio (R_OF), flux rate (R_F), and converting temperature (T)] on the product weights, compositions, and the distribution behaviors of impurity elements. The optimized results showed that R_OF, R_F, and T should be controlled at approximately 156 Nm3/t, within 3.0 pct, and at approximately 1523 K (1250 °C), respectively.
Thermal quantum time-correlation functions from classical-like dynamics
NASA Astrophysics Data System (ADS)
Hele, Timothy J. H.
2017-07-01
Thermal quantum time-correlation functions are of fundamental importance in quantum dynamics, allowing experimentally measurable properties such as reaction rates, diffusion constants and vibrational spectra to be computed from first principles. Since the exact quantum solution scales exponentially with system size, there has been considerable effort in formulating reliable linear-scaling methods involving exact quantum statistics and approximate quantum dynamics modelled with classical-like trajectories. Here, we review recent progress in the field with the development of methods including centroid molecular dynamics, ring polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). We show how these methods have recently been obtained from 'Matsubara dynamics', a form of semiclassical dynamics which conserves the quantum Boltzmann distribution. We also apply the Matsubara formalism to reaction rate theory, rederiving t → 0+ quantum transition-state theory (QTST) and showing that Matsubara-TST, like RPMD-TST, is equivalent to QTST. We end by surveying areas for future progress.
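For reference, the standard and Kubo-transformed thermal time-correlation functions that such methods approximate can be written as follows; these are the standard textbook definitions, reproduced here for orientation rather than taken from this particular review.

```latex
C_{AB}(t) \;=\; \frac{1}{Z}\,\mathrm{Tr}\!\left[ e^{-\beta \hat H}\, \hat A\,
e^{i\hat H t/\hbar}\, \hat B\, e^{-i\hat H t/\hbar} \right],
\qquad
\tilde C_{AB}(t) \;=\; \frac{1}{\beta Z}\int_{0}^{\beta}\! d\lambda\;
\mathrm{Tr}\!\left[ e^{-(\beta-\lambda)\hat H}\, \hat A\, e^{-\lambda \hat H}\,
e^{i\hat H t/\hbar}\, \hat B\, e^{-i\hat H t/\hbar} \right],
\qquad Z = \mathrm{Tr}\, e^{-\beta \hat H}.
```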
Field evaluation of boat-mounted acoustic Doppler instruments used to measure streamflow
Mueller, D.S.; ,
2003-01-01
The use of instruments based on the Doppler principle for measuring water velocity and computing discharge is common within the U.S. Geological Survey (USGS). The instruments and software have changed appreciably during the last 5 years; therefore, the USGS has begun field validation of the instruments used to make discharge measurements from a moving boat. Instruments manufactured by SonTek/YSI and RD Instruments, Inc. were used to collect discharge data at five different sites. One or more traditional discharge measurements were made using a Price AA current meter and standard USGS procedures concurrent with the acoustic instruments at each site. Discharges measured with the acoustic instruments were compared with discharges measured with Price AA current meters and the USGS stage-discharge rating for each site. The mean discharges measured by each acoustic instrument were within 5 percent of the Price AA-based measurement and (or) discharge from the stage-discharge rating.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, J. D.
The virtual-casing principle is used in plasma physics to convert a Biot–Savart integration over a current distribution into a surface integral over a surface that encloses the current. In many circumstances, use of virtual casing can significantly speed up the computation of magnetic fields. In this paper, a virtual-casing principle is derived for a general vector field with arbitrary divergence and curl. This form of the virtual-casing principle is thus applicable to both magnetostatic fields and electrostatic fields. The result is then related to Helmholtz's theorem.
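For orientation, the Biot–Savart volume integral that virtual casing replaces with a surface integral is the standard expression below; it is reproduced as background, while the paper's generalized form extends the idea to fields with arbitrary divergence and curl.

```latex
\mathbf{B}(\mathbf{r}) \;=\; \frac{\mu_0}{4\pi}\int
\frac{\mathbf{J}(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}
{\left|\mathbf{r}-\mathbf{r}'\right|^{3}}\; d^{3}r' .
```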
Computing in Hydraulic Engineering Education
NASA Astrophysics Data System (ADS)
Duan, J. G.
2011-12-01
Civil engineers, pioneers of our civilization, are rarely perceived as leaders and innovators in modern society because the profession has lagged in technological innovation. This has contributed to a decline in the prestige of the civil engineering profession, reduced federal funding for deteriorating infrastructure, and difficulty attracting the most talented high-school students. Infusing cutting-edge computer technology and stimulating creativity and innovation are therefore critical challenges for civil engineering education. To better prepare our graduates to innovate, this paper discusses the adoption of a problem-based collaborative learning technique and the integration of civil engineering computing into a traditional civil engineering curriculum. Three interconnected courses, Open Channel Flow, Computational Hydraulics, and Sedimentation Engineering, were developed with an emphasis on computational simulations. Open Channel Flow focuses on the principles of free-surface flow and the application of computational models, preparing students for the second course, Computational Hydraulics, which introduces the fundamental principles of computational hydraulics, including finite difference and finite element methods. This course complements Open Channel Flow to provide students with an in-depth understanding of computational methods. The third course, Sedimentation Engineering, covers the fundamentals of sediment transport and river engineering, so students can apply the knowledge and programming skills gained from the previous courses to develop computational models for simulating sediment transport. These courses effectively equipped students with important skills and knowledge for completing thesis and dissertation research.
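In the spirit of the Open Channel Flow course described above (but not taken from the paper), the following short example solves for the normal depth of a rectangular channel from Manning's equation by bisection; the channel width, roughness, and slope are assumed values.

```python
# Minimal open-channel example: normal depth of a rectangular channel from
# Manning's equation, Q = (1/n) A R^(2/3) sqrt(S), solved by bisection.
# SI units; parameter values are assumed for illustration.
def manning_q(y, b=5.0, n=0.013, s=0.001):
    area = b * y
    wetted_perimeter = b + 2.0 * y
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2.0 / 3.0) * s ** 0.5 / n

def normal_depth(q_target, lo=1e-6, hi=20.0, tol=1e-8):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if manning_q(mid) < q_target:
            lo = mid           # discharge increases monotonically with depth
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"normal depth for Q = 20 m^3/s: {normal_depth(20.0):.3f} m")
```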
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.
2017-02-01
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU only code.
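The block matrix inversion mentioned above relies on the standard Schur-complement identity; the NumPy sketch below only illustrates that linear-algebra identity (the LSMS implementation, which extracts diagonal blocks of a very large scattering matrix on GPUs, is of course far more specialized).

```python
# Sketch of the idea behind block matrix inversion: for M = [[A, B], [C, D]],
# the leading block of M^{-1} equals (A - B D^{-1} C)^{-1} (Schur complement).
# This small example only checks the identity; it is not the LSMS GPU kernel.
import numpy as np

rng = np.random.default_rng(1)
nA, nD = 4, 6
A = rng.normal(size=(nA, nA))
B = rng.normal(size=(nA, nD))
C = rng.normal(size=(nD, nA))
D = rng.normal(size=(nD, nD)) + nD * np.eye(nD)   # keep D well conditioned

top_left_via_schur = np.linalg.inv(A - B @ np.linalg.solve(D, C))

M = np.block([[A, B], [C, D]])
top_left_direct = np.linalg.inv(M)[:nA, :nA]

print(np.allclose(top_left_via_schur, top_left_direct))   # True
```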
Unconventional Hamilton-type variational principle in phase space and symplectic algorithm
NASA Astrophysics Data System (ADS)
Luo, En; Huang, Weijiang; Zhang, Hexin
2003-06-01
Using a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of a multi-degree-of-freedom system is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm, called the symplectic time-subdomain method, is proposed. A non-difference scheme is constructed by applying a Lagrange interpolation polynomial to the time subdomain. Furthermore, the presented symplectic algorithm is proved to be unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. The new algorithm is therefore a highly efficient one with better computational performance.
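As a generic illustration of why symplectic integrators matter (this is the textbook semi-implicit Euler scheme on a harmonic oscillator, not the paper's symplectic time-subdomain method), the sketch below compares long-time energy behavior of a symplectic and a non-symplectic scheme.

```python
# Energy drift comparison on a harmonic oscillator: explicit Euler (not
# symplectic) versus semi-implicit Euler (symplectic). Illustrative only.
omega, dt, nsteps = 1.0, 0.05, 5000

def energy(x, v):
    return 0.5 * v**2 + 0.5 * omega**2 * x**2

# explicit Euler
x, v = 1.0, 0.0
for _ in range(nsteps):
    x, v = x + dt * v, v - dt * omega**2 * x
e_explicit = energy(x, v)

# symplectic (semi-implicit) Euler: update v first, then x with the new v
x, v = 1.0, 0.0
for _ in range(nsteps):
    v = v - dt * omega**2 * x
    x = x + dt * v
e_symplectic = energy(x, v)

print(f"energy after {nsteps} steps: explicit Euler {e_explicit:.3g}, "
      f"symplectic Euler {e_symplectic:.3g} (exact value 0.5)")
```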
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...
2016-07-12
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn–Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU only code.
Principles of Biomimetic Vascular Network Design Applied to a Tissue-Engineered Liver Scaffold
Hoganson, David M.; Pryor, Howard I.; Spool, Ira D.; Burns, Owen H.; Gilmore, J. Randall
2010-01-01
Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow. PMID:20001254
Principles of biomimetic vascular network design applied to a tissue-engineered liver scaffold.
Hoganson, David M; Pryor, Howard I; Spool, Ira D; Burns, Owen H; Gilmore, J Randall; Vacanti, Joseph P
2010-05-01
Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow.
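As a rough back-of-envelope counterpart to the computational fluid dynamic analysis described above (not the authors' model), one can estimate the pressure drop across a symmetric branched network from the Hagen–Poiseuille resistance of each generation. The channel dimensions, branch counts, flow rate, and Newtonian blood viscosity below are all assumptions, and the channels are idealized as circular.

```python
# Illustrative pressure-drop estimate for a symmetric branched network using
# Hagen-Poiseuille resistance per generation, R = 128 mu L / (pi d^4), with
# parallel branches dividing the resistance. Channels are treated as circular
# and blood as Newtonian; all values are assumed, not the paper's design.
import math

mu = 3.5e-3          # blood viscosity, Pa*s (assumed)
q_total = 1.0e-6     # total inlet flow, m^3/s (assumed, ~60 mL/min)

# (diameter m, length m, number of parallel channels) per generation -- made up
generations = [
    (2.0e-3, 10e-3, 1),
    (1.4e-3, 8e-3, 2),
    (1.0e-3, 6e-3, 4),
    (0.7e-3, 4e-3, 8),
]

dp_total = 0.0
for d, length, n_parallel in generations:
    r_single = 128.0 * mu * length / (math.pi * d**4)
    dp_total += (r_single / n_parallel) * q_total   # full flow splits over branches
print(f"estimated pressure drop: {dp_total:.0f} Pa ({dp_total/133.3:.1f} mmHg)")
```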
Impact of a process improvement program in a production software environment: Are we any better?
NASA Technical Reports Server (NTRS)
Heller, Gerard H.; Page, Gerald T.
1990-01-01
For the past 15 years, Computer Sciences Corporation (CSC) has participated in a process improvement program as a member of the Software Engineering Laboratory (SEL), which is sponsored by GSFC. The benefits CSC has derived from involvement in this program are analyzed. In the environment studied, the analysis shows that improvements were indeed achieved, as evidenced by a decrease in error rates and costs over a period in which both the size and the complexity of the developed systems increased substantially. The principles and mechanics of the process improvement program, the lessons CSC has learned, and how CSC has capitalized on these lessons are also discussed.
A Computer Analysis Study of the Word Style in Love-songs of Tshang yang Gya tsho
NASA Astrophysics Data System (ADS)
Yonghong, Li; SunTing; Lei, Guo; Hongzhi, Yu
Using corpus-based statistical methods and the 124 love-songs of Tshang yang Gya tsho as the object of study, this paper sets up principles of vocabulary segmentation and builds a love-songs corpus of Tibetan together with a Tibetan-Chinese grammar separation lexicon corpus. It then quantitatively examines the linguistic artistry of the love-songs from three aspects: the length of the vocabulary items, their frequency of use, and the distribution of term counts across the verses and the songs. The study also introduces a new research approach and method for the study of Tibetan literature.
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
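To illustrate the transform-coding idea in the same spirit (but not reproducing the paper's coder or quantizer), the sketch below applies a blockwise 2-D cosine transform to a synthetic chrominance plane, keeps only the largest coefficients per block, and reconstructs; the retained-coefficient fraction is a crude proxy for bits per element.

```python
# Blockwise DCT transform coding of one synthetic chrominance plane:
# transform, keep a few coefficients per 8x8 block, invert, measure error.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# smooth synthetic chrominance plane, 128x128 (stand-in for real image data)
x = np.cumsum(np.cumsum(rng.normal(size=(128, 128)), axis=0), axis=1)
x /= np.abs(x).max()

block, keep = 8, 6              # keep 6 of 64 coefficients per block (assumed)
recon = np.zeros_like(x)
for i in range(0, x.shape[0], block):
    for j in range(0, x.shape[1], block):
        c = dctn(x[i:i+block, j:j+block], norm="ortho")
        thresh = np.sort(np.abs(c).ravel())[-keep]
        c[np.abs(c) < thresh] = 0.0                 # zero the small coefficients
        recon[i:i+block, j:j+block] = idctn(c, norm="ortho")

mse = np.mean((x - recon) ** 2)
print(f"kept {keep}/64 coefficients per block, reconstruction MSE = {mse:.2e}")
```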
Kiluk, Brian D.; Sugarman, Dawn E.; Nich, Charla; Gibbons, Carly J.; Martino, Steve; Rounsaville, Bruce J.; Carroll, Kathleen M.
2013-01-01
Objective: Computer-assisted therapies offer a novel, cost-effective strategy for providing evidence-based therapies to a broad range of individuals with psychiatric disorders. However, the extent to which the growing body of randomized trials evaluating computer-assisted therapies meets current standards of methodological rigor for evidence-based interventions is not clear. Method: A methodological analysis of randomized clinical trials of computer-assisted therapies for adult psychiatric disorders, published between January 1990 and January 2010, was conducted. Seventy-five studies that examined computer-assisted therapies for a range of axis I disorders were evaluated using a 14-item methodological quality index. Results: Results indicated marked heterogeneity in study quality. No study met all 14 basic quality standards, and three met 13 criteria. Consistent weaknesses were noted in evaluation of treatment exposure and adherence, rates of follow-up assessment, and conformity to intention-to-treat principles. Studies utilizing weaker comparison conditions (e.g., wait-list controls) had poorer methodological quality scores and were more likely to report effects favoring the computer-assisted condition. Conclusions: While several well-conducted studies have indicated promising results for computer-assisted therapies, this emerging field has not yet achieved a level of methodological quality equivalent to those required for other evidence-based behavioral therapies or pharmacotherapies. Adoption of more consistent standards for methodological quality in this field, with greater attention to potential adverse events, is needed before computer-assisted therapies are widely disseminated or marketed as evidence based. PMID:21536689
A study of usability principles and interface design for mobile e-books.
Wang, Chao-Ming; Huang, Ching-Hua
2015-01-01
This study examined usability principles and interface designs in order to understand the relationship between the intentions of mobile e-book interface designs and users' perceptions. First, this study summarised 4 usability principles and 16 interface attributes, in order to conduct usability testing and questionnaire survey by referring to Nielsen (1993), Norman (2002), and Yeh (2010), who proposed the usability principles. Second, this study used the interviews to explore the perceptions and behaviours of user operations through senior users of multi-touch prototype devices. The results of this study are as follows: (1) users' behaviour of operating an interactive interface is related to user prior experience; (2) users' rating of the visibility principle is related to users' subjective perception but not related to user prior experience; however, users' ratings of the ease, efficiency, and enjoyment principles are related to user prior experience; (3) the interview survey reveals that the key attributes affecting users' behaviour of operating an interface include aesthetics, achievement, and friendliness. This study conducts experiments to explore the effects of users’ prior multi-touch experience on users’ behaviour of operating a mobile e-book interface and users’ rating of usability principles. Both qualitative and quantitative data analyses were performed. By applying protocol analysis, key attributes affecting users’ behaviour of operation were determined.
21st International Conference on DNA Computing and Molecular Programming: 8.1 Biochemistry
include information storage and biological applications of DNA systems, biomolecular chemical reaction networks, applications of self-assembled DNA nanostructures, tile self-assembly and computation, principles and models of self-assembly, and strand displacement and biomolecular circuits. The fund
Goal-Oriented Intelligence in Optimization of Distributed Parameter Systems
2004-08-01
Yarus and R.L. Chambers, editors, Stochastic Modeling and Geostatistics – Principles, Methods, and Case Studies, AAPG Computer Applications in Geology, No. 3, The American Association of Petroleum Geologists, Tulsa, OK, USA.
DEVELOPMENT OF COMPUTATIONAL TOOLS FOR OPTIMAL IDENTIFICATION OF BIOLOGICAL NETWORKS
Following the theoretical analysis and computer simulations, the next step for the development of SNIP will be a proof-of-principle laboratory application. Specifically, we have obtained a synthetic transcriptional cascade (harbored in Escherichia coli...
Computing Your Way through Science.
ERIC Educational Resources Information Center
Allen, Denise
1994-01-01
Reviews three computer software programs focusing on teaching science to middle school students: (1) Encarta, a multimedia encyclopedia; (2) Gizmos and Gadgets, which allows students to explore physical science principles; and (3) BodyScope, which allows students to examine the systems of the human body. (BB)
PCs: Key to the Future. Business Center Provides Sound Skills and Good Attitudes.
ERIC Educational Resources Information Center
Pay, Renee W.
1991-01-01
The Advanced Computing/Management Training Program at Jordan Technical Center (Sandy, Utah) simulates an automated office to teach five sets of skills: computer architecture and operating systems, word processing, data processing, communications skills, and management principles. (SK)
ERIC Educational Resources Information Center
Casey, Joe
This document contains five units for a course in computer numerical control (CNC) for computer-aided manufacturing. It is intended to familiarize students with the principles and techniques necessary to create proper CNC programs manually. Each unit consists of an introduction, instructional objectives, learning materials, learning activities,…
Water System Adaptation To Hydrological Changes: Module 11, Methods and Tools: Computational Models
This course will introduce students to the fundamental principles of water system adaptation to hydrological changes, with emphasis on data analysis and interpretation, technical planning, and computational modeling. Starting with real-world scenarios and adaptation needs, the co...
A System for Generating Instructional Computer Graphics.
ERIC Educational Resources Information Center
Nygard, Kendall E.; Ranganathan, Babusankar
1983-01-01
Description of the Tektronix-Based Interactive Graphics System for Instruction (TIGSI), which was developed for generating graphics displays in computer-assisted instruction materials, discusses several applications (e.g., reinforcing learning of concepts, principles, rules, and problem-solving techniques) and presents advantages of the TIGSI…
NASA Astrophysics Data System (ADS)
Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao
2006-11-01
Enhancing grower income and saving energy are the essential goals of optimal control of the intelligent greenhouse environment. The greenhouse environment control system exhibits uncertainty, imprecision, nonlinearity, strong coupling, large inertia, and multiple time scales, so optimal control is not easy, and model-based optimal control methods are especially difficult to apply. The optimal control problem of the plant environment in an intelligent greenhouse was therefore investigated. A hierarchical greenhouse environment control system was constructed: at the first level, data are measured and actuators are controlled; at the second level, optimal setpoints of the controlled climate variables in the greenhouse are calculated and chosen; at the third level, market analysis and planning are carried out. This paper addresses the problem of choosing the optimal setpoints. First, a model of the plant canopy photosynthesis response and a greenhouse climate model were constructed. Then, drawing on the experience of planting experts, the daytime objectives were set according to the maximal photosynthesis rate principle, while the nighttime objectives were set according to the energy-saving principle subject to conditions for good plant growth. The optimal environmental control setpoints were then computed by a genetic algorithm (GA). Comparison of the optimized results with data recorded from a real system shows that the method is reasonable and can achieve energy saving and the maximal photosynthesis rate in an intelligent greenhouse.
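As a toy illustration of the second-level optimization step (not the authors' canopy or climate models), the sketch below uses a minimal genetic algorithm to pick a daytime temperature setpoint that maximizes a made-up, bell-shaped photosynthesis response; the response function and GA parameters are assumptions.

```python
# Toy GA for a greenhouse setpoint: maximize a hypothetical canopy
# photosynthesis response over daytime temperature. Illustrative only.
import numpy as np

def photosynthesis_rate(temp_c):
    # hypothetical response peaking near 26 C; stands in for the canopy model
    return np.exp(-((temp_c - 26.0) / 6.0) ** 2)

rng = np.random.default_rng(0)
pop = rng.uniform(10.0, 40.0, size=30)          # candidate setpoints, deg C

for _ in range(50):
    fitness = photosynthesis_rate(pop)
    # tournament selection between random pairs
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                       pop[idx[:, 0]], pop[idx[:, 1]])
    # blend crossover with a shuffled copy, then Gaussian mutation
    partners = rng.permutation(parents)
    alpha = rng.uniform(size=len(pop))
    pop = alpha * parents + (1.0 - alpha) * partners
    pop += rng.normal(0.0, 0.5, size=len(pop))
    pop = np.clip(pop, 10.0, 40.0)

best = pop[np.argmax(photosynthesis_rate(pop))]
print(f"GA setpoint ~ {best:.1f} C (optimum of the toy response is 26 C)")
```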
Lewin, Linda Orkin; Singh, Mamta; Bateman, Betzi L; Glover, Pamela Bligh
2009-06-10
Standardizing the experiences of medical students in a community preceptorship where clinical sites vary by geography and discipline can be challenging. Computer-assisted learning is prevalent in medical education and can help standardize experiences, but often is not used to its fullest advantage. A blended learning curriculum combining web-based modules with face-to-face learning can ensure students obtain core curricular principles. This course was developed and used at The Case Western Reserve University School of Medicine and its associated preceptorship sites in the greater Cleveland area. Leaders of a two-year elective continuity experience at the Case Western Reserve School of Medicine used adult learning principles to develop four interactive online modules presenting basics of office practice, difficult patient interviews, common primary care diagnoses, and disease prevention. They can be viewed at (http://casemed.case.edu/cpcp/curriculum). Students completed surveys rating the content and technical performance of each module and completed a Generalist OSCE exam at the end of the course. Participating students rated all aspects of the course highly; particularly those related to charting and direct patient care. Additionally, they scored very well on the Generalist OSCE exam. Students found the web-based modules to be valuable and to enhance their clinical learning. The blended learning model is a useful tool in designing web-based curriculum for enhancing the clinical curriculum of medical students.
Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P.; Gerstein, Mark
2010-01-01
The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers’ continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems. PMID:20439753
Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P; Gerstein, Mark
2010-05-18
The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers' continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.
Bayesian models: A statistical primer for ecologists
Hobbs, N. Thompson; Hooten, Mevin B.
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. It presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.
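As a small illustration of the Markov chain Monte Carlo principle the book emphasizes (a standard textbook example, not taken from the book), the sketch below runs a Metropolis sampler for the posterior of a normal mean with known variance and a normal prior.

```python
# Minimal Metropolis sampler: posterior of a normal mean mu with known sigma
# and a Normal(0, 10^2) prior. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(3.0, 2.0, size=25)        # synthetic data, true mean 3.0
sigma, prior_sd = 2.0, 10.0

def log_post(mu):
    log_lik = -0.5 * np.sum((y - mu) ** 2) / sigma**2
    log_prior = -0.5 * mu**2 / prior_sd**2
    return log_lik + log_prior

samples, mu = [], 0.0
for _ in range(20000):
    proposal = mu + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(mu):
        mu = proposal                    # accept with Metropolis probability
    samples.append(mu)

posterior = np.array(samples[2000:])     # drop burn-in
print(f"posterior mean {posterior.mean():.2f}, 95% interval "
      f"({np.percentile(posterior, 2.5):.2f}, {np.percentile(posterior, 97.5):.2f})")
```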
Wissler, Eugene H; Havenith, George
2009-03-01
Overall resistances for heat and vapor transport in a multilayer garment depend on the properties of individual layers and the thickness of any air space between layers. Under uncomplicated, steady-state conditions, thermal and mass fluxes are uniform within the garment, and the rate of transport is simply computed as the overall temperature or water concentration difference divided by the appropriate resistance. However, that simple computation is not valid under cool ambient conditions when the vapor permeability of the garment is low, and condensation occurs within the garment. Several recent studies have measured heat and vapor transport when condensation occurs within the garment (Richards et al. in Report on Project ThermProject, Contract No. G6RD-CT-2002-00846, 2002; Havenith et al. in J Appl Physiol 104:142-149, 2008). In addition to measuring cooling rates for ensembles when the skin was either wet or dry, both studies employed a flat-plate apparatus to measure resistances of individual layers. Those data provide information required to define the properties of an ensemble in terms of its individual layers. We have extended the work of previous investigators by developing a rather simple technique for analyzing heat and water vapor transport when condensation occurs within a garment. Computed results agree well with experimental results reported by Richards et al. (Report on Project ThermProject, Contract No. G6RD-CT-2002-00846, 2002) and Havenith et al. (J Appl Physiol 104:142-149, 2008). We discuss application of the method to human subjects for whom the rate of sweat secretion, instead of the partial pressure of water on the skin, is specified. Analysis of a more complicated five-layer system studied by Yoo and Kim (Text Res J 78:189-197, 2008) required an iterative computation based on principles defined in this paper.
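To make the "simple computation" referred to above concrete, the sketch below evaluates uniform dry and latent fluxes through two clothing layers from serial resistances and then checks whether the interface vapor pressure would exceed saturation, the condition under which condensation occurs and the simple model breaks down. The resistances, boundary conditions, and Magnus-type saturation formula are assumptions, not the paper's values.

```python
# Serial-resistance sketch of dry heat and vapour transport through two
# clothing layers, with a condensation check at the layer interface.
import math

def p_sat(t_c):
    """Saturation vapour pressure over water (Pa), Magnus-type approximation."""
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

t_skin, t_amb = 34.0, 5.0              # deg C
p_skin, p_amb = p_sat(t_skin), 400.0   # wet skin; cool, fairly dry ambient (Pa)

r_heat = [0.05, 0.12]   # thermal resistances of inner/outer layer, m^2 K/W (assumed)
r_evap = [15.0, 60.0]   # evaporative resistances, m^2 Pa/W (assumed; outer layer tight)

q_dry = (t_skin - t_amb) / sum(r_heat)          # uniform dry heat flux, W/m^2
q_evap = (p_skin - p_amb) / sum(r_evap)         # uniform latent flux, W/m^2

t_interface = t_skin - q_dry * r_heat[0]
p_interface = p_skin - q_evap * r_evap[0]
print(f"dry flux {q_dry:.0f} W/m^2, latent flux {q_evap:.0f} W/m^2")
if p_interface > p_sat(t_interface):
    print("interface vapour pressure exceeds saturation -> condensation; "
          "the simple uniform-flux computation is no longer valid")
```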
Micro-feeding and dosing of powders via a small-scale powder pump.
Besenhard, M O; Fathollahi, S; Siegmann, E; Slama, E; Faulhammer, E; Khinast, J G
2017-03-15
Robust and accurate powder micro-feeding (<100 mg/s) and micro-dosing (<5 mg) are major challenges, especially with regard to regulatory limitations applicable to pharmaceutical development and production. Since known micro-feeders that yield feed rates below 5 mg/s use gravimetric feeding principles, feed rates depend primarily on powder properties. In contrast, volumetric powder feeders do not require regular calibration because their feed rates are primarily determined by the feeder's characteristic volume replacement. In this paper, we present a volumetric micro-feeder based on a cylinder-piston system (i.e., a powder pump), which allows accurate micro-feeding and feed rates of a few grams per hour even for very fine powders. Our experimental studies addressed the influence of cylinder geometries, the initial conditions of the bulk powder, and the piston speeds. Additional computational studies via Discrete Element Method simulations offered a better understanding of the feeding process, its possible limitations, and ways to overcome them. The powder pump is a simple yet valuable tool for accurate powder feeding across several orders of magnitude of feed rate. Copyright © 2016 Elsevier B.V. All rights reserved.
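The volumetric principle behind the device reduces, to first order, to feed rate equals cylinder cross-section times piston speed times bulk density; the sketch below uses assumed values and ignores powder densification in the cylinder, which the paper shows can matter.

```python
# Back-of-envelope volumetric feed rate for a cylinder-piston powder pump.
# Values are illustrative assumptions, not the paper's measurements.
import math

d_cyl = 4.0e-3          # cylinder diameter, m (assumed)
v_piston = 0.05e-3      # piston speed, m/s (assumed)
rho_bulk = 450.0        # powder bulk density, kg/m^3 (assumed)

area = math.pi * d_cyl**2 / 4.0
feed_rate_mg_s = area * v_piston * rho_bulk * 1.0e6   # kg/s -> mg/s
print(f"estimated feed rate: {feed_rate_mg_s:.2f} mg/s "
      f"({feed_rate_mg_s * 3.6:.1f} g/h)")
```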
NASA Astrophysics Data System (ADS)
Feng, Tianli; Ruan, Xiulin
2016-01-01
Recently, first-principles-based predictions of lattice thermal conductivity κ from perturbation theory have achieved significant success. However, such predictions include only three-phonon scattering, on the assumption that four-phonon and higher-order processes are generally unimportant. Directly evaluating the scattering rates of four-phonon and higher-order processes has also been a long-standing challenge. In this work, however, we have developed a formalism to explicitly determine quantum mechanical scattering probability matrices for four-phonon scattering in the full Brillouin zone, and by mitigating the computational challenge we have directly calculated four-phonon scattering rates. We find that four-phonon scattering rates are comparable to three-phonon scattering rates at medium and high temperatures, and they increase quadratically with temperature. As a consequence, κ of Lennard-Jones argon is reduced by more than 60% at 80 K when four-phonon scattering is included. Also, in less anharmonic materials—diamond, silicon, and germanium—κ is still reduced considerably at high temperature by four-phonon scattering using the classical Tersoff potentials. Furthermore, the thermal conductivity of optical phonons is dominated by fourth- and higher-order phonon scattering even at low temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumilov, V. N., E-mail: vnshumilov@rambler.ru; Syryamkin, V. I., E-mail: maximus70sir@gmail.com; Syryamkin, M. V., E-mail: maximus70sir@gmail.com
The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of formation of connections between neurons in simplest biological objects. Based on the correspondence of function of the created models to function of biological nervous systems we suggest the use of computational and electronic models of the brain for the study of its function under normal and pathological conditions, because operating principles of the models are built on principles imitating the function of biological nervous systems and the brain.
NASA Astrophysics Data System (ADS)
Shumilov, V. N.; Syryamkin, V. I.; Syryamkin, M. V.
2015-11-01
The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of formation of connections between neurons in simplest biological objects. Based on the correspondence of function of the created models to function of biological nervous systems we suggest the use of computational and electronic models of the brain for the study of its function under normal and pathological conditions, because operating principles of the models are built on principles imitating the function of biological nervous systems and the brain.
ERIC Educational Resources Information Center
Çakýroðlu, Ünal
2014-01-01
This study assesses the quality of distance learning (DL) in higher education assessed by considering the Seven Principles of Good Practice (SPGP). The participants were 77 second-year students from the Computer and Instructional Technologies Program (CEIT) of a Faculty of Education in Turkey. A questionnaire was developed in line with the SPGP…
Pilot Study: Impact of Computer Simulation on Students' Economic Policy Performance. Pilot Study.
ERIC Educational Resources Information Center
Domazlicky, Bruce; France, Judith
Fiscal and monetary policies taught in macroeconomic principles courses are concepts that might require both lecture and simulation methods. The simulation models, which apply the principles gleaned from comparative statics to a dynamic world, may give students an appreciation for the problems facing policy makers. This paper is a report of a…
Screen Design Principles of Computer-Aided Instructional Software for Elementary School Students
ERIC Educational Resources Information Center
Berrin, Atiker; Turan, Bülent Onur
2017-01-01
This study aims to present primary school students' views about current educational software interfaces, and to propose principles for educational software screens. The study was carried out with a general screening model. Sample group of the study consisted of sixth grade students in Sehit Ögretmen Hasan Akan Elementary School. In this context,…
ERIC Educational Resources Information Center
Greer, Diana L.; Crutchfield, Stephen A.; Woods, Kari L.
2013-01-01
Struggling learners and students with Learning Disabilities often exhibit unique cognitive processing and working memory characteristics that may not align with instructional design principles developed with typically developing learners. This paper explains the Cognitive Theory of Multimedia Learning and underlying Cognitive Load Theory, and…
Nonequilibrium BN-ZnO: Optical properties and excitonic effects from first principles
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Schleife, André
2018-03-01
The nonequilibrium boron nitride (BN) phase of zinc oxide (ZnO) has been reported for thin films and nanostructures; however, its properties are not well understood due to a persistent controversy that prevents reconciling experimental and first-principles results for its atomic coordinates. We use first-principles theoretical spectroscopy to accurately compute electronic and optical properties, including single-quasiparticle and excitonic effects: Band structures and densities of states are computed using density functional theory, hybrid functionals, and the GW approximation. Accurate optical absorption spectra and exciton binding energies are computed by solving the Bethe-Salpeter equation for the optical polarization function. Using this data we show that the band-gap difference between BN-ZnO and wurtzite (WZ) ZnO agrees very well with experiment when the theoretical lattice geometry is used, but significantly disagrees for the experimental atomic coordinates. We also show that the optical anisotropy of BN-ZnO differs significantly from that of WZ-ZnO, allowing us to optically distinguish both polymorphs. By using the transfer-matrix method to solve Maxwell's equations for thin films composed of both polymorphs, we illustrate that this opens up a promising route for tuning optical properties.
Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice.
Balasubramani, Pragathi P; Moreno-Bote, Rubén; Hayden, Benjamin Y
2018-01-01
The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.
Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice
Balasubramani, Pragathi P.; Moreno-Bote, Rubén; Hayden, Benjamin Y.
2018-01-01
The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies. PMID:29643773
Reversibility and stability of information processing systems
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1984-01-01
Classical and quantum models of dynamically reversible computers are considered. Instabilities in the evolution of the classical 'billiard ball computer' are analyzed and shown to result in a one-bit increase of entropy per step of computation. 'Quantum spin computers', on the other hand, are not only microscopically, but also operationally reversible. Readout of the output of quantum computation is shown not to interfere with this reversibility. Dissipation, while avoidable in principle, can be used in practice along with redundancy to prevent errors.
Technology in Note Taking and Assessment: The Effects of Congruence on Student Performance
ERIC Educational Resources Information Center
Barrett, Matthew E.; Swan, Alexander B.; Mamikonian, Ani; Ghajoyan, Inna; Kramarova, Olga; Youmans, Robert J.
2014-01-01
This study examined the encoding specificity principle in relation to traditional and computer-based note taking and assessment formats in higher education. Students (N = 79) took lecture notes either by hand (n = 40) or by computer (n = 39) and then completed either a computer or a paper-based assessment. When note taking and assessment formats…
Langenbucher, Frieder
2002-01-01
Most computations in the field of in vitro/in vivo correlations can be handled directly by Excel worksheets, without the need for specialized software. Following a summary of Excel features, applications are illustrated for numerical computation of AUC and Mean, Wagner-Nelson and Loo-Riegelman absorption plots, and polyexponential curve fitting.
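The article performs these calculations in Excel worksheets; the same kind of numerical AUC and mean residence time computation by the linear trapezoidal rule is shown below as a short script, with an illustrative (made-up) concentration-time profile.

```python
# Trapezoidal-rule AUC and mean residence time for a concentration-time
# profile, as one would compute in an Excel worksheet. Data are illustrative.
t = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]          # time, h
c = [0.0, 1.8, 2.9, 2.5, 1.6, 0.6, 0.2]           # concentration, mg/L

auc = sum(0.5 * (c[i] + c[i + 1]) * (t[i + 1] - t[i]) for i in range(len(t) - 1))
aumc = sum(0.5 * (t[i] * c[i] + t[i + 1] * c[i + 1]) * (t[i + 1] - t[i])
           for i in range(len(t) - 1))
print(f"AUC(0-12h) = {auc:.2f} mg*h/L, mean residence time = {aumc/auc:.2f} h")
```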
ERIC Educational Resources Information Center
Palme, Jacob
The four papers contained in this document provide: (1) a survey of computer based mail and conference systems; (2) an evaluation of systems for both individually addressed mail and group addressing through conferences and distribution lists; (3) a discussion of various methods of structuring the text data in existing systems; and (4) a…
Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...
2016-11-01
A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: Scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.
High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.
Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue
2010-11-13
Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when those drugs were approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce, was introduced by Google to support large-scale data intensive applications. In this study, we proposed a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and tested it in mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data mining tasks, using this pharmacovigilance case as one specific example. The results demonstrated that the MapReduce programming model could improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
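The per-(drug, event) statistic that such a MapReduce job distributes is itself a simple 2x2 contingency computation; the sketch below shows the PRR formula with illustrative counts (the MapReduce plumbing, mappers keyed by drug-event pair, is omitted).

```python
# Proportional Reporting Ratio, PRR = [a/(a+b)] / [c/(c+d)], from a 2x2 table
# of spontaneous reports. Counts below are illustrative only.
def prr(a, b, c, d):
    """a: drug & event, b: drug & no event, c: other drugs & event, d: rest."""
    return (a / (a + b)) / (c / (c + d))

a, b, c, d = 40, 960, 200, 48800
print(f"PRR = {prr(a, b, c, d):.2f}")   # PRR > 2 is a common screening threshold
```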
The Computer Simulation of Liquids by Molecular Dynamics.
ERIC Educational Resources Information Center
Smith, W.
1987-01-01
Proposes a mathematical computer model for the behavior of liquids using the classical dynamic principles of Sir Isaac Newton and the molecular dynamics method invented by other scientists. Concludes that other applications will be successful using supercomputers to go beyond simple Newtonian physics. (CW)
Design Principles and Guidelines for Security
2007-11-21
Padula, Secure Computer Systems: Unified Exposition and Multics Interpretation. Electronic Systems Division, USAF, ESD-TR-75-306, MTR-2997 Rev. 1, Hanscom AFB, MA, March 1976. [7] David Elliott Bell, "Looking Back at the Bell-La Padula Model," Proc. Annual Computer Security Applications Conference.
Marginal Bidding: An Application of the Equimarginal Principle to Bidding in TAC SCM
NASA Astrophysics Data System (ADS)
Greenwald, Amy; Naroditskiy, Victor; Odean, Tyler; Ramirez, Mauricio; Sodomka, Eric; Zimmerman, Joe; Cutler, Clark
We present a fast and effective bidding strategy for the Trading Agent Competition in Supply Chain Management (TAC SCM). In TAC SCM, manufacturers compete to procure computer parts from suppliers (the procurement problem), and then sell assembled computers to customers in reverse auctions (the bidding problem). This paper is concerned only with bidding, in which an agent must decide how many computers to sell and at what prices to sell them. We propose a greedy solution, Marginal Bidding, inspired by the Equimarginal Principle, which states that revenue is maximized among possible uses of a resource when the return on the last unit of the resource is the same across all areas of use. We show experimentally that certain variations of Marginal Bidding can compute bids faster than our ILP solution, which enables Marginal Bidders to consider future demand as well as current demand, and hence achieve greater revenues when knowledge of the future is valuable.
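As a generic illustration of the Equimarginal Principle in greedy form (not the TAC SCM agent itself), the sketch below allocates scarce units one at a time to whichever use currently offers the highest marginal return, so that marginal returns end up roughly equalized; the return functions and customer names are hypothetical.

```python
# Greedy "marginal" allocation: give each unit of a scarce resource to the use
# with the highest marginal return, until supply runs out. Illustrative only.
import heapq

def marginal_return(use, k):
    """Diminishing return of the (k+1)-th unit given to `use` (hypothetical)."""
    base = {"customer_A": 10.0, "customer_B": 8.0, "customer_C": 12.0}
    return base[use] * 0.9 ** k

supply = 20
alloc = {u: 0 for u in ("customer_A", "customer_B", "customer_C")}
heap = [(-marginal_return(u, 0), u) for u in alloc]
heapq.heapify(heap)

for _ in range(supply):
    _, use = heapq.heappop(heap)
    alloc[use] += 1
    heapq.heappush(heap, (-marginal_return(use, alloc[use]), use))

print(alloc)
# next-unit returns are now roughly equal across uses, as the principle states
print({u: round(marginal_return(u, alloc[u]), 2) for u in alloc})
```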
Spirov, Alexander; Holloway, David
2013-07-15
This paper surveys modeling approaches for studying the evolution of gene regulatory networks (GRNs). Modeling of the design or 'wiring' of GRNs has become increasingly common in developmental and medical biology, as a means of quantifying gene-gene interactions, the response to perturbations, and the overall dynamic motifs of networks. Drawing from developments in GRN 'design' modeling, a number of groups are now using simulations to study how GRNs evolve, both for comparative genomics and to uncover general principles of evolutionary processes. Such work can generally be termed evolution in silico. Complementary to these biologically-focused approaches, a now well-established field of computer science is Evolutionary Computations (ECs), in which highly efficient optimization techniques are inspired from evolutionary principles. In surveying biological simulation approaches, we discuss the considerations that must be taken with respect to: (a) the precision and completeness of the data (e.g. are the simulations for very close matches to anatomical data, or are they for more general exploration of evolutionary principles); (b) the level of detail to model (we proceed from 'coarse-grained' evolution of simple gene-gene interactions to 'fine-grained' evolution at the DNA sequence level); (c) to what degree is it important to include the genome's cellular context; and (d) the efficiency of computation. With respect to the latter, we argue that developments in computer science EC offer the means to perform more complete simulation searches, and will lead to more comprehensive biological predictions. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-01
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-14
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
Quantitative phase and amplitude imaging using Differential-Interference Contrast (DIC) microscopy
NASA Astrophysics Data System (ADS)
Preza, Chrysanthe; O'Sullivan, Joseph A.
2009-02-01
We present an extension of the development of an alternating minimization (AM) method for the computation of a specimen's complex transmittance function (magnitude and phase) from DIC images. The ability to extract both quantitative phase and amplitude information from two rotationally-diverse DIC images (i.e., acquired by rotating the sample) extends previous efforts in computational DIC microscopy that have focused on quantitative phase imaging only. Simulation results show that the inverse problem at hand is sensitive to noise as well as to the choice of the AM algorithm parameters. The AM framework allows constraints and penalties on the magnitude and phase estimates to be incorporated in a principled manner. Towards this end, Green and De Pierro's "log-cosh" regularization penalty is applied to the magnitude of differences of neighboring values of the complex-valued function of the specimen during the AM iterations. The penalty is shown to be convex in the complex space. A procedure to approximate the penalty within the iterations is presented. In addition, a methodology to pre-compute AM parameters that are optimal with respect to the convergence rate of the AM algorithm is also presented. Both extensions of the AM method are investigated with simulations.
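As a small, hedged illustration of the regularization term described above (not the full alternating minimization iterations), the sketch below evaluates a log-cosh penalty on the magnitudes of neighboring-value differences of a complex-valued specimen estimate; the scale parameter and overall weighting are assumptions.

```python
# "Log-cosh" roughness penalty applied to magnitudes of differences between
# horizontally and vertically neighbouring values of a complex 2-D array.
import numpy as np

def log_cosh_penalty(f, delta=0.1):
    """f: 2-D complex array (specimen transmittance estimate); delta assumed."""
    dx = np.abs(f[:, 1:] - f[:, :-1])      # horizontal neighbour differences
    dy = np.abs(f[1:, :] - f[:-1, :])      # vertical neighbour differences
    def log_cosh(t):
        # numerically stable log cosh: log(e^t + e^-t) - log 2
        return np.logaddexp(t, -t) - np.log(2.0)
    return delta**2 * (log_cosh(dx / delta).sum() + log_cosh(dy / delta).sum())

rng = np.random.default_rng(0)
f = np.exp(1j * rng.normal(0.0, 0.2, size=(64, 64)))   # unit-magnitude phase object
print(f"log-cosh penalty value: {log_cosh_penalty(f):.3f}")
```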
NASA Astrophysics Data System (ADS)
Vintila, Iuliana; Gavrus, Adinel
2017-10-01
The present research paper proposes the validation of a rigorous computation model used as a numerical tool to identify the rheological behavior of complex W/O emulsions. Starting from a three-dimensional description of a general viscoplastic flow, the thermo-mechanical equations used to identify the rheological laws of fluids or soft materials from global experimental measurements are detailed. Analyses are conducted for complex W/O emulsions, which generally exhibit Bingham behavior, using a power-law shear stress-strain rate dependency and an improved analytical model. Experimental results are investigated for the rheological behavior of crude and refined rapeseed/soybean oils and four corresponding W/O emulsions of different physico-chemical composition. The rheological behavior model was correlated with the thermo-mechanical analysis of a plane-plane rheometer, oil content, chemical composition, particle size and emulsifier's concentration. The parameters of the rheological laws describing the industrial oils and the concentrated W/O emulsions were computed from estimated shear stresses using a non-linear regression technique and from experimental torques using the inverse analysis tool designed by A. Gavrus (1992-2000).
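The non-linear regression step can be illustrated with a generic Herschel-Bulkley fit, tau = tau0 + K * gamma_dot^n (Bingham behavior when n = 1), on synthetic shear stress-shear rate data; neither the data nor the parameter values below come from the paper.

```python
# Fitting a Herschel-Bulkley / Bingham-type law to synthetic rheometer data
# by non-linear regression. Illustrative sketch only.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, k, n):
    return tau0 + k * gamma_dot**n

rng = np.random.default_rng(3)
gamma_dot = np.linspace(0.5, 100.0, 40)                  # shear rate, 1/s
tau = herschel_bulkley(gamma_dot, 5.0, 0.8, 0.9)         # "true" parameters (assumed)
tau += rng.normal(0.0, 0.2, size=tau.size)               # measurement noise

params, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[1.0, 1.0, 1.0])
tau0, k, n = params
print(f"tau0 = {tau0:.2f} Pa, K = {k:.2f} Pa*s^n, n = {n:.2f}")
```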
Arrhenius-kinetics evidence for quantum tunneling in microbial "social" decision rates.
Clark, Kevin B
2010-11-01
Social-like bacteria, fungi and protozoa communicate chemical and behavioral signals to coordinate their specializations into an ordered group of individuals capable of fitter ecological performance. Examples of microbial "social" behaviors include sporulation and dispersion, kin recognition and nonclonal or paired reproduction. Paired reproduction by ciliates is believed to involve intra- and intermate selection through pheromone-stimulated "courting" rituals. Such social maneuvering minimizes survival-reproduction tradeoffs while sorting superior mates from inferior ones, lowering the vertical spread of deleterious genes in geographically constricted populations and possibly promoting advantageous genetic innovations. In a previous article, I reported findings that the heterotrich Spirostomum ambiguum can out-compete mating rivals in simulated social trials by learning behavioral heuristics which it then employs to store and select sets of altruistic and deceptive signaling strategies. Frequencies of strategy use typically follow Maxwell-Boltzmann (MB), Fermi-Dirac (FD) or Bose-Einstein (BE) statistical distributions. For ciliates most adept at social decision making, a brief classical MB computational phase drives signaling behavior into a later quantum BE computational phase that condenses or favors the selection of a single fittest strategy. Appearance of the network analogue of BE condensation coincides with Hebbian-like trial-and-error learning and is consistent with the idea that cells behave as heat engines, where loss of energy associated with specific cellular machinery critical for mating decisions effectively reduces the temperature of intracellular enzymes cohering into weak Fröhlich superposition. I extend these findings by showing that the rates at which ciliates switch serial behavioral strategies agree with principles of chemical reactions exhibiting linear and nonlinear Arrhenius kinetics during respective classical and quantum computations. Nonlinear Arrhenius kinetics in ciliate decision making suggest that transitions from one signaling strategy to another result from a computational analogue of quantum tunneling in social information processing.
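For reference, a linear Arrhenius analysis of the kind invoked above fits ln k = ln A - Ea/(R T) to rate constants at several temperatures; the sketch below recovers an activation energy from synthetic data (the paper infers such kinetics from strategy-switching rates, not from the values used here).

```python
# Linear Arrhenius fit: recover Ea and A from rate constants k(T) via
# ln k = ln A - Ea / (R T). Data are synthetic and illustrative.
import numpy as np

R = 8.314                                   # J/(mol K)
T = np.array([288.0, 293.0, 298.0, 303.0, 308.0])       # K
Ea_true, A_true = 55e3, 1.0e9               # assumed "true" values
rng = np.random.default_rng(7)
k = A_true * np.exp(-Ea_true / (R * T)) * np.exp(rng.normal(0.0, 0.02, T.size))

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
print(f"fitted Ea = {-slope * R / 1000:.1f} kJ/mol (true 55.0), "
      f"A = {np.exp(intercept):.2e} 1/s")
```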
Principles for a Successful Computerized Physician Order Entry Implementation
Ash, Joan S.; Fournier, Lara; Stavri, P. Zoë; Dykstra, Richard
2003-01-01
To identify success factors for implementing computerized physician order entry (CPOE), our research team took both a top-down and a bottom-up approach and reconciled the results to develop twelve overarching principles to guide implementation. A consensus panel of experts produced ten Considerations with nearly 150 sub-considerations, and a three-year project using qualitative methods at multiple successful sites for a grounded theory approach yielded ten general themes with 24 sub-themes. After reconciliation using a meta-matrix approach, twelve Principles emerged, which cluster into groups forming the mnemonic CPOE. Computer technology principles include: temporal concerns; technology and meeting information needs; multidimensional integration; and costs. Personal principles are: value to users and tradeoffs; essential people; and training and support. Organizational principles include: foundational underpinnings; collaborative project management; terms, concepts and connotations; and improvement through evaluation and learning. Finally, Environmental issues include the motivation and context for implementing such systems. PMID:14728129
Generalizing Landauer's principle
NASA Astrophysics Data System (ADS)
Maroney, O. J. E.
2009-03-01
In a recent paper [Stud. Hist. Philos. Mod. Phys. 36, 355 (2005)] it is argued that to properly understand the thermodynamics of Landauer’s principle it is necessary to extend the concept of logical operations to include indeterministic operations. Here we examine the thermodynamics of such operations in more detail, extending the work of Landauer to include indeterministic operations and to include logical states with variable entropies, temperatures, and mean energies. We derive the most general statement of Landauer’s principle and prove its universality, extending considerably the validity of previous proofs. This confirms conjectures made that all logical operations may, in principle, be performed in a thermodynamically reversible fashion, although logically irreversible operations would require special, practically rather difficult, conditions to do so. We demonstrate a physical process that can perform any computation without work requirements or heat exchange with the environment. Many widespread statements of Landauer’s principle are shown to be special cases of our generalized principle.
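As a quick numerical point of reference, the sketch below evaluates the familiar special case that the generalized principle contains, the standard Landauer erasure bound of k_B T ln 2 per bit; it does not capture the indeterministic operations or variable-entropy logical states treated in the paper.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(T, n_bits=1):
    """Minimum heat (J) dissipated when erasing n_bits at temperature T."""
    return n_bits * k_B * T * math.log(2)

print(landauer_bound(300.0))        # ~2.87e-21 J per bit at room temperature
print(landauer_bound(300.0, 1e9))   # erasing a gigabit
```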
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Ohayon, Roger
1991-01-01
A general three-field variational principle is obtained for the motion of an acoustic fluid enclosed in a rigid or flexible container by the method of canonical decomposition applied to a modified form of the wave equation in the displacement potential. The general principle is specialized to a mixed two-field principle that contains the fluid displacement potential and pressure as independent fields. This principle contains a free parameter alpha. Semidiscrete finite-element equations of motion based on this principle are displayed and applied to the transient response and free-vibrations of the coupled fluid-structure problem. It is shown that a particular setting of alpha yields a rich set of formulations that can be customized to fit physical and computational requirements. The variational principle is then extended to handle slosh motions in a uniform gravity field, and used to derive semidiscrete equations of motion that account for such effects.
Design Principles for a Comprehensive Library System.
ERIC Educational Resources Information Center
Uluakar, Tamer; And Others
1981-01-01
Describes an online design featuring circulation control, catalog access, and serial holdings that uses an incremental approach to system development. Utilizing a dedicated computer, this second of three releases pays particular attention to present and predicted computing capabilities as well as trends in library automation. (Author/RAA)
Computer-Generated, Three-Dimensional Character Animation.
ERIC Educational Resources Information Center
Van Baerle, Susan Lynn
This master's thesis begins by discussing the differences between 3-D computer animation of solid three-dimensional, or monolithic, objects, and the animation of characters, i.e., collections of movable parts with soft pliable surfaces. Principles from two-dimensional character animation that can be transferred to three-dimensional character…
Is Computer Science Compatible with Technological Literacy?
ERIC Educational Resources Information Center
Buckler, Chris; Koperski, Kevin; Loveland, Thomas R.
2018-01-01
Although technology education has evolved over time, and pressure has increased to infuse more engineering principles and increase links to STEM (science, technology, engineering, and mathematics) initiatives, there has never been an official alignment between technology and engineering education and computer science. There is movement at the federal level…
The Variation Theorem Applied to H-2+: A Simple Quantum Chemistry Computer Project
ERIC Educational Resources Information Center
Robiette, Alan G.
1975-01-01
Describes a student project which requires limited knowledge of Fortran and only minimal computing resources. The results illustrate such important principles of quantum mechanics as the variation theorem and the virial theorem. Presents sample calculations and the subprogram for energy calculations. (GS)
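A hedged, much simpler analogue of such a student project is sketched below: a one-parameter Gaussian trial function for the hydrogen atom (not the H2+ system of the article), whose variational energy E(alpha) = (3/2)*alpha - 2*sqrt(2*alpha/pi) hartree is minimized numerically and, as the variation theorem requires, stays above the exact value of -0.5 hartree.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Variational energy (hartree) of the hydrogen atom with the trial function
# psi(r) = exp(-alpha * r**2):
#   E(alpha) = (3/2)*alpha - 2*sqrt(2*alpha/pi)
def energy(alpha):
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(energy, bounds=(1e-3, 5.0), method="bounded")
print(f"optimal alpha = {res.x:.4f}, E = {res.fun:.4f} hartree")
# E ~ -0.4244 hartree, above the exact -0.5 hartree, as the variation
# theorem guarantees for any trial wavefunction.
```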
Computer-Mediated Intersensory Learning Model for Students with Learning Disabilities
ERIC Educational Resources Information Center
Seok, Soonhwa; DaCosta, Boaventura; Kinsell, Carolyn; Poggio, John C.; Meyen, Edward L.
2010-01-01
This article proposes a computer-mediated intersensory learning model as an alternative to traditional instructional approaches for students with learning disabilities (LDs) in the inclusive classroom. Predominant practices of classroom inclusion today reflect the six principles of zero reject, nondiscriminatory evaluation, appropriate education,…
Teaching Molecular Biology with Microcomputers.
ERIC Educational Resources Information Center
Reiss, Rebecca; Jameson, David
1984-01-01
Describes a series of computer programs that use simulation and gaming techniques to present the basic principles of the central dogma of molecular genetics, mutation, and the genetic code. A history of discoveries in molecular biology is presented and the evolution of these computer assisted instructional programs is described. (MBR)
Computer Series, 37: Bits and Pieces, 14.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1983-01-01
Thirteen computer/calculator programs (available from authors) are described. These include: representation of molecules as 3-D models; animated 3-D graphical display of line drawings of molecules; principles of Fourier-transform nuclear magnetic resonance; tutorial program for pH calculation; balancing chemical reactions using a hand-held…
What Communication Theories Can Teach the Designer of Computer-Based Training.
ERIC Educational Resources Information Center
Larsen, Ronald E.
1985-01-01
Reviews characteristics of computer-based training (CBT) that make application of communication theories appropriate and presents principles from communication theory (e.g., general systems theory, symbolic interactionism, rule theories, and interpersonal communication theories) to illustrate how CBT developers can profitably apply them to…
Practice Makes Perfect: Using a Computer-Based Business Simulation in Entrepreneurship Education
ERIC Educational Resources Information Center
Armer, Gina R. M.
2011-01-01
This article explains the use of a specific computer-based simulation program as a successful experiential learning model and as a way to increase student motivation while augmenting conventional methods of business instruction. This model is based on established adult learning principles.
Weighted mining of massive collections of [Formula: see text]-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines-think of computational genomics and observational cosmology-often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
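Princessp obtains the weights themselves by convex optimization; the sketch below only illustrates, under stated assumptions, how fixed prioritization weights are commonly combined with a Benjamini-Hochberg step (weighted BH in the style of Genovese et al.), using synthetic p-values and an arbitrary weighting rule rather than the authors' method.

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: apply the BH step-up rule to p_i / w_i,
    with weights normalized to average 1."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                      # mean weight = 1
    q = p / w                             # weighted p-values
    m = len(q)
    order = np.argsort(q)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = q[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True              # reject the k smallest weighted p-values
    return reject

rng = np.random.default_rng(0)
pvals = rng.uniform(size=1000)
pvals[:50] = rng.uniform(0, 1e-3, size=50)             # some true signals
weights = np.where(np.arange(1000) < 100, 2.0, 1.0)    # prioritized hypotheses
print(weighted_bh(pvals, weights).sum(), "rejections")
```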
Learning visual balance from large-scale datasets of aesthetically highly rated images
NASA Astrophysics Data System (ADS)
Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.
2015-03-01
The concept of visual balance is innate for humans and influences how we perceive visual aesthetics and cognize harmony. Although visual balance is a vital principle of design and is taught in schools of design, it has barely been quantified. On the other hand, with the emergence of automatic/semi-automatic visual designs for self-publishing, learning visual balance and modeling it computationally may improve the aesthetics of such designs. In this paper, we present how the quest to understand visual balance inspired us to revisit one of the well-known theories in the visual arts, the so-called theory of "visual rightness" elucidated by Arnheim. We frame Arnheim's hypothesis as a design-mining problem with the goal of learning visual balance from the work of professionals. We collected a dataset of 120K aesthetically highly rated images from a professional photography website. We then computed factors that contribute to visual balance based on the notion of visual saliency. We fitted a mixture of Gaussians to the saliency maps of the images and obtained the hotspots of the images. Our inferred Gaussians align with Arnheim's hotspots and confirm his theory. Moreover, the results support the viability of the center of mass, symmetry, as well as the Rule of Thirds in our dataset.
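A minimal sketch of the hotspot-extraction idea is given below, assuming a saliency map is already available as a 2-D array; the sampling scheme, component count, and toy map are assumptions and not the authors' pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def saliency_hotspots(saliency, n_components=3, n_samples=5000, seed=0):
    """Fit a GMM to pixel locations sampled in proportion to saliency and
    return the component means (hotspot centres) in (row, col) coordinates."""
    rng = np.random.default_rng(seed)
    h, w = saliency.shape
    prob = saliency.ravel() / saliency.sum()
    idx = rng.choice(h * w, size=n_samples, p=prob)
    pts = np.column_stack(np.unravel_index(idx, (h, w))).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(pts)
    return gmm.means_

# Toy "saliency map" with two bright regions.
sal = np.zeros((100, 150))
sal[20:35, 30:50] = 1.0
sal[60:80, 100:130] = 2.0
print(saliency_hotspots(sal, n_components=2))
```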
Computation and Pre-Parametric Design
1988-09-01
dynamic systems. Instruments, sensors, and actuators fall into this class of devices, and examples include pressure gages, pneumatic cylinders... novel tiltmeter. The design was based on an abstraction of the problem and clever use of analogy. [Maher87] proposes that certain design synthesis... temperature differences. This principle is exploited in order to build robust, inexpensive and accurate temperature sensors. The principle can also be used
The Phenomenal World of Physics. The Science Club. Ages 10-14. [CD-ROM].
ERIC Educational Resources Information Center
1999
This CD-ROM allows students to learn about physics principles and the scientists who discovered them through genius or luck. The simplicity of these physical laws and how the discovery of these laws has improved the daily lives of humans is discussed. The computer program explores the physics behind the earth's rotation, Archimedes' Principles,…
Calculating phase diagrams using PANDAT and panengine
NASA Astrophysics Data System (ADS)
Chen, S.-L.; Zhang, F.; Xie, F.-Y.; Daniel, S.; Yan, X.-Y.; Chang, Y. A.; Schmid-Fetzer, R.; Oates, W. A.
2003-12-01
Knowledge of phase equilibria or phase diagrams and thermodynamic properties is important in alloy design and materials-processing simulation. In principle, stable phase equilibrium is uniquely determined by the thermodynamic properties of the system, such as the Gibbs energy functions of the phases. PANDAT, a new computer software package for multicomponent phase-diagram calculation, was developed under the guidance of this principle.
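PANDAT itself is a full multicomponent package; as a hedged toy illustration of the same principle (phase equilibrium determined by the Gibbs energy functions of the phases), the sketch below locates the two-phase region of a symmetric binary regular solution, for which the binodal compositions satisfy dG/dx = 0, using a hypothetical interaction parameter.

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314          # J/(mol K)
omega = 15_000.0   # regular-solution interaction parameter, J/mol (hypothetical)

def dG_dx(x, T):
    """Derivative of the molar Gibbs energy of mixing for a symmetric
    regular solution: G = RT[x ln x + (1-x) ln(1-x)] + omega*x*(1-x)."""
    return R * T * np.log(x / (1.0 - x)) + omega * (1.0 - 2.0 * x)

T_c = omega / (2.0 * R)      # critical (consolute) temperature
for T in (500.0, 700.0, 800.0):
    if T >= T_c:
        print(f"T = {T:.0f} K: single phase (above T_c = {T_c:.0f} K)")
    else:
        # By symmetry the binodal compositions are x and 1-x with dG/dx = 0.
        x1 = brentq(dG_dx, 1e-6, 0.499, args=(T,))
        print(f"T = {T:.0f} K: two-phase region between x = {x1:.3f} and {1 - x1:.3f}")
```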
MIADS2 ... an alphanumeric map information assembly and display system for a large computer
Elliot L. Amidon
1966-01-01
A major improvement and extension of the Map Information Assembly and Display System (MIADS) developed in 1964 is described. Basic principles remain unchanged, but the computer programs have been expanded and rewritten for a large computer, in Fortran IV and MAP languages. The code system is extended from 99 integers to about 2,200 alphanumeric 2-character codes. Hand-...
GPU computing in medical physics: a review.
Pratx, Guillem; Xing, Lei
2011-05-01
The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.
Basic hydraulic principles of open-channel flow
Jobson, Harvey E.; Froehlich, David C.
1988-01-01
The three basic principles of open-channel-flow analysis--the conservation of mass, energy, and momentum--are derived, explained, and applied to solve problems of open-channel flow. These principles are introduced at a level that can be comprehended by a person with an understanding of the principles of physics and mechanics equivalent to that presented in the first college level course of the subject. The reader is assumed to have a working knowledge of algebra and plane geometry as well as some knowledge of calculus. Once the principles have been derived, a number of example applications are presented that illustrate the computation of flow through culverts and bridges, and over structures, such as dams and weirs. Because resistance to flow is a major obstacle to the successful application of the energy principle to open-channel flow, procedures are outlined for the rational selection of flow resistance coefficients. The principle of specific energy is shown to be useful in the prediction of water-surface profiles both in the qualitative and quantitative sense. (USGS)
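As an example of the kind of computation the report walks through, the sketch below solves Manning's equation for normal depth in a rectangular channel and evaluates the critical depth from specific-energy considerations; the discharge, slope, width, and resistance coefficient are hypothetical values.

```python
from scipy.optimize import brentq

def normal_depth(Q, b, S, n):
    """Normal depth (m) in a rectangular channel from Manning's equation
    (SI units): Q = (1/n) * A * R**(2/3) * sqrt(S), with A = b*y, P = b + 2*y."""
    def residual(y):
        A = b * y
        R = A / (b + 2.0 * y)       # hydraulic radius
        return (A * R ** (2.0 / 3.0)) * S ** 0.5 / n - Q
    return brentq(residual, 1e-6, 50.0)

# Hypothetical channel: 10 m3/s in a 5 m wide channel, slope 0.001, n = 0.03.
y_n = normal_depth(Q=10.0, b=5.0, S=0.001, n=0.03)
print(f"normal depth ~ {y_n:.2f} m")

# Critical depth for a rectangular channel: y_c = (q**2 / g)**(1/3), q = Q/b.
y_c = ((10.0 / 5.0) ** 2 / 9.81) ** (1.0 / 3.0)
print(f"critical depth ~ {y_c:.2f} m")
```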
Aeroacoustic and aerodynamic applications of the theory of nonequilibrium thermodynamics
NASA Technical Reports Server (NTRS)
Horne, W. Clifton; Smith, Charles A.; Karamcheti, Krishnamurty
1991-01-01
Recent developments in the field of nonequilibrium thermodynamics associated with viscous flows are examined and related to developments to the understanding of specific phenomena in aerodynamics and aeroacoustics. A key element of the nonequilibrium theory is the principle of minimum entropy production rate for steady dissipative processes near equilibrium, and variational calculus is used to apply this principle to several examples of viscous flow. A review of nonequilibrium thermodynamics and its role in fluid motion are presented. Several formulations are presented of the local entropy production rate and the local energy dissipation rate, two quantities that are of central importance to the theory. These expressions and the principle of minimum entropy production rate for steady viscous flows are used to identify parallel-wall channel flow and irrotational flow as having minimally dissipative velocity distributions. Features of irrotational, steady, viscous flow near an airfoil, such as the effect of trailing-edge radius on circulation, are also found to be compatible with the minimum principle. Finally, the minimum principle is used to interpret the stability of infinitesimal and finite amplitude disturbances in an initially laminar, parallel shear flow, with results that are consistent with experiment and linearized hydrodynamic stability theory. These results suggest that a thermodynamic approach may be useful in unifying the understanding of many diverse phenomena in aerodynamics and aeroacoustics.
Zarrabi, Bahar; Burce, Karen K; Seal, Stella M; Lifchez, Scott D; Redett, Richard J; Frick, Kevin D; Dorafshar, Amir H; Cooney, Carisa M
2017-05-01
Rising health care costs, decreasing reimbursement rates, and changes in American health care are forcing physicians to become increasingly business-minded. Both academic and private plastic surgeons can benefit from being educated in business principles. The authors conducted a systematic review to identify existing business curricula and integrated a business principles curriculum into residency training. The authors anonymously surveyed their department regarding perceived importance of business principles and performed a systematic literature review from 1993 to 2013 using PubMed and Embase to identify residency training programs that had designed/implemented business curricula. Subsequently, the authors implemented a formal, quarterly business curriculum. Thirty-two of 36 physicians (88.9 percent; 76.6 percent response rate) stated business principles are either "pretty important" or "very important" to being a doctor. Only 36 percent of faculty and 41 percent of trainees had previous business instruction. The authors identified 434 articles in the systematic review: 29 documented formal business curricula. Twelve topics were addressed, with practice management/administration (n = 22) and systems-based practice (n = 6) being the most common. Four articles were from surgical specialties: otolaryngology (n = 1), general surgery (n = 2), and combined general surgery/plastic surgery (n = 1). Teaching formats included lectures and self-directed learning modules; outcomes and participant satisfaction were reported inconsistently. From August of 2013 to June of 2015, the authors held eight business principles sessions. Postsession surveys demonstrated moderately to extremely satisfied responses in 75 percent or more of resident/fellow respondents (n = 13; response rate, 48.1 percent) and faculty (n = 9; response rate, 45.0 percent). Business principles can be integrated into residency training programs. Having speakers familiar with the physician audience and a session coordinator is vital to program success.
A College-Level, Computer-Assisted Course in Nutrition.
ERIC Educational Resources Information Center
Carew, Lyndon B.; And Others
1984-01-01
Describes a computer-assisted instructional (CAI) program to accompany a 15-week, college-level, introductory lecture course on the scientific principles of mammalian nutrition. The nature of the program is discussed, and examples of how it operates are provided. Comments on the evaluation of the program are also provided. (JN)
Real-Time Computer-Mediated Communication: Email and Instant Messaging Simulation
ERIC Educational Resources Information Center
Newman, Amy
2007-01-01
As computer-mediated communication becomes increasingly prevalent in the workplace, students need to apply effective writing principles to today's technologies. Email, in particular, requires interns and new hires to manage incoming messages, use an appropriate tone, and craft clear, concise messages. In addition, with instant messaging (IM)…
A CS1 Pedagogical Approach to Parallel Thinking
ERIC Educational Resources Information Center
Rague, Brian William
2010-01-01
Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within…
Individual Differences in Learning from an Intelligent Discovery World: Smithtown.
ERIC Educational Resources Information Center
Shute, Valerie J.
"Smithtown" is an intelligent computer program designed to enhance an individual's scientific inquiry skills as well as to provide an environment for learning principles of basic microeconomics. It was hypothesized that intelligent computer instruction on applying effective interrogative skills (e.g., changing one variable at a time…
Making Construals as a New Digital Skill for Learning
ERIC Educational Resources Information Center
Beynon, Meurig; Boyatt, Russell; Foss, Jonathan; Hall, Chris; Hudnott, Elizabeth; Russ, Steve; Sutinen, Erkki; Macleod, Hamish; Kommers, Piet
2015-01-01
Making construals is a practical approach to computing that was originally developed for and by computer science undergraduates. It is the central theme of an EU project aimed at disseminating the relevant principles to a broader audience. This involves bringing together technical experts in making construals and international experts in…
Teaching Accounting with Computers.
ERIC Educational Resources Information Center
Shaoul, Jean
This paper addresses the numerous ways that computers may be used to enhance the teaching of accounting and business topics. It focuses on the pedagogical use of spreadsheet software to improve the conceptual coverage of accounting principles and practice, increase student understanding by involvement in the solution process, and reduce the amount…
Optimizing Computer Assisted Instruction By Applying Principles of Learning Theory.
ERIC Educational Resources Information Center
Edwards, Thomas O.
The development of learning theory and its application to computer-assisted instruction (CAI) are described. Among the early theoretical constructs thought to be important are E. L. Thorndike's connectionist concept of learning, Neal Miller's theory of motivation, and B. F. Skinner's theory of operant conditioning. Early devices incorporating those…
Electronic Computer and Switching Systems Specialist (AFSC 30554).
ERIC Educational Resources Information Center
Air Univ., Gunter AFS, Ala. Extension Course Inst.
This course is intended to train Air Force personnel to become electronic computer and switching systems specialists. One part of the course consists of a three-volume career development course. Topics are maintenance orientation (15 hours), electronic principles and digital techniques (87 hours), and systems maintenance (51 hours). Each volume…
Learning in Transformational Computer Games: Exploring Design Principles for a Nanotechnology Game
ERIC Educational Resources Information Center
Masek, Martin; Murcia, Karen; Morrison, Jason; Newhouse, Paul; Hackling, Mark
2012-01-01
Transformational games are digital computer and video applications purposefully designed to create engaging and immersive learning environments for delivering specified learning goals, outcomes and experiences. The virtual world of a transformational game becomes the social environment within which learning occurs as an outcome of the complex…
CALL Essentials: Principles and Practice in CALL Classrooms
ERIC Educational Resources Information Center
Egbert, Joy
2005-01-01
Computers and the Internet offer innovative teachers exciting ways to enhance their pedagogy and capture their students' attention. These technologies have created a growing field of inquiry, computer-assisted language learning (CALL). As new technologies have emerged, teaching professionals have adapted them to support teachers and learners in…
Imprinting Community College Computer Science Education with Software Engineering Principles
ERIC Educational Resources Information Center
Hundley, Jacqueline Holliday
2012-01-01
Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and…
ERIC Educational Resources Information Center
Gillespie, Robert W.
A market exchange simulation utilizing the PLATO computer-assisted instructional system at the University of Illinois has been designed to teach students the principles of a general equilibrium system. It serves a laboratory function which supplements traditional instruction by stimulating students' interests and providing them with illustrations…
A Survey of Display Hardware and Software.
ERIC Educational Resources Information Center
Poore, Jesse H., Jr.; And Others
Reported are two papers which deal with the fundamentals of display hardware and software in computer systems. The first report presents the basic principles of display hardware in terms of image generation from buffers presumed to be loaded and controlled by a digital computer. The concepts surrounding the electrostatic tube, the electromagnetic…
Brunk, Elizabeth; Ashari, Negar; Athri, Prashanth; Campomanes, Pablo; de Carvalho, F Franco; Curchod, Basile F E; Diamantis, Polydefkis; Doemer, Manuel; Garrec, Julian; Laktionov, Andrey; Micciarelli, Marco; Neri, Marilisa; Palermo, Giulia; Penfold, Thomas J; Vanni, Stefano; Tavernelli, Ivano; Rothlisberger, Ursula
2011-01-01
The Laboratory of Computational Chemistry and Biochemistry is active in the development and application of first-principles based simulations of complex chemical and biochemical phenomena. Here, we review some of our recent efforts in extending these methods to larger systems, longer time scales and increased accuracies. Their versatility is illustrated with a diverse range of applications, ranging from the determination of the gas phase structure of the cyclic decapeptide gramicidin S, to the study of G protein coupled receptors, the interaction of transition metal based anti-cancer agents with protein targets, the mechanism of action of DNA repair enzymes, the role of metal ions in neurodegenerative diseases and the computational design of dye-sensitized solar cells. Many of these projects are done in collaboration with experimental groups from the Institute of Chemical Sciences and Engineering (ISIC) at the EPFL.
Predictive codes of familiarity and context during the perceptual learning of facial identities
NASA Astrophysics Data System (ADS)
Apps, Matthew A. J.; Tsakiris, Manos
2013-11-01
Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
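A hedged toy illustration of prediction-error-driven updating of familiarity is sketched below; the learning rate, the binary observation, and the update rule are illustrative assumptions and not the computational model fitted in the study.

```python
def update_familiarity(familiarity, observed=1.0, learning_rate=0.2):
    """One predictive-coding-style update: familiarity moves toward the
    observation by a fraction of the prediction error."""
    prediction_error = observed - familiarity
    return familiarity + learning_rate * prediction_error

# A face that starts unfamiliar becomes familiar over repeated exposures.
f = 0.0
for trial in range(1, 11):
    f = update_familiarity(f)
    print(f"exposure {trial}: familiarity = {f:.3f}")
```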
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
The prediction of the thermal properties of solids and liquids is central to numerous problems in condensed matter physics and materials science, including the study of thermal management of opto-electronic and energy conversion devices. We present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at non-equilibrium conditions. Our formulation is based on a generalization of the approach-to-equilibrium technique, using sinusoidal temperature gradients, and it only requires calculations of first-principles trajectories and atomic forces. We discuss results and computational requirements for a representative, simple oxide, MgO, and compare with experiments and data obtained with classical potentials. This work was supported by MICCoM as part of the Computational Materials Science Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division under Grant DOE/BES 5J-30.
An exploratory study of live vs. web-based delivery of a phlebotomy program.
Fydryszewski, Nadine A; Scanlan, Craig; Guiles, H Jesse; Tucker, Ann
2010-01-01
Changes in the student population and increased Web-based education offerings provided the impetus to assess pedagogy, cognitive outcomes, and perceptions of course quality. This study explored cognitive outcomes and students' perception of course quality, related to the Seven Principles for Good Practice in Undergraduate Education, for live classroom delivery compared to Web-based delivery of a phlebotomy program. The design was quasi-experimental; students self-selected to enroll in the live or Web-based program. For cognitive outcomes, no significant difference was found between the groups. Student perception of course quality differed only for Principle One (student-instructor contact). Students in the live classroom rated Principle One higher for the Part I course compared to the Web-based group. For the Part II course, there was no significant difference in perception of course quality related to any of the Seven Principles. The more constructivist pedagogy in the Part II course did not improve cognitive outcomes; however, it may have contributed to knowledge retention. The live group rated Principle One in the Part II course evaluation relatively the same as they did for the Part I course evaluation. However, the Web-based group rated Principle One considerably higher for the Part II course than for the Part I course. Future studies with a larger sample could explore improved course quality assessment instruments.
NASA Astrophysics Data System (ADS)
Shimada, Kazuhiro
2018-03-01
We perform first-principles calculations to investigate the crystal structure, elastic and piezoelectric properties, and spontaneous polarization of orthorhombic M2O3 (M = Al, Ga, In, Sc, Y) with Pna21 space group based on density functional theory. The lattice parameters, full elastic stiffness constants, piezoelectric stress and strain constants, and spontaneous polarization are successfully predicted. Comparison with available experimental and computational results indicates the validity of our computational results. Detailed analysis of the results clarifies the difference in the bonding character and the origin of the strong piezoelectric response and large spontaneous polarization.
NASA Astrophysics Data System (ADS)
Hassan, Irtaza; Donati, Luca; Stensitzki, Till; Keller, Bettina G.; Heyne, Karsten; Imhof, Petra
2018-04-01
We have combined infrared (IR) experiments with molecular dynamics (MD) simulations in solution at finite temperature to analyse the vibrational signature of the small floppy peptide Alanine-Leucine. IR spectra computed from first-principles MD simulations exhibit no distinct differences between conformational clusters of α-helix or β-sheet-like folds with different orientations of the bulky leucine side chain. All computed spectra show two prominent bands, in good agreement with the experiment, that are assigned to the stretch vibrations of the carbonyl and carboxyl group, respectively. Variations in band widths and exact maxima are likely due to small fluctuations in the backbone torsion angles.
Computational and Experimental Approaches to Visual Aesthetics
Brachmann, Anselm; Redies, Christoph
2017-01-01
Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has a huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification, as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view. PMID:29184491
A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.
Moretti, Loris; Sartori, Luca
2016-10-01
Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed, from general layout to technical details all aspects are covered. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
QM Automata: A New Class of Restricted Quantum Membrane Automata.
Giannakis, Konstantinos; Singh, Alexandros; Kastampolidou, Kalliopi; Papalitsas, Christos; Andronikos, Theodore
2017-01-01
The term "Unconventional Computing" describes the use of non-standard methods and models in computing. It is a recently established field, with many interesting and promising results. In this work we combine notions from quantum computing with aspects of membrane computing to define what we call QM automata. Specifically, we introduce a variant of quantum membrane automata that operate in accordance with the principles of quantum computing. We explore the functionality and capabilities of the QM automata through indicative examples. Finally we suggest future directions for research on QM automata.
Secure Multiparty Quantum Computation for Summation and Multiplication.
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-21
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
The Regulation of Medical Computer Software as a “Device” under the Food, Drug, and Cosmetic Act
Brannigan, Vincent
1986-01-01
Recent developments in computer software have raised the possibility that federal regulators may claim to control medical computer software as a “device” under the Food, Drug and Cosmetic Act. The purpose of this paper is to analyze the FDCA to determine whether computer software is included in the statutory scheme, examine constitutional arguments relating to computer software, and discuss regulatory principles that should be taken into account when deciding appropriate regulation. This paper is limited to computer program output used by humans in deciding appropriate medical therapy for a patient.
NASA Technical Reports Server (NTRS)
Somersall, A. C.; Guillet, J. E.
1982-01-01
A computer model was developed which simulates, in principle, the chemical changes in the photooxidation of hydrocarbons, using as input data a set of elementary reactions, the corresponding kinetic rate data, and appropriate initial conditions. The model was refined and exploited to examine more closely the photooxidation and photostabilization of a hydrocarbon polymer. The results lead to the following observations. (1) The time to failure, tau sub f (chosen as the level of 5% C-H bond oxidation, which is within the range anticipated for a marked change in mechanical properties), varies as the inverse square root of the light intensity. However, tau sub f is almost unaffected by both the photoinitiator type and concentration. (2) The time to failure decreases with the rate of abstraction of C-H by peroxy radicals but increases with the rate of bimolecular radical termination controlled by diffusion. (3) Of the various stabilization mechanisms considered, the trapping of peroxy radicals is distinctly the most effective, although the concomitant decomposition of hydroperoxide is also desirable.
Growth rate measurement in free jet experiments
NASA Astrophysics Data System (ADS)
Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent
2017-07-01
An experimental method was developed to measure the growth rate of the capillary instability of free liquid jets. The method uses a standard shadowgraph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free-falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian one and a viscoelastic one. Growth rate measurements are found to be in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structures on the jet. This last result demonstrates the capability of the present technique to explore, in the future, the dynamics of viscoelastic liquid jets.
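A minimal sketch of the statistical analysis described above follows, under the assumption that the growth rate can be read off as the slope of log(standard deviation) versus time (or, equivalently, axial position divided by the jet speed); the synthetic profiles and noise model are placeholders, not the experimental data.

```python
import numpy as np

def capillary_growth_rate(profiles, t):
    """Estimate the instability growth rate from a stack of jet radius
    perturbation profiles (shape: n_realizations x n_samples) at times t.
    The ensemble standard deviation grows as sigma(t) ~ sigma0 * exp(omega*t),
    so omega is the slope of log(sigma) versus t."""
    sigma = profiles.std(axis=0)                 # std across realizations
    omega, log_sigma0 = np.polyfit(t, np.log(sigma), 1)
    return omega, np.exp(log_sigma0)

# Synthetic check: perturbations whose spread grows with omega = 40 1/s.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.05, 200)
true_sigma = 1e-3 * np.exp(40.0 * t)
profiles = rng.normal(scale=true_sigma, size=(64, t.size))
omega, sigma0 = capillary_growth_rate(profiles, t)
print(f"recovered growth rate ~ {omega:.1f} 1/s")
```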
Hu, Suxing; Collins, Lee A.; Goncharov, V. N.; ...
2016-05-26
Using first-principles (FP) methods, we have performed ab initio computations of the equation of state (EOS), thermal conductivity, and opacity of deuterium-tritium (DT) over a wide range of densities and temperatures for inertial confinement fusion (ICF) applications. These systematic investigations have recently been expanded to accurately compute the plasma properties of CH ablators under extreme conditions. In particular, the first-principles EOS and thermal-conductivity tables of CH are self-consistently built from such FP calculations, which are benchmarked by experimental measurements. When compared with the traditional models used for these plasma properties in hydrocodes, significant differences have been identified in the warm dense plasma regime. When these FP-calculated properties of DT and CH were used in our hydrodynamic simulations of ICF implosions, we found that the target performance in terms of neutron yield and energy gain can vary by a factor of 2 to 3 relative to traditional model simulations.
Roy, Tapta Kanchan; Sharma, Rahul; Gerber, R Benny
2016-01-21
First-principles quantum calculations of the anharmonic vibrational spectroscopy of three protected dipeptides are carried out and compared with experimental data. Using hybrid HF/MP2 potentials, the Vibrational Self-Consistent Field with Second-Order Perturbation Correction (VSCF-PT2) algorithm is used to compute the spectra without any ad hoc scaling or fitting. All of the vibrational modes (135 for the largest system) are treated quantum mechanically and anharmonically, using full pair-wise coupling potentials to represent the interaction between different modes. In the hybrid potential scheme, the MP2 method is used for the harmonic part of the potential and a modified HF method is used for the anharmonic part. The overall agreement between computed spectra and experiment is very good and reveals different signatures for different conformers. This study shows that first-principles spectroscopic calculations of good accuracy are possible for dipeptides, and hence opens possibilities for determining dipeptide conformer structures by comparing spectroscopic calculations with experiment.
Acer, N; Bayar, B; Basaloglu, H; Oner, E; Bayar, K; Sankur, S
2008-11-20
The size and shape of the tarsal bones are especially relevant when considering some orthopedic diseases such as clubfoot. For this reason, the measurements of the tarsal bones have been the subject of many studies, none of which has used stereological methods to estimate volume. In the present stereological study, we estimated the volume of the calcaneal bone of normal feet and of dry bones. We used a combination of the Cavalieri principle and computed tomography scans, taken from eight males and nine dry calcanei, to estimate the volumes of the calcaneal bones. The mean volume of the dry calcaneal bones was estimated as 49.11 +/- 10.7 cm(3) using the point-counting method and 48.22 +/- 11.92 cm(3) using the Archimedes principle. A positive correlation was found between anthropometric measurements and the volume of the calcaneal bones. The findings of the present study using stereological methods could provide data for the evaluation of normal and pathological volumes of calcaneal bones.
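The Cavalieri estimator used in the study reduces to a one-line formula, V = t x a_p x sum(P_i); the sketch below applies it to hypothetical point counts and grid parameters chosen only for illustration, not the counts from the study.

```python
def cavalieri_volume(point_counts, slice_thickness_cm, area_per_point_cm2):
    """Cavalieri principle: V = t * a_p * sum(P_i), where P_i is the number of
    grid points hitting the structure on slice i, t the slice spacing and a_p
    the area associated with one grid point."""
    return slice_thickness_cm * area_per_point_cm2 * sum(point_counts)

# Hypothetical CT data: 10 slices 1 cm apart, grid spacing 0.5 cm (a_p = 0.25 cm^2).
counts = [8, 15, 22, 28, 31, 30, 26, 20, 12, 5]
print(f"estimated volume ~ {cavalieri_volume(counts, 1.0, 0.25):.1f} cm^3")
```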
A needs assessment study of undergraduate surgical education
Birch, Daniel W.; Mavis, Brian
2006-01-01
Background There is compelling evidence to suggest that undergraduate surgical education may fail to provide appropriate instruction in basic surgical principles and skills. Methods We completed a descriptive, cross-sectional survey of stakeholder groups (surgeon educators and recent medical school graduates) to assess the perceived relevance and learning for surgical principles, surgical skills, teaching environments and teaching interventions. Results Graduates returned 123 surveys, and surgeons returned 55 surveys (response rates: graduates 46%, surgeons 45%). Both graduates and surgeons considered 8 of 10 surgical principles highly relevant to current medical practice. Despite this, the surgical clerkship seemed to enable proficiency in far fewer principles (graduates: 3, surgeons: 5). Graduates believed that each of the 15 basic surgical skills is relevant to current medical practice, whereas surgeons indicated that more invasive skills (i.e., central venous lines, thoracentesis) are much less relevant. Graduates and surgeons indicated that medical students will achieve proficiency in only 3 basic skills areas as a result of the surgical clerkship. Graduates and surgeons considered each surgical specialty relevant and effective in undergraduate surgical education. According to graduates and surgeons, the most effective teaching environments are outpatient settings (emergency department, outpatient clinics). Graduates and surgeons ranked resident teaching as the most effective teaching intervention, and traditional interventions (grand rounds, formal rounds) and electronic resources (computer-assisted learning, web-based learning) were ranked the least effective. Conclusions In this study, we assessed the learning needs of contemporary medical students in surgery. The results suggest that responding graduates and surgeons believe that the level of proficiency achieved in surgical principles and basic skills through undergraduate surgical education is much less than anticipated. Outpatient settings and resident teaching are believed to provide the most effective teaching for medical students. Information from this study has important implications for Canadian undergraduate surgery programs and curricula. PMID:17152571
The "Biologically-Inspired Computing" Column
NASA Technical Reports Server (NTRS)
Hinchey, Mike
2007-01-01
Self-managing systems, whether viewed from the perspective of Autonomic Computing, or from that of another initiative, offers a holistic vision for the development and evolution of biologically-inspired computer-based systems. It aims to bring new levels of automation and dependability to systems, while simultaneously hiding their complexity and reducing costs. A case can certainly be made that all computer-based systems should exhibit autonomic properties [6], and we envisage greater interest in, and uptake of, autonomic principles in future system development.
Demonstration of measurement-only blind quantum computing
NASA Astrophysics Data System (ADS)
Greganti, Chiara; Roehsner, Marie-Christine; Barz, Stefanie; Morimae, Tomoyuki; Walther, Philip
2016-01-01
Blind quantum computing allows for secure cloud networks of quasi-classical clients and a fully fledged quantum server. Recently, a new protocol has been proposed, which requires a client to perform only measurements. We demonstrate a proof-of-principle implementation of this measurement-only blind quantum computing, exploiting a photonic setup to generate four-qubit cluster states for computation and verification. Feasible technological requirements for the client and the device-independent blindness make this scheme very applicable for future secure quantum networks.
NASA Technical Reports Server (NTRS)
Stroke, G. W.
1972-01-01
Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.
Physical Premium Principle: A New Way for Insurance Pricing
NASA Astrophysics Data System (ADS)
Darooneh, Amir H.
2005-03-01
In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company is assumed to be distributed according to canonical ensemble theory. The Esscher premium principle appeared as a special case of this approach. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.
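As a point of reference for the special case mentioned above, the sketch below estimates the Esscher premium E[X e^{hX}] / E[e^{hX}] from a sample of hypothetical claim sizes; the loss distribution and values of h are assumptions, and the authors' physical premium principle itself is not implemented here.

```python
import numpy as np

def esscher_premium(losses, h):
    """Esscher premium: E[X * exp(h*X)] / E[exp(h*X)], estimated from a sample."""
    x = np.asarray(losses, dtype=float)
    w = np.exp(h * (x - x.max()))        # shift the exponent for numerical stability
    return float((x * w).sum() / w.sum())

rng = np.random.default_rng(42)
losses = rng.exponential(scale=1000.0, size=10_000)   # hypothetical claim sizes
for h in (0.0, 1e-4, 5e-4):
    print(f"h = {h:g}: premium ~ {esscher_premium(losses, h):.1f}")
# h = 0 reproduces the pure net premium E[X]; larger h loads the premium
# toward the heavier losses.
```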
ERIC Educational Resources Information Center
Her Many Horses, Ian
2016-01-01
The world, and especially our own country, is in dire need of a larger and more diverse population of computer scientists. While many organizations have approached this problem of too few computer scientists in various ways, a promising, and I believe necessary, path is to expose elementary students to authentic practices of the discipline.…
Compensated Crystal Assemblies for Type-II Entangled Photon Generation in Quantum Cluster States
2010-03-01
in quantum computational architectures that operate by principles entirely distinct from any based on classical physics. In contrast with other... of the SPDC spectral function, to enable applications in regions that have not been accessible with other methods. Quantum Information and Computation... [1] Eliminating frequency and space-time correlations in multi-photon states, Phys. Rev. A 64, 063815 (2001); [2] A. Zeilinger et al., Experimental one-way computing...
ERIC Educational Resources Information Center
Wielard, Valerie Michelle
2013-01-01
The primary objective of this project was to learn what effect a computer program would have on academic achievement and attitude toward science of college students enrolled in a biology class for non-science majors. It became apparent that the instructor also had an effect on attitudes toward science. The researcher designed a computer program,…
Connectionist Models for Intelligent Computation
1989-07-26
Personal author(s): H. H. Chen and Y. C. Lee. Project title: Connectionist Models for Intelligent Computation. Contract/Grant No.: AFOSR-87-0388. Contract/Grant period of performance: Sept. 1... underlying principles, architectures and applications of artificial neural networks for intelligent computations. Approach: We use both numerical...
Multi-threading: A new dimension to massively parallel scientific computation
NASA Astrophysics Data System (ADS)
Nielsen, Ida M. B.; Janssen, Curtis L.
2000-06-01
Multi-threading is becoming widely available for Unix-like operating systems, and the application of multi-threading opens new ways for performing parallel computations with greater efficiency. We here briefly discuss the principles of multi-threading and illustrate the application of multi-threading for a massively parallel direct four-index transformation of electron repulsion integrals. Finally, other potential applications of multi-threading in scientific computing are outlined.
Electronic damping of anharmonic adsorbate vibrations at metallic surfaces
NASA Astrophysics Data System (ADS)
Tremblay, Jean Christophe; Monturet, Serge; Saalfrank, Peter
2010-03-01
The nonadiabatic coupling of an adsorbate close to a metallic surface leads to electronic damping of adsorbate vibrations and line broadening in vibrational spectroscopy. Here, a perturbative treatment of the electronic contribution to the lifetime broadening serves as a building block for a new approach, in which anharmonic vibrational transition rates are calculated from a position-dependent coupling function. Different models for the coupling function will be tested, all related to embedding theory. The first two are models based on a scattering approach with (i) a jellium-type and (ii) a density functional theory based embedding density, respectively. In a third variant a further refined model is used for the embedding density, and a semiempirical approach is taken in which a scaling factor is chosen to match harmonic, single-site, first-principles transition rates, obtained from periodic density functional theory. For the example of hydrogen atoms on (adsorption) and below (subsurface absorption) a Pd(111) surface, lifetimes of and transition rates between vibrational levels are computed. The transition rates emerging from different models serve as input for the selective subsurface adsorption of hydrogen in palladium starting from an adsorption site, by using sequences of infrared laser pulses in a laser distillation scheme.
NASA Astrophysics Data System (ADS)
Lee, Myeong H.; Dunietz, Barry D.; Geva, Eitan
2014-03-01
We present a methodology to obtain the photo-induced electron transfer rate constant in organic photovoltaic (OPV) materials within the framework of Fermi's golden rule, using inputs obtained from first-principles electronic structure calculations. Within this approach, the nuclear vibrational modes are treated quantum mechanically, and a short-time approximation is avoided, in contrast to the classical Marcus theory where these modes are treated classically within the high-temperature and short-time limits. We demonstrate our methodology on the boron subphthalocyanine chloride/C60 OPV system to determine the rate constants of electron transfer and electron recombination processes upon photo-excitation. We consider two representative donor/acceptor interface configurations to investigate the effect of interface configuration on the charge transfer characteristics of OPV materials. In addition, we determine the time scales of excited-state populations by employing a master equation after obtaining the rate constants for all accessible electronic transitions. This work is pursued as part of the Center for Solar and Thermal Energy Conversion, an Energy Frontier Research Center funded by the US Department of Energy Office of Science, Office of Basic Energy Sciences under Award No. DE-SC0000957.
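For comparison with the classical limit mentioned above, the sketch below evaluates the standard Marcus rate expression k = (2*pi/hbar) |H_DA|^2 (4*pi*lambda*k_B*T)^(-1/2) exp[-(dG + lambda)^2 / (4*lambda*k_B*T)] with hypothetical coupling, driving force, and reorganization energy; it is not the Fermi's-golden-rule methodology of the paper, which treats the nuclear modes quantum mechanically.

```python
import numpy as np

HBAR = 6.582119569e-16   # eV s
K_B  = 8.617333262e-5    # eV / K

def marcus_rate(H_da, dG, lam, T=300.0):
    """Classical (high-temperature) Marcus rate constant in 1/s.
    H_da: electronic coupling (eV), dG: driving force (eV),
    lam: reorganization energy (eV)."""
    prefactor = (2.0 * np.pi / HBAR) * H_da**2
    fc = np.exp(-(dG + lam) ** 2 / (4.0 * lam * K_B * T)) / np.sqrt(
        4.0 * np.pi * lam * K_B * T)
    return prefactor * fc

# Hypothetical donor/acceptor parameters for charge transfer vs recombination.
print(f"charge transfer      ~ {marcus_rate(H_da=0.01, dG=-0.30, lam=0.30):.2e} 1/s")
print(f"charge recombination ~ {marcus_rate(H_da=0.005, dG=-1.50, lam=0.30):.2e} 1/s")
# In the deep inverted region the classical expression strongly suppresses the
# rate; a quantum treatment of the nuclear modes can relax this suppression.
```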
Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms
Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan
2017-01-01
Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interfaces (BCIs). However, the number of MI states that can be classified is limited, and classification accuracy rates are low because of the nonlinear and nonstationary characteristics of the signals. This study proposes a novel MI pattern recognition system based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact removal. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements of the left hand, right foot, and right shoulder, and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex-algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909
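As a rough illustration of how such a pipeline fits together, the sketch below chains band-pass filtering, a plain two-class common spatial pattern (CSP) projection, log-variance features, and a soft-voting KNN+SVM classifier on synthetic data. It is only a hedged sketch: it omits the paper's CCA plus wavelet-threshold EOG removal and the regularized, generic-learning form of CSP, and every array shape, filter band, and parameter here is an assumption rather than the authors' configuration.

```python
# Minimal sketch (not the paper's R-CSP/CCA-WTD pipeline): band-pass filtering,
# a plain two-class CSP projection, log-variance features, and a KNN+SVM vote.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

def bandpass(trials, lo=8.0, hi=30.0, fs=250.0):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)           # trials: (n_trials, n_ch, n_samples)

def csp_filters(x1, x2, n_pairs=3):
    cov = lambda x: np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0)
    c1, c2 = cov(x1), cov(x2)
    # Generalized eigenproblem c1 w = lambda (c1 + c2) w; the extreme eigenvectors
    # maximize the variance ratio between the two classes.
    vals, vecs = eigh(c1, c1 + c2)
    order = np.argsort(vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, pick].T                            # (2*n_pairs, n_ch)

def features(trials, w):
    proj = np.einsum("fc,ncs->nfs", w, trials)
    var = proj.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic two-class demo data (placeholders for real MI EEG epochs).
rng = np.random.default_rng(0)
x1 = rng.standard_normal((40, 22, 500))               # class "left hand"
x2 = 1.5 * rng.standard_normal((40, 22, 500))         # class "right foot"
x1, x2 = bandpass(x1), bandpass(x2)
w = csp_filters(x1, x2)
X = np.vstack([features(x1, w), features(x2, w)])
y = np.r_[np.zeros(40), np.ones(40)]

clf = VotingClassifier([("knn", KNeighborsClassifier(5)),
                        ("svm", SVC(kernel="rbf", probability=True))], voting="soft")
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```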
A first-principle calculation of the XANES spectrum of Cu2+ in water
NASA Astrophysics Data System (ADS)
La Penna, G.; Minicozzi, V.; Morante, S.; Rossi, G. C.; Stellato, F.
2015-09-01
The progress in high performance computing we are witnessing today offers the possibility of accurate electron density calculations of systems in realistic physico-chemical conditions. In this paper, we present a strategy aimed at performing a first-principle computation of the low energy part of the X-ray Absorption Spectroscopy (XAS) spectrum based on the density functional theory calculation of the electronic potential. To test its effectiveness, we apply the method to the computation of the X-ray absorption near edge structure part of the XAS spectrum in the paradigmatic, but simple case of Cu2+ in water. In order to take into account the effect of the metal site structure fluctuations in determining the experimental signal, the theoretical spectrum is evaluated as the average over the computed spectra of a statistically significant number of simulated metal site configurations. The comparison of experimental data with theoretical calculations suggests that Cu2+ lives preferentially in a square-pyramidal geometry. The remarkable success of this approach in the interpretation of XAS data makes us optimistic about the possibility of extending the computational strategy we have outlined to the more interesting case of molecules of biological relevance bound to transition metal ions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.
2016-07-26
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
NASA Technical Reports Server (NTRS)
Maxwell, E. L.
1980-01-01
The need for degree programs in remote sensing is considered. Any education program which claims to train remote sensing specialists must include expertise in the physical principles upon which remote sensing is based. These principles dictate the limits of engineering and design, computer analysis, photogrammetry, and photointerpretation. Faculty members must be hired to provide emphasis in those five areas.
First principles nickel-cadmium and nickel hydrogen spacecraft battery models
NASA Technical Reports Server (NTRS)
Timmerman, P.; Ratnakumar, B. V.; Distefano, S.
1996-01-01
The principles of Nickel-Cadmium and Nickel-Hydrogen spacecraft battery models are discussed. The Ni-Cd battery model includes a two-phase positive electrode, and its predictions are very close to actual data. The Ni-H2 battery model predictions (without the two-phase positive electrode) are not yet acceptable, even though the model is operational. Both models run on UNIX and Macintosh computers.
Christen, Markus; Ineichen, Christian; Tanner, Carmen
2014-06-17
The principles of biomedical ethics - autonomy, non-maleficence, beneficence, and justice - are of paradigmatic importance for framing ethical problems in medicine and for teaching ethics to medical students and professionals. In order to underline this significance, Tom L. Beauchamp and James F. Childress base the principles in the common morality, i.e. they claim that the principles represent basic moral values shared by all persons committed to morality and are thus grounded in human moral psychology. We empirically investigated the relationship of the principles to other moral and non-moral values that provide orientations in medicine. By way of comparison, we performed a similar analysis for the business & finance domain. We evaluated the perceived degree of "morality" of 14 values relevant to medicine (n1 = 317, students and professionals) and 14 values relevant to business & finance (n2 = 247, students and professionals). Ratings were made along four dimensions intended to characterize different aspects of morality. We found that compared to other values, the principles-related values received lower ratings across several dimensions that characterize morality. By interpreting our finding using a clustering and a network analysis approach, we suggest that the principles can be understood as "bridge values" that are connected both to moral and non-moral aspects of ethical dilemmas in medicine. We also found that the social domain (medicine vs. business & finance) influences the degree of perceived morality of values. Our results are in conflict with the common morality hypothesis of Beauchamp and Childress, which would imply domain-independent high morality ratings of the principles. Our findings support the suggestions by other scholars that the principles of biomedical ethics serve primarily as instruments in deliberated justifications, but lack grounding in a universal "common morality". We propose that the specific manner in which the principles are taught and discussed in medicine - namely by referring to conflicts requiring a balancing of principles - may partly explain why the degree of perceived "morality" of the principles is lower compared to other moral values.
Versatile all-digital time interval measuring system
NASA Astrophysics Data System (ADS)
Vyhlidal, David; Cech, Miroslav
2011-06-01
This paper describes the design and performance of a versatile all-digital time interval measuring system. The measurement method is based on an interpolation principle. In this principle the time interval is first roughly digitized by a coarse counter driven by a high stability reference clock, and the fractions between the clock periods are measured by two Time-to-Digital Converter chips TDC-GPX manufactured by Acam messelectronic. Control circuits allow programmable customization of the system to satisfy many applications such as laser range finding, event counting, or time-of-flight measurements in various physics experiments. The system has two reference clock inputs and two independent channels for measuring start and stop events. Only one 40 MHz reference is required for the measurement. The second reference can be, for example, a 1 PPS (Pulse per Second) signal from a GPS (Global Positioning System) receiver to time tag events. Time intervals are measured using the highest resolution mode of the TDC-GPX chips. The resolution of each chip is software programmable and is PLL (Phase Locked Loop) stabilized against temperature and voltage variations. The system can achieve a timing resolution better than 15 ps rms at up to a 90 kHz repetition rate. The time interval measurement range is from 0 ps up to 1 second. The power consumption of the whole system is 18 W including an embedded computer board and an LCD (Liquid Crystal Display) screen. The embedded computer controls the whole system, collects and evaluates measurement data, and with the display provides a user interface. The system is implemented using commercially available components.
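The interpolation principle reduces to a simple piece of arithmetic: the coarse counter supplies whole clock periods, and the two fine converters supply the sub-period fractions between each event and the following clock edge. The toy computation below uses entirely hypothetical numbers and the generic relation interval = N * T_clk + fraction_start - fraction_stop; it is not taken from the paper's hardware description.

```python
# Worked toy example of the interpolation principle (all numbers hypothetical):
# a coarse counter driven by a 40 MHz clock counts whole periods between the
# clock edges following the start and stop events, and two fine time-to-digital
# converters measure the fraction from each event to its next clock edge.
T_clk = 25.0e-9            # one period of the 40 MHz reference, in seconds

coarse_counts = 123_456    # whole clock periods counted between the two edges
frac_start = 7.3e-9        # start event to the following clock edge
frac_stop = 2.1e-9         # stop event to the following clock edge

interval = coarse_counts * T_clk + frac_start - frac_stop
print(f"measured interval: {interval * 1e6:.6f} microseconds")
```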
NASA Astrophysics Data System (ADS)
Dai, Xiaoqian; Tian, Jie; Chen, Zhe
2010-03-01
Parametric images can represent both spatial distribution and quantification of the biological and physiological parameters of tracer kinetics. The linear least square (LLS) method is a well-estimated linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs) that propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, firstly dynamic PET data are properly pre-transformed to standardize noise variance as PCA is a data driven technique and can not itself separate signals from noise. Secondly, the volume-wise PCA is applied on PET data. The signals can be mostly represented by the first few principle components (PC) and the noise is left in the subsequent PCs. Then the noise-reduced data are obtained using the first few PCs by applying 'inverse PCA'. It should also be transformed back according to the pre-transformation method used in the first step to maintain the scale of the original data set. Finally, the obtained new data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal method, the proposed method can achieve high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with clinical dynamic FDG PET study.
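A stripped-down sketch of the denoise-then-fit idea is given below: truncate an SVD of the voxel-by-frame data matrix to the first few components, then fit a linear model per voxel by ordinary least squares. It is a hedged illustration only; the variance-standardizing pre-transform described in the abstract is omitted, the design matrix is a toy stand-in for a linearized compartment model, and all shapes and values are assumptions.

```python
# Minimal sketch of PCA-based denoising before a linear least-squares fit, under
# assumed shapes: dyn is (n_voxels, n_frames) dynamic data and A is the
# (n_frames, n_params) design matrix of a linearized model. The paper's
# noise-variance pre-transformation is deliberately omitted here.
import numpy as np

def pca_denoise(dyn, n_pc=3):
    mean = dyn.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(dyn - mean, full_matrices=False)
    # Keep the first few principal components (signal) and discard the rest (noise).
    return mean + (u[:, :n_pc] * s[:n_pc]) @ vt[:n_pc]

def lls_fit(dyn, A):
    # Ordinary least squares per voxel: solve A p = tac for every time-activity curve.
    p, *_ = np.linalg.lstsq(A, dyn.T, rcond=None)
    return p.T                                         # (n_voxels, n_params)

rng = np.random.default_rng(1)
n_vox, n_frames = 1000, 30
t = np.linspace(1, 60, n_frames)
A = np.column_stack([np.ones(n_frames), t])            # toy design matrix (intercept, slope)
truth = rng.uniform(0.5, 2.0, (n_vox, 2))
dyn = truth @ A.T + 0.5 * rng.standard_normal((n_vox, n_frames))

params = lls_fit(pca_denoise(dyn, n_pc=2), A)
print("mean absolute error:", np.abs(params - truth).mean())
```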
ERIC Educational Resources Information Center
Heald, M.; Allen, D.; Villa, D.; Oliver, C.
2013-01-01
This proof of principle study was designed to evaluate whether excessively high rates of social approach behaviors in children with Angelman syndrome (AS) can be modified using a multiple schedule design. Four children with AS were exposed to a multiple schedule arrangement, in which social reinforcement and extinction, cued using a novel…
Dynamic intersectoral models with power-law memory
NASA Astrophysics Data System (ADS)
Tarasova, Valentina V.; Tarasov, Vasily E.
2018-01-01
Intersectoral dynamic models with power-law memory are proposed. The equations of open and closed intersectoral models, in which the memory effects are described by the Caputo derivatives of non-integer orders, are derived. We suggest solutions of these equations, which have the form of linear combinations of the Mittag-Leffler functions and which are characterized by different effective growth rates. Examples of intersectoral dynamics with power-law memory are suggested for two sectoral cases. We formulate two principles of intersectoral dynamics with memory: the principle of changing of technological growth rates and the principle of domination change. It has been shown that in the input-output economic dynamics the effects of fading memory can change the economic growth rate and dominant behavior of economic sectors.
7 CFR 1767.13 - Departures from the prescribed RUS Uniform System of Accounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... accounting methodologies and principles that depart from the provisions herein; or (2) File with such... borrower's rates, based upon accounting methods and principles inconsistent with the provisions of this... accounting methods or principles for the borrower that are inconsistent with the provisions of this part, the...
Image improvement and three-dimensional reconstruction using holographic image processing
NASA Technical Reports Server (NTRS)
Stroke, G. W.; Halioua, M.; Thon, F.; Willasch, D. H.
1977-01-01
Holographic computing principles make possible image improvement and synthesis in many cases of current scientific and engineering interest. Examples are given for the improvement of resolution in electron microscopy and 3-D reconstruction in electron microscopy and X-ray crystallography, following an analysis of optical versus digital computing in such applications.
Demonstrating Operating System Principles via Computer Forensics Exercises
ERIC Educational Resources Information Center
Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram
2010-01-01
We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…
The Development and Deployment of a Virtual Unit Operations Laboratory
ERIC Educational Resources Information Center
Vaidyanath, Sreeram; Williams, Jason; Hilliard, Marcus; Wiesner, Theodore
2007-01-01
Computer-simulated experiments offer many benefits to engineering curricula in the areas of safety, cost, and flexibility. We report our experience in developing and deploying a computer-simulated unit operations laboratory, driven by the guiding principle of maximum fidelity to the physical lab. We find that, while the up-front investment in…
Building Fossils in the Elementary School and Writing about Them Using Computers.
ERIC Educational Resources Information Center
Schlenker, Richard M.; Yoshida, Sarah
This material describes a fossil-building activity using sea shells, chicken bones, and plaster for grade one through three students. Related process skills, vocabulary, computer principles, time requirements, and materials are listed. Two methods of building the fossils are discussed. After building the fossils, classes may be divided into pairs…
Synthetic Biology: Knowledge Accessed by Everyone (Open Sources)
ERIC Educational Resources Information Center
Sánchez Reyes, Patricia Margarita
2016-01-01
Using the principles of biology, along with engineering and with the help of computers, scientists manage to copy DNA sequences from nature and use them to create new organisms. DNA is created through engineering and computer science, managing to create life inside a laboratory. We cannot dismiss the role that synthetic biology could lead in…
ERIC Educational Resources Information Center
Henry, Mark
1979-01-01
Recounts statistical inaccuracies in an article on computer-aided instruction in economics courses on the college level. The article, published in the J. Econ. Ed (Fall 1978), erroneously placed one student in the TIPS group instead of the control group. Implications of this alteration are discussed. (DB)
Design Principles for "Thriving in Our Digital World": A High School Computer Science Course
ERIC Educational Resources Information Center
Veletsianos, George; Beth, Bradley; Lin, Calvin; Russell, Gregory
2016-01-01
"Thriving in Our Digital World" is a technology-enhanced dual enrollment course introducing high school students to computer science through project- and problem-based learning. This article describes the evolution of the course and five lessons learned during the design, development, implementation, and iteration of the course from its…
Technology Allows Engineers to Make Solid Objects from Computer Designs.
ERIC Educational Resources Information Center
Wheeler, David L.
1992-01-01
Computer operators using the technique of three-dimensional printing or rapid prototyping may soon be able to sculpt an object on the screen and within minutes, have a paper, plastic, or ceramic version of the object in hand. The process uses the principle that physical objects can be created in layers. (MSE)
Effective Computer-Aided Assessment of Mathematics; Principles, Practice and Results
ERIC Educational Resources Information Center
Greenhow, Martin
2015-01-01
This article outlines some key issues for writing effective computer-aided assessment (CAA) questions in subjects with substantial mathematical or statistical content, especially the importance of control of random parameters and the encoding of wrong methods of solution (mal-rules) commonly used by students. The pros and cons of using CAA and…
Computer Generated Optical Illusions: A Teaching and Research Tool.
ERIC Educational Resources Information Center
Bailey, Bruce; Harman, Wade
Interactive computer-generated simulations that highlight psychological principles were investigated in this study in which 33 female and 19 male undergraduate college student volunteers of median age 21 matched line and circle sizes in six variations of Ponzo's illusion. Prior to working with the illusions, data were collected based on subjects'…
Principles versus Artifacts in Computer Science Curriculum Design
ERIC Educational Resources Information Center
Machanick, Philip
2003-01-01
Computer Science is a subject which has difficulty in marketing itself. Further, pinning down a standard curriculum is difficult--there are many preferences which are hard to accommodate. This paper argues the case that part of the problem is the fact that, unlike more established disciplines, the subject does not clearly distinguish the study of…
ERIC Educational Resources Information Center
Hattie, John A. C.; Brown, Gavin T. L.
2008-01-01
National assessment systems can be enhanced with effective school-based assessment (SBA) that allows teachers to focus on improvement decisions. Modern computer-assisted technology systems are often used to deploy SBA systems. Since 2000, New Zealand has researched, developed, and deployed a national, computer-assisted SBA system. Eight major…
IP Addressing: Problem-Based Learning Approach on Computer Networks
ERIC Educational Resources Information Center
Jevremovic, Aleksandar; Shimic, Goran; Veinovic, Mladen; Ristic, Nenad
2017-01-01
The case study presented in this paper describes the pedagogical aspects and experience gathered while using an e-learning tool named IPA-PBL. Its main purpose is to provide additional motivation for adopting theoretical principles and procedures in a computer networks course. In the proposed model, the sequencing of activities of the learning…
Toward a Script Theory of Guidance in Computer-Supported Collaborative Learning
ERIC Educational Resources Information Center
Fischer, Frank; Kollar, Ingo; Stegmann, Karsten; Wecker, Christof
2013-01-01
This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its 4 types of components of internal and external scripts (play, scene, role, and scriptlet) and 7 principles, this theory addresses the question of how CSCL practices are shaped by dynamically reconfigured internal…
The Difficult Bridge between University and Industry: A Case Study in Computer Science Teaching
ERIC Educational Resources Information Center
Schilling, Jan; Klamma, Ralf
2010-01-01
Recently, there has been increasing criticism concerning academic computer science education. This paper presents a new approach based on the principles of constructivist learning design as well as the ideas of knowledge transfer in communities of practice. The course "High-tech Entrepreneurship and New Media" was introduced as an…
Computer Managed Instruction at Arthur Andersen & Company: A Status Report.
ERIC Educational Resources Information Center
Dennis, Verl E.; Gruner, Dennis
1992-01-01
Computer managed instruction (CMI) based on the principle of mastery learning has been cost effective for job training in the tax division of Arthur Andersen & Company. The CMI software system, which uses computerized pretests and posttests to monitor training, has been upgraded from microcomputer use to local area networks. Success factors at…
Theoretical Investigation of Optical Computing Based on Neural Network Models.
1987-09-29
[OCR fragments of the report's reference list survive here; recoverable entries include: "Cognitive and Psychological Computation with Neural Models," IEEE Trans. Sys., Man, and Cyber., SMC-13, p. 799, 1983; K. Nakano, "Association-A ..." (fragment); and F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, Washington, 1961.]
ERIC Educational Resources Information Center
Collentine, Karina
2009-01-01
Second language acquisition (SLA) researchers strive to understand the language and exchanges that learners generate in synchronous computer-mediated communication (SCMC). Doughty and Long (2003) advocate replacing open-ended SCMC with task-based language teaching (TBLT) design principles. Since most task-based SCMC (TB-SCMC) research addresses an…
Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.
Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William
2017-01-01
Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.
Computing with scale-invariant neural representations
NASA Astrophysics Data System (ADS)
Howard, Marc; Shankar, Karthik
The Weber-Fechner law is perhaps the oldest quantitative relationship in psychology. Consider the problem of the brain representing a function f(x). Different neurons have receptive fields that support different parts of the range, such that the ith neuron has a receptive field at xi. Weber-Fechner scaling refers to the finding that the width of the receptive field scales with xi, as does the difference between the centers of adjacent receptive fields. Weber-Fechner scaling is exponentially resource-conserving. Neurophysiological evidence suggests that neural representations obey Weber-Fechner scaling in the visual system and perhaps other systems as well. We describe an optimality constraint that is solved by Weber-Fechner scaling, providing an information-theoretic rationale for this principle of neural coding. Weber-Fechner scaling can be generated within a mathematical framework using the Laplace transform. Within this framework, simple computations such as translation, correlation and cross-correlation can be accomplished. This framework can in principle be extended to provide a general computational language for brain-inspired cognitive computation on scale-invariant representations. Supported by NSF PHY 1444389 and the BU Initiative for the Physics and Mathematics of Neural Systems.
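To make the scaling concrete, the short sketch below builds a bank of Gaussian receptive fields whose centers are geometrically spaced and whose widths grow in proportion to the centers, which is one simple way to realize Weber-Fechner scaling. It is only an illustrative construction with assumed parameter values, not the authors' Laplace-transform framework.

```python
# Illustrative sketch (not the authors' Laplace-transform framework): a bank of
# Gaussian receptive fields whose centers are geometrically spaced and whose
# widths grow in proportion to the center, i.e. Weber-Fechner scaling.
import numpy as np

def weber_fechner_bank(x_min=1.0, x_max=100.0, n_cells=20, weber_fraction=0.3):
    centers = np.geomspace(x_min, x_max, n_cells)      # constant ratio between neighbors
    widths = weber_fraction * centers                   # width scales with the center
    def responses(x):
        return np.exp(-0.5 * ((x - centers) / widths) ** 2)
    return centers, widths, responses

centers, widths, responses = weber_fechner_bank()
print("center ratios:", np.round(centers[1:] / centers[:-1], 3)[:3])   # constant
print("width/center :", np.round(widths / centers, 3)[:3])             # constant
print("responses to x=10:", np.round(responses(10.0), 2))
```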
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of least-squares estimation (LSE) for the fitting of nonlinear regression functions in 2006. Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
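For readers who want the computational flavor of an NLSE, the snippet below fits a toy exponential regression model by numerical nonlinear least squares with SciPy's trust-region solver. This is a generic illustration under an assumed model form, not the matrix-calculus derivation developed in the paper.

```python
# Generic illustration (not the paper's matrix-calculus method): computing a
# nonlinear least squares estimate for a toy exponential regression model
# y = b0 * exp(b1 * x) + error.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0, 4, 50)
beta_true = np.array([2.0, -0.7])
y = beta_true[0] * np.exp(beta_true[1] * x) + 0.05 * rng.standard_normal(x.size)

def residuals(beta, x, y):
    # Residual vector whose sum of squares the NLSE minimizes.
    return beta[0] * np.exp(beta[1] * x) - y

fit = least_squares(residuals, x0=[1.0, -0.1], args=(x, y))
print("NLSE:", fit.x)   # should be close to beta_true
```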
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Lilienfeld-Toal, Otto Anatole
2010-11-01
The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity due to the combinatorial nature of chemical compound space (set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy and computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems shall be discussed.
A novel quantum scheme for secure two-party distance computation
NASA Astrophysics Data System (ADS)
Peng, Zhen-wan; Shi, Run-hua; Zhong, Hong; Cui, Jie; Zhang, Shun
2017-12-01
Secure multiparty computational geometry is an essential field of secure multiparty computation, in which a computational geometry problem is solved without revealing any private information of the parties. Secure two-party distance computation is a primitive of secure multiparty computational geometry, which computes the distance between two points without revealing either point's location information (i.e., its coordinates). Secure two-party distance computation has potential applications with high security requirements in military, business, engineering and so on. In this paper, we present a quantum solution to secure two-party distance computation by subtly using quantum private query. Compared to related classical protocols, our quantum protocol can ensure higher security and better privacy protection because of the physical principles of quantum mechanics.
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
Chonsilapawit, Teeraporn; Rungpragayphan, Suang
2016-10-01
Because hospital pharmacists have to deal with large amounts of health information and advanced information technology in practice, they must possess adequate skills and knowledge of informatics to operate efficiently. However, most current pharmacy curricula in Thailand barely address the principles and skills concerned with informatics, and Thai pharmacists usually acquire computer literacy and informatics skills through personal-interest training and self-study. In this study, we aimed to assess the skills and knowledge of informatics and the training needs of hospital pharmacists in Thailand, in order to improve curricular and professional development. A self-assessment postal survey of 73 questions was developed and distributed to the pharmacy departments of 601 hospitals throughout the country. Practicing hospital pharmacists were requested to complete and return the survey voluntarily. Within the 3 months of the survey period, a total of 805 out of 2002 surveys were returned. On average, respondents rated themselves as competent or better in the skills of basic computer operation, the Internet, information management, and communication. Understandably, they rated themselves at novice level for information technology and database design knowledge/skills, and at advanced beginner level for project, risk, and change management skills. Respondents believed that skills and knowledge of informatics were highly necessary for their work, and definitely needed training. Thai hospital pharmacists were confident in using computers and the Internet. They realized and appreciated their lack of informatics knowledge and skills, and needed more training. Pharmacy curricula and training should be developed accordingly. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
First-principles studies of electron transport in Ga2O3
NASA Astrophysics Data System (ADS)
Kang, Youngho; Krishnaswamy, Karthik; Peelaers, Hartwin; van de Walle, Chris G.
Ga2O3 is a wide-gap semiconductor with a monoclinic crystal structure and a band gap of 4.8 eV. Its high carrier mobility and large band gap have attracted a lot of attention for use in high power electronics and transparent conductors. Despite its potential for adoption in these applications, an understanding of its carrier transport properties is still lacking. In this study we use first-principles calculations to analyze and compute the electron scattering rates in Ga2O3. Scattering due to ionized impurities and polar longitudinal-optical (LO) phonons is taken into account. We find that the electron mobility is nearly isotropic, despite the low-symmetry monoclinic structure of Ga2O3. At low carrier densities (~10^17 cm^-3), the mobility is limited by LO phonon scattering. Scattering by ionized impurities becomes increasingly important at higher carrier densities. This type of scattering is enhanced when compensating native point defects are present; in particular, gallium vacancies, which are triply negatively charged, can have a strong effect on mobility. These effects explain the downturn in mobility observed in experiments at high carrier densities. This work was supported by ARO and NSF.
Rate-distortion theory and human perception.
Sims, Chris R
2016-07-01
The fundamental goal of perception is to aid in the achievement of behavioral objectives. This requires extracting and communicating useful information from noisy and uncertain sensory signals. At the same time, given the complexity of sensory information and the limitations of biological information processing, it is necessary that some information must be lost or discarded in the act of perception. Under these circumstances, what constitutes an 'optimal' perceptual system? This paper describes the mathematical framework of rate-distortion theory as the optimal solution to the problem of minimizing the costs of perceptual error subject to strong constraints on the ability to communicate or transmit information. Rate-distortion theory offers a general and principled theoretical framework for developing computational-level models of human perception (Marr, 1982). Models developed in this framework are capable of producing quantitatively precise explanations for human perceptual performance, while yielding new insights regarding the nature and goals of perception. This paper demonstrates the application of rate-distortion theory to two benchmark domains where capacity limits are especially salient in human perception: discrete categorization of stimuli (also known as absolute identification) and visual working memory. A software package written for the R statistical programming language is described that aids in the development of models based on rate-distortion theory. Copyright © 2016 The Author. Published by Elsevier B.V. All rights reserved.
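As a concrete anchor for what "trading off rate against distortion" means computationally, the sketch below runs the classical Blahut-Arimoto iteration to trace points on a rate-distortion curve for a small discrete source with Hamming distortion. The source, distortion measure, and trade-off parameters are illustrative choices, not anything from the paper or its R package.

```python
# Sketch of the classical Blahut-Arimoto iteration for points on the
# rate-distortion curve R(D); the source, distortion measure, and the trade-off
# parameter beta below are illustrative choices, not from the paper.
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """p_x: source distribution; d[i, j]: distortion of coding x_i as xhat_j."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])           # marginal over reconstructions
    for _ in range(n_iter):
        cond = q * np.exp(-beta * d)                     # unnormalized p(xhat | x)
        cond /= cond.sum(axis=1, keepdims=True)
        q = p_x @ cond                                   # updated reconstruction marginal
    D = np.sum(p_x[:, None] * cond * d)
    R = np.sum(p_x[:, None] * cond * np.log2(cond / q))
    return R, D

p_x = np.array([0.25, 0.25, 0.25, 0.25])                 # uniform 4-letter source
d = 1.0 - np.eye(4)                                       # Hamming distortion
for beta in (0.5, 2.0, 8.0):
    R, D = blahut_arimoto(p_x, d, beta)
    print(f"beta={beta:4.1f}  rate={R:.3f} bits  distortion={D:.3f}")
```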
QUANTUM COMPUTING: Quantum Entangled Bits Step Closer to IT.
Zeilinger, A
2000-07-21
In contrast to today's computers, quantum computers and information technologies may in future be able to store and transmit information not only in the state "0" or "1," but also in superpositions of the two; information will then be stored and transmitted in entangled quantum states. Zeilinger discusses recent advances toward using this principle for quantum cryptography and highlights studies into the entanglement (or controlled superposition) of several photons, atoms, or ions.
Determination of stream reaeration coefficients by use of tracers
Kilpatrick, F.A.; Rathbun, R.E.; Yotsukura, Nobuhiro; Parker, G.W.; DeLong, L.L.
1989-01-01
Stream reaeration is the physical absorption of oxygen from the atmosphere by a flowing stream. This is the primary process by which a stream replenishes the oxygen consumed in the biodegradation of organic wastes. Prior to 1965, reaeration rate coefficients could be estimated only by indirect methods. In 1965, a direct method of measuring stream reaeration coefficients was developed whereby a radioactive tracer gas was injected into a stream, the principle being that the tracer gas would be desorbed from the stream inversely to how oxygen would be absorbed. The technique has since been modified by substituting hydrocarbon gases for the radioactive tracer gas. This manual describes the slug-injection and constant-rate-injection methods of measuring gas-tracer desorption. Emphasis is on the use of rhodamine WT dye as a relatively conservative tracer and propane as the nonconservative gas tracer, on planning field tests, on methods of injection, sampling, and analysis, and on techniques for computing desorption and reaeration coefficients.
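To show the kind of arithmetic such a computation involves, the snippet below works a hypothetical example assuming the usual first-order loss relation for the gas tracer between two sampling stations, with gas concentrations normalized by the conservative dye to correct for dilution. The field values and the normalization step are assumptions for illustration, not data or exact procedures from the manual.

```python
# Hypothetical worked example, assuming a first-order relation for the tracer
# gas between an upstream and a downstream station:
#   K_gas = ln(C_up / C_down) / travel_time,
# with gas concentrations first normalized by the conservative dye tracer.
import math

# Hypothetical field values (not from the manual).
propane_up, dye_up = 8.0, 4.0       # micrograms per liter at the upstream station
propane_dn, dye_dn = 3.0, 3.2       # micrograms per liter at the downstream station
travel_time_days = 0.15             # dye-cloud travel time between stations

c_up = propane_up / dye_up          # dye-normalized gas concentrations
c_dn = propane_dn / dye_dn
k_gas = math.log(c_up / c_dn) / travel_time_days
print(f"propane desorption coefficient: {k_gas:.2f} per day")
# A reaeration coefficient would follow by multiplying K_gas by an empirically
# determined oxygen-to-tracer-gas ratio for the chosen gas.
```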
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and is successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transferred into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Zhu, C.; Rimstidt, J. D.; Liu, Z.; Yuan, H.
2016-12-01
The principle of detailed balance (PDB) has been a cornerstone for irreversible thermodynamics and chemical kinetics for a long time, and its wide application in geochemistry has mostly been implicit and without experimental testing of its applicability. Nevertheless, many extrapolations based on PDB without experimental validation have far reaching impacts on society's mega environmental enterprises. Here we report an isotope doping method that independently measures simultaneous dissolution and precipitation rates and can test this principle. The technique reacts a solution enriched in a rare isotope of an element with a solid having natural isotopic abundances (Beck et al., 1992; Gaillardet, 2008; Gruber et al., 2013). Dissolution and precipitation rates are found from the changing isotopic ratios. Our quartz experiment doped with 29Si showed that the equilibrium dissolution rate remains unchanged at all degrees of undersaturation. We recommend this approach to test the validity of using the detailed balance relationship in rate equations for other substances.
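The separation of gross rates from isotope data can be made concrete with a simple mass balance: dissolution adds silicon at the solid's isotope ratio while precipitation removes silicon at the solution's ratio, so the measured rates of change of the two isotopes in solution give two equations in the two unknown rates. The sketch below is only an illustration of that bookkeeping with hypothetical numbers; it is not the authors' data-reduction procedure.

```python
# Illustrative mass-balance sketch, not the authors' data reduction: with a
# solution spiked in 29Si reacting with a solid of natural isotopic abundance,
# dissolution adds Si at the solid's isotope ratio and precipitation removes Si
# at the solution's ratio, so the measured changes of 28Si and 29Si in solution
# determine both gross rates.
import numpy as np

f28_solid, f29_solid = 0.922, 0.047        # approximate natural abundances of Si
f28_soln, f29_soln = 0.10, 0.90            # hypothetical spiked-solution fractions

# Hypothetical measured rates of change in solution (mol per kg water per second).
d28_dt, d29_dt = 2.0e-10, -1.5e-10

# d[28]/dt = r_diss * f28_solid - r_prec * f28_soln
# d[29]/dt = r_diss * f29_solid - r_prec * f29_soln
A = np.array([[f28_solid, -f28_soln],
              [f29_solid, -f29_soln]])
r_diss, r_prec = np.linalg.solve(A, [d28_dt, d29_dt])
print(f"gross dissolution rate:   {r_diss:.3e}")
print(f"gross precipitation rate: {r_prec:.3e}")
```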
Toward Computational Design of High-Efficiency Photovoltaics from First-Principles
2016-08-15
[Extraction fragments of the report's text survive here; the recoverable pieces are: a cited publication on the "... dependence of exciton diffusion in conjugated small molecules," Applied Physics Letters (2014), doi: 10.1063/1.4871303 (Guangfen Wu, Zi Li, Xu ...); a first-principles approach based on time-dependent density functional theory (TDDFT) to describe exciton states, including energy levels and many-body wave ...; and the statement that "... depends more sensitively on the dimension and crystallinity of the acceptor parallel to the interface than normal to the interface."]
Astrobiology for the 21st Century
NASA Astrophysics Data System (ADS)
Oliveira, C.
2008-02-01
We live in a scientific world. Science is all around us. We take scientific principles for granted every time we use a piece of technological apparatus, such as a car, a computer, or a cellphone. In today's world, citizens frequently have to make decisions that require them to have some basic scientific knowledge. To be a contributing citizen in a modern democracy, a person needs to understand the general principles of science.
Dynamic Routing for Delay-Tolerant Networking in Space Flight Operations
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2008-01-01
Computational self-sufficiency - the making of communication decisions on the basis of locally available information that is already in place, rather than on the basis of information residing at other entities - is a fundamental principle of Delay-Tolerant Networking. Contact Graph Routing is an attempt to apply this principle to the problem of dynamic routing in an interplanetary DTN. Testing continues, but preliminary results are promising.
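The flavor of routing from locally held information can be illustrated by a search over a stored contact plan for the route with the earliest arrival time. The sketch below is a deliberately simplified Dijkstra-style search and is not the actual Contact Graph Routing algorithm or its implementation; the contact tuples, node names, and one-way light times are all hypothetical.

```python
# Simplified illustration of routing from a locally stored contact plan (not the
# actual Contact Graph Routing algorithm): a Dijkstra-style search for the route
# with the earliest arrival time over scheduled contacts. All contacts and the
# one-way light times (owlt) below are hypothetical.
import heapq

def earliest_arrival(contacts, source, dest, t_start):
    best = {source: t_start}
    heap = [(t_start, source, [])]
    while heap:
        t, node, route = heapq.heappop(heap)
        if node == dest:
            return t, route
        if t > best.get(node, float("inf")):
            continue
        for frm, to, start, end, owlt in contacts:
            if frm != node or end <= t:
                continue                       # wrong node, or contact already over
            arrive = max(t, start) + owlt      # wait for the contact, then transmit
            if arrive < best.get(to, float("inf")):
                best[to] = arrive
                heapq.heappush(heap, (arrive, to, route + [(frm, to)]))
    return None

# (from, to, start, end, one-way light time), all times in seconds.
plan = [("lander", "orbiter", 100, 400, 2),
        ("orbiter", "relay", 300, 900, 600),
        ("relay", "earth", 1000, 2000, 5),
        ("orbiter", "earth", 1500, 1600, 700)]
print(earliest_arrival(plan, "lander", "earth", t_start=0))
```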
NASA Technical Reports Server (NTRS)
Clarke, R.; Lintereur, L.; Bahm, C.
2016-01-01
A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes are much of the backbone of this work, however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.
Efficient and Effective Change Principles in Active Videogames
Fenner, Ashley A.; Howie, Erin K.; Feltz, Deborah L.; Gray, Cindy M.; Lu, Amy Shirong; Mueller, Florian “Floyd”; Simons, Monique; Barnett, Lisa M.
2015-01-01
Active videogames have the potential to enhance population levels of physical activity but have not been successful in achieving this aim to date. This article considers a range of principles that may be important to the design of effective and efficient active videogames from diverse discipline areas, including behavioral sciences (health behavior change, motor learning, and serious games), business production (marketing and sales), and technology engineering and design (human–computer interaction/ergonomics and flow). Both direct and indirect pathways to impact on population levels of habitual physical activity are proposed, along with the concept of a game use lifecycle. Examples of current active and sedentary electronic games are used to understand how such principles may be applied. Furthermore, limitations of the current usage of theoretical principles are discussed. A suggested list of principles for best practice in active videogame design is proposed along with suggested research ideas to inform practice to enhance physical activity. PMID:26181680
The simplicity principle in perception and cognition
Feldman, Jacob
2016-01-01
The simplicity principle, traditionally referred to as Occam’s razor, is the idea that simpler explanations of observations should be preferred to more complex ones. In recent decades the principle has been clarified via the incorporation of modern notions of computation and probability, allowing a more precise understanding of how exactly complexity minimization facilitates inference. The simplicity principle has found many applications in modern cognitive science, in contexts as diverse as perception, categorization, reasoning, and neuroscience. In all these areas, the common idea is that the mind seeks the simplest available interpretation of observations— or, more precisely, that it balances a bias towards simplicity with a somewhat opposed constraint to choose models consistent with perceptual or cognitive observations. This brief tutorial surveys some of the uses of the simplicity principle across cognitive science, emphasizing how complexity minimization in a number of forms has been incorporated into probabilistic models of inference. PMID:27470193
Killeen, Peter R.; Sitomer, Matthew T.
2008-01-01
Mathematical Principles of Reinforcement (MPR) is a theory of reinforcement schedules. This paper reviews the origin of the principles constituting MPR: arousal, association and constraint. Incentives invigorate responses, in particular those preceding and predicting the incentive. The process that generates an associative bond between stimuli, responses and incentives is called coupling. The combination of arousal and coupling constitutes reinforcement. Models of coupling play a central role in the evolution of the theory. The time required to respond constrains the maximum response rates, and generates a hyperbolic relation between rate of responding and rate of reinforcement. Models of control by ratio schedules are developed to illustrate the interaction of the principles. Correlations among parameters are incorporated into the structure of the models, and assumptions that were made in the original theory are refined in light of current data. PMID:12729968
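One way to see where such a hyperbola can come from, under generic assumptions and not necessarily the theory's exact notation, is sketched below: if the demanded response rate is proportional to the reinforcement rate and each response occupies a fixed duration, the realized rate is bounded by a hyperbola in the reinforcement rate.

```latex
% Illustrative derivation under generic assumptions, not necessarily MPR's exact
% equations: demanded rate b_0 = a r (proportional to reinforcement rate r), and
% each response occupies \delta seconds, so only a fraction (1 - b\delta) of each
% second remains free for initiating further responses.
\[
  b \;=\; a\,r\,(1 - b\,\delta)
  \quad\Longrightarrow\quad
  b \;=\; \frac{a\,r}{1 + a\,r\,\delta},
  \qquad b \to \tfrac{1}{\delta}\ \text{as}\ r \to \infty .
\]
```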
An efficient formulation of robot arm dynamics for control and computer simulation
NASA Astrophysics Data System (ADS)
Lee, C. S. G.; Nigam, R.
This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.
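For orientation, Lagrangian and d'Alembert-based formulations of manipulator dynamics generically yield a closed-form second-order system of the shape shown below; this is the textbook form, with cross-product (Coriolis/centrifugal) terms collected in C, and is not a reproduction of the paper's specific PUMA equations.

```latex
% Generic closed form produced by Lagrangian/d'Alembert formulations of
% manipulator dynamics (not the paper's specific PUMA equations): q are joint
% coordinates, M(q) the inertia matrix, C(q,\dot q)\dot q the Coriolis/centrifugal
% (cross-product) terms, g(q) gravity, and \tau the applied joint torques.
\[
  M(q)\,\ddot{q} \;+\; C(q,\dot{q})\,\dot{q} \;+\; g(q) \;=\; \tau .
\]
```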
The electromagnetic modeling of thin apertures using the finite-difference time-domain technique
NASA Technical Reports Server (NTRS)
Demarest, Kenneth R.
1987-01-01
A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.
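For readers unfamiliar with the FDTD method the abstract builds on, the sketch below shows the basic one-dimensional Yee leapfrog update with a Gaussian source, in normalized units. It is a generic illustration of the update scheme only, not the aperture-modeling code or Babinet-based technique described in the report; grid size, step count, and source parameters are arbitrary.

```python
# Generic one-dimensional FDTD (Yee leapfrog) sketch; not the aperture-modeling
# code described in the report. Normalized units with the Courant number set to 1,
# and perfectly conducting (zero-field) boundaries at both ends.
import numpy as np

n_cells, n_steps, src = 400, 400, 100
ez = np.zeros(n_cells)            # electric field
hy = np.zeros(n_cells - 1)        # magnetic field, staggered half a cell

for n in range(n_steps):
    hy += ez[1:] - ez[:-1]                               # update H from the curl of E
    ez[1:-1] += hy[1:] - hy[:-1]                         # update E from the curl of H
    ez[src] = np.exp(-0.5 * ((n - 60) / 15.0) ** 2)      # hard Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```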
Numerical Modeling of Three-Dimensional Confined Flows
NASA Technical Reports Server (NTRS)
Greywall, M. S.
1981-01-01
A three dimensional confined flow model is presented. The flow field is computed by calculating velocity and enthalpy along a set of streamlines. The finite difference equations are obtained by applying conservation principles to streamtubes constructed around the chosen streamlines. With appropriate substitutions for the body force terms, the approach computes three dimensional magnetohydrodynamic channel flows. A listing of a computer code based on this approach, written in the FORTRAN IV language, is presented. The code computes three dimensional compressible viscous flow through a rectangular duct, with the duct cross section specified along the axis.
Experimental Blind Quantum Computing for a Classical Client.
Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C; Lu, Chao-Yang; Pan, Jian-Wei
2017-08-04
To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Khramushin, Vasily
2016-02-01
The architecture of a digital computing system determines the technical foundation of a unified mathematical language for exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. The deep parallelization of the computing processes results in functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for the true reproduction of the fundamental laws of physics and continuum mechanics.
Aircraft noise source and computer programs - User's guide
NASA Technical Reports Server (NTRS)
Crowley, K. C.; Jaeger, M. A.; Meldrum, D. F.
1973-01-01
The application of computer programs for predicting the noise-time histories and noise contours for five types of aircraft is reported. The aircraft considered are: (1) turbojet, (2) turbofan, (3) turboprop, (4) V/STOL, and (5) helicopter. Three principal considerations incorporated in the design of the noise prediction program are core effectiveness, limited input, and variable output reporting.
ERIC Educational Resources Information Center
Franklin, Cindy A.
The behavior and rehearsal habits of elementary level instrumental music students were addressed by the implementation of a behavior contract and computer managed recordkeeping system. A contract was developed following the principles of the "Assertive Discipline" system. Rewards and incentives were used to reinforce positive student behavior.…
Computer Assisted Instruction: The Game "Le Choc des Multinationales."
ERIC Educational Resources Information Center
Cramer, Hazel
"Le Choc de Multinationales" is a microcomputer game for students in an upper-level commercial French couse, to be played by two opponents, one of whom may be another student or the computer itself as a direct business competitor. The game's requirements for language use and knowledge of business and economics theory and principles are moderate,…
Power Monitoring Using the Raspberry Pi
ERIC Educational Resources Information Center
Snyder, Robin M.
2014-01-01
The Raspberry Pi is a credit card size low powered compute board with Ethernet connection, HDMI video output, audio, full Linux operating system run from an SD card, and more, all for $45. With cables, SD card, etc., the cost is about $70. Originally designed to help teach computer science principles to low income children and students, the Pi has…
Conceptual Framework for Using Computers to Enhance Employee Engagement in Large Offices
ERIC Educational Resources Information Center
Gill, Rob
2010-01-01
Using computers to engage with staff members on their organization's Employer of Choice (EOC) program as part of a human resource development (HRD) framework can add real value to that organization's reputation. EOC is an evolving principle for Australian business. It reflects the value and importance organizations place on their key stakeholders,…
USDA-ARS?s Scientific Manuscript database
High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D). HRCT imaging is based on the same principles as medi...
The Effects of Learning a Computer Programming Language on the Logical Reasoning of School Children.
ERIC Educational Resources Information Center
Seidman, Robert H.
The research reported in this paper explores the syntactical and semantic link between computer programming statements and logical principles, and addresses the effects of learning a programming language on logical reasoning ability. Fifth grade students in a public school in Syracuse, New York, were randomly selected as subjects, and then…
ERIC Educational Resources Information Center
Gulson, Kalervo N.; Webb, P. Taylor
2017-01-01
Contemporary education policy involves the integration of novel forms of data and the creation of new data platforms, in addition to the infusion of business principles into school governance networks, and intensification of socio-technical relations. In this paper, we examine how "computational rationality" may be understood as…
Meeting the Needs of All Students: A Universal Design Approach to Computer-Based Testing
ERIC Educational Resources Information Center
Russell, Michael; Hoffmann, Thomas; Higgins, Jennifer
2009-01-01
Michael Russell, Thomas Hoffmann, and Jennifer Higgins describe how the principles of universal design were applied to the development of an innovative computer-based test delivery system, NimbleTools, to meet the accessibility and accommodation needs of students with a wide range of disabilities and special needs. Noting the movement to…
Evaluation of the Effectiveness of a Web-Based Learning Design for Adult Computer Science Courses
ERIC Educational Resources Information Center
Antonis, Konstantinos; Daradoumis, Thanasis; Papadakis, Spyros; Simos, Christos
2011-01-01
This paper reports on work undertaken within a pilot study concerned with the design, development, and evaluation of online computer science training courses. Drawing on recent developments in e-learning technology, these courses were structured around the principles of a learner-oriented approach for use with adult learners. The paper describes a…
Inertial Orientation Trackers with Drift Compensation
NASA Technical Reports Server (NTRS)
Foxlin, Eric M.
2008-01-01
A class of inertial-sensor systems with drift compensation has been invented for use in measuring the orientations of human heads (and perhaps other, similarly sized objects). These systems can be designed to overcome some of the limitations of prior orientation-measuring systems that are based, variously, on magnetic, optical, mechanical-linkage, and acoustical principles. The orientation signals generated by the systems of this invention could be used for diverse purposes, including controlling head-orientation-dependent virtual reality visual displays or enabling persons whose limbs are paralyzed to control machinery by means of head motions. The inventive concept admits of variations too numerous to describe here, making it necessary to limit this description to a typical system, the selected aspects of which are illustrated in the figure. A set of sensors is mounted on a bracket on a band or a cap that gently but firmly grips the wearer's head to be tracked. Among the sensors are three drift-sensitive rotation-rate sensors (e.g., integrated-circuit angular-rate-measuring gyroscopes), which put out DC voltages nominally proportional to the rates of rotation about their sensory axes. These sensors are mounted in mutually orthogonal orientations for measuring rates of rotation about the roll, pitch, and yaw axes of the wearer's head. The outputs of these rate sensors are conditioned and digitized, and the resulting data are fed to an integrator module implemented in software in a digital computer. In the integrator module, the angular-rate signals are jointly integrated by any of several established methods to obtain a set of angles that represent approximately the orientation of the head in an external, inertial coordinate system. Because some drift is always present as a component of an angular position computed by integrating the outputs of angular-rate sensors, the orientation signal is processed further in a drift-compensator software module.
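The integrate-then-correct loop described above lends itself to a very small software illustration. The Python fragment below is only a hedged sketch of the general idea, not the patented design: the 100 Hz sample rate, the complementary-filter gain, and the use of an external tilt/heading reference as the drift-free signal are assumptions introduced here for clarity.

import numpy as np

DT = 0.01      # assumed sample period (100 Hz), not specified in the abstract
ALPHA = 0.98   # assumed complementary-filter weight on the integrated estimate

def integrate_rates(angles, rates, dt=DT):
    """Propagate (roll, pitch, yaw) by simple Euler integration of the measured
    body rates; stands in for the 'established methods' of joint integration
    mentioned in the abstract."""
    return angles + rates * dt

def compensate_drift(angles, reference, alpha=ALPHA):
    """Blend the drifting integrated estimate with a slow but drift-free
    reference (for example gravity/magnetic tilt and heading angles), one
    common way to build a drift-compensator module."""
    return alpha * angles + (1.0 - alpha) * reference

# toy usage with synthetic readings
angles = np.zeros(3)                               # roll, pitch, yaw in radians
for _ in range(1000):
    gyro_rates = np.array([0.01, -0.02, 0.005])    # rad/s, stand-in for sensor output
    reference = np.zeros(3)                        # stand-in for an absolute reference
    angles = compensate_drift(integrate_rates(angles, gyro_rates), reference)
print(angles)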
Dynamic response of polyurea subjected to nanosecond rise-time stress waves
NASA Astrophysics Data System (ADS)
Youssef, George; Gupta, Vijay
2012-08-01
Shaped charges and explosively formed projectiles used in modern warfare can attain speeds as high as 30,000 ft/s. Impacts from these threats are expected to load the armor materials in the 10 to 100 ns timeframe. During this time, the material strains are quite limited but the strain rates are extremely high. To develop armors against such threats it is imperative to understand the dynamic constitutive behavior of materials in the tens-of-nanoseconds timeframe. Material behavior in this parameter space cannot be obtained by even the most sophisticated plate-impact and split-Hopkinson bar setups that exist within the high energy materials field today. This paper introduces an apparatus and a test method that are based on laser-generated stress waves to obtain such material behaviors. Although applicable to any material system, the test procedures are demonstrated on polyurea, which shows unusual dynamic properties. Thin polyurea layers were deformed using laser-generated stress waves with 1-2 ns rise times and 16 ns total duration. The total strain in the samples was less than 3%. Because of the transient nature of the stress wave, the strain rate varied throughout the deformation history of the sample. A peak value of 1.1 × 10^5 s^-1 was calculated. It was found that the stress-strain characteristics, determined from experimentally recorded incident and transmitted wave profiles, matched satisfactorily with those computed from a 2D wave mechanics simulation in which the polyurea was modeled as a linearly viscoelastic solid with constants derived from the quasi-static experiments. Thus, the test data conformed to the Time-Temperature Superposition (TTS) principle even at the extremely high strain rates of our test. This extends the previous observations of Zhao et al. (Mech. Time-Depend. Mater. 11:289-308, 2007), who showed the applicability of the TTS principle for polyurea in the linearly viscoelastic regime up to peak strain rates of 1200 s^-1.
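For reference, the time-temperature superposition principle invoked above is commonly written with a temperature-dependent shift factor a_T; the WLF expression below is the standard textbook form, quoted only as context (the constants C_1, C_2 and the reference temperature T_ref are material-specific, and the paper's own fitting procedure is not described in the abstract):

\[
E(t, T) \;=\; E\!\left(\frac{t}{a_T},\, T_{\mathrm{ref}}\right),
\qquad
\log_{10} a_T \;=\; \frac{-C_1\,(T - T_{\mathrm{ref}})}{C_2 + (T - T_{\mathrm{ref}})} .
\]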
NASA Technical Reports Server (NTRS)
Christenson, D.; Gordon, M.; Kistler, R.; Kriegler, F.; Lampert, S.; Marshall, R.; Mclaughlin, R.
1977-01-01
A third-generation, fast, low cost, multispectral recognition system (MIDAS) able to keep pace with the large quantity and high rates of data acquisition from large regions with present and projected sensors is described. The program can process a complete ERTS frame in forty seconds and provide a color map of sixteen constituent categories in a few minutes. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in the overall program is described. The system contains a midi-computer to control the various high speed processing elements in the data path, a preprocessor to condition data, and a classifier which implements an all-digital prototype multivariate Gaussian maximum likelihood or Bayesian decision algorithm. Sufficient software was developed to perform signature extraction, control the preprocessor, compute classifier coefficients, control the classifier operation, operate the color display and printer, and diagnose operation.
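The classifier element described above applies a multivariate Gaussian maximum-likelihood decision rule; the short Python sketch below shows that rule in software form only, not the MIDAS hardware pipeline. The class means, covariances and priors are assumed to come from the signature-extraction step, and the two surface categories in the toy example are placeholders.

import numpy as np

def gaussian_ml_discriminant(x, mean, cov, prior):
    """Log-discriminant of a multivariate Gaussian class model:
    g_k(x) = ln p_k - 0.5 ln|S_k| - 0.5 (x - m_k)^T S_k^{-1} (x - m_k),
    with the constant term common to all classes dropped."""
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return np.log(prior) - 0.5 * logdet - 0.5 * maha

def classify(pixel, classes):
    """Assign a multispectral pixel to the class with the largest discriminant."""
    scores = {name: gaussian_ml_discriminant(pixel, *params)
              for name, params in classes.items()}
    return max(scores, key=scores.get)

# toy example with two hypothetical surface categories
classes = {
    "water":  (np.array([0.1, 0.2]), np.eye(2) * 0.01, 0.5),
    "forest": (np.array([0.4, 0.6]), np.eye(2) * 0.02, 0.5),
}
print(classify(np.array([0.12, 0.22]), classes))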
Ergonomic nursing workstation design to prevent cumulative trauma disorders.
McHugh, M L; Schaller, P
1997-01-01
The introduction of computerized nursing information systems offers health care institutions an opportunity to take a new look at safety issues related to nursing workstation design. Industrial studies have investigated the injuries sustained by clerical workers who spend long periods of time at their computers. Cumulative trauma disorders (CTDs) are the most common injuries associated with computerized workstation use. They account for nearly 90,000 injuries each year in the United States. Typical CTDs include back pain, strain of the neck, shoulders and eyes, and carpal tunnel syndrome. As the information handling work of nurses is increasingly computerized, the incidence of computer-related injury is expected to increase. Injury rates can be reduced by ergonomic workstation design. An assessment of potential risks associated with the equipment installed should be done as part of workstation design. Risk identification is a prerequisite for avoiding injuries by designing workstations that protect human health. The ergonomic principles learned and tested on office workers are addressed and extrapolated to nursing workstation design. Specific suggestions for design of sitting and standing workstations are presented.
Designing the user interface: strategies for effective human-computer interaction
NASA Astrophysics Data System (ADS)
Shneiderman, B.
1998-03-01
In revising this popular book, Ben Shneiderman again provides a complete, current and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years' experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.
Cross-verification of the GENE and XGC codes in preparation for their coupling
NASA Astrophysics Data System (ADS)
Jenko, Frank; Merlo, Gabriele; Bhattacharjee, Amitava; Chang, Cs; Dominski, Julien; Ku, Seunghoe; Parker, Scott; Lanti, Emmanuel
2017-10-01
A high-fidelity Whole Device Model (WDM) of a magnetically confined plasma is a crucial tool for planning and optimizing the design of future fusion reactors, including ITER. Aiming at building such a tool, the two existing gyrokinetic codes GENE (Eulerian delta-f) and XGC (PIC full-f) will be coupled in the framework of the Exascale Computing Project (ECP), thus enabling first-principles kinetic WDM simulations to be carried out. In preparation for this ultimate goal, a benchmark between the two codes is carried out looking at ITG modes in the adiabatic electron limit. This verification exercise is also joined by the global Lagrangian PIC code ORB5. Linear and nonlinear comparisons have been carried out, neglecting collisions and sources for simplicity. Very good agreement is recovered for the frequency, growth rate and mode structure of the linear modes. Similarly excellent agreement is observed when comparing the evolution of the heat flux and of the background temperature profile during nonlinear simulations. Work supported by the US DOE under the Exascale Computing Project (17-SC-20-SC).
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
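As a rough picture of the nesting idea, the Python sketch below alternates one EM-style update of the dynamic images with several inexpensive inner refinements of the linear kinetic fit. It is a schematic written under simplifying assumptions (a dense system matrix, a plain gradient inner step, no corrections for randoms or attenuation), not the authors' published algorithm; all array names and shapes are illustrative.

import numpy as np

def nested_parametric_recon(y, P, B, n_outer=20, n_inner=5, eps=1e-12):
    """Schematic nested reconstruction of linear parametric images.

    y : (n_frames, n_bins) dynamic sinograms
    P : (n_bins, n_vox)    system matrix
    B : (n_frames, n_params) temporal basis functions

    Each outer pass performs one EM-style update of the dynamic images; the
    cheap inner passes then refine the linear kinetic fit x ~ B @ theta, which
    is the nesting idea described in the abstract."""
    theta = np.ones((B.shape[1], P.shape[1]))
    sens = np.maximum(P.sum(axis=0), eps)           # per-voxel sensitivity
    step = 1.0 / np.linalg.norm(B, 2) ** 2          # safe gradient step size
    for _ in range(n_outer):
        x = B @ theta                               # current dynamic images
        ratio = y / np.maximum(x @ P.T, eps)        # measured / predicted data
        x = x * (ratio @ P) / sens                  # EM-style image update
        for _ in range(n_inner):                    # nested parametric refits
            theta = theta - step * (B.T @ (B @ theta - x))
    return theta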
Sensor- and Scene-Guided Integration of TLS and Photogrammetric Point Clouds for Landslide Monitoring
NASA Astrophysics Data System (ADS)
Zieher, T.; Toschi, I.; Remondino, F.; Rutzinger, M.; Kofler, Ch.; Mejia-Aguilar, A.; Schlögel, R.
2018-05-01
Terrestrial and airborne 3D imaging sensors are well-suited data acquisition systems for the area-wide monitoring of landslide activity. State-of-the-art surveying techniques, such as terrestrial laser scanning (TLS) and photogrammetry based on unmanned aerial vehicle (UAV) imagery or terrestrial acquisitions, have advantages and limitations associated with their individual measurement principles. In this study we present an integration approach for 3D point clouds derived from these techniques, aiming at improving the topographic representation of landslide features while enabling a more accurate assessment of landslide-induced changes. Four expert-based rules, involving local morphometric features computed from eigenvectors, elevation and the agreement of the individual point clouds, are used to choose, within voxels of selectable size, which sensor's data to keep. Based on the integrated point clouds, digital surface models and shaded reliefs are computed. Using an image correlation technique, displacement vectors are finally derived from the multi-temporal shaded reliefs. All results show comparable patterns of landslide movement rates and directions. However, depending on the applied integration rule, differences in spatial coverage and correlation strength emerge.
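A compact way to picture the voxel-wise choice between the two point clouds is sketched below. The eigenvalue-based planarity feature, the voxel size and the keep-the-smoother-cloud rule are placeholders invented here for illustration; the paper's four expert-based rules are only summarized in the abstract.

import numpy as np

def planarity(points):
    """Eigenvalue-based planarity of a local point set (a smaller spread of
    the smallest eigenvalue relative to the largest indicates a more planar
    neighbourhood)."""
    if len(points) < 3:
        return 0.0
    evals = np.sort(np.linalg.eigvalsh(np.cov(points.T)))   # l1 <= l2 <= l3
    return (evals[1] - evals[0]) / max(evals[2], 1e-12)

def choose_per_voxel(tls_pts, uav_pts, voxel=1.0):
    """Keep, in each voxel, the points of the cloud with the higher planarity
    score (a stand-in for the expert-based integration rules)."""
    merged = []
    keys = {tuple(k) for k in
            np.floor(np.vstack([tls_pts, uav_pts]) / voxel).astype(int)}
    for key in keys:
        in_tls = tls_pts[np.all(np.floor(tls_pts / voxel).astype(int) == key, axis=1)]
        in_uav = uav_pts[np.all(np.floor(uav_pts / voxel).astype(int) == key, axis=1)]
        pick = in_tls if (len(in_tls) and planarity(in_tls) >= planarity(in_uav)) else in_uav
        merged.append(pick)
    return np.vstack([m for m in merged if len(m)])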
Modeling plastic deformation of post-irradiated copper micro-pillars
NASA Astrophysics Data System (ADS)
Crosby, Tamer; Po, Giacomo; Ghoniem, Nasr M.
2014-12-01
We present here an application of a fundamentally new theoretical framework for the description of the simultaneous evolution of radiation damage and plasticity that can describe both in situ and ex situ deformation of structural materials [1]. The theory is based on the variational principle of maximum entropy production rate, with constraints on dislocation climb motion that are imposed by point defect fluxes as a result of irradiation. The developed theory is implemented in a new computational code that facilitates the simulation of irradiated and unirradiated materials alike in a consistent fashion [2]. Discrete Dislocation Dynamics (DDD) computer simulations are presented here for irradiated fcc metals that address the phenomenon of dislocation channel formation in post-irradiated copper. The focus of the simulations is on the role of micro-pillar boundaries and the statistics of dislocation pinning by stacking-fault tetrahedra (SFTs) on the onset of dislocation channel and incipient surface crack formation. The simulations show that the spatial heterogeneity in the distribution of SFTs naturally leads to localized plastic deformation and incipient surface fracture of micro-pillars.
Computational materials design for energy applications
NASA Astrophysics Data System (ADS)
Ozolins, Vidvuds
2013-03-01
General adoption of sustainable energy technologies depends on the discovery and development of new high-performance materials. For instance, waste heat recovery and electricity generation via the solar thermal route require bulk thermoelectrics with a high figure of merit (ZT) and thermal stability at high-temperatures. Energy recovery applications (e.g., regenerative braking) call for the development of rapidly chargeable systems for electrical energy storage, such as electrochemical supercapacitors. Similarly, use of hydrogen as vehicular fuel depends on the ability to store hydrogen at high volumetric and gravimetric densities, as well as on the ability to extract it at ambient temperatures at sufficiently rapid rates. We will discuss how first-principles computational methods based on quantum mechanics and statistical physics can drive the understanding, improvement and prediction of new energy materials. We will cover prediction and experimental verification of new earth-abundant thermoelectrics, transition metal oxides for electrochemical supercapacitors, and kinetics of mass transport in complex metal hydrides. Research has been supported by the US Department of Energy under grant Nos. DE-SC0001342, DE-SC0001054, DE-FG02-07ER46433, and DE-FC36-08GO18136.
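For readers unfamiliar with the thermoelectric figure of merit ZT mentioned above, its standard definition is the textbook relation below (quoted for context, not a result of the cited work), with S the Seebeck coefficient, sigma the electrical conductivity, kappa the total (electronic plus lattice) thermal conductivity and T the absolute temperature:

\[
ZT \;=\; \frac{S^{2}\,\sigma\,T}{\kappa}
   \;=\; \frac{S^{2}\,\sigma\,T}{\kappa_{\mathrm{el}} + \kappa_{\mathrm{lat}}} .
\]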
Use of rhythm in acquisition of a computer-generated tracking task.
Fulop, A C; Kirby, R H; Coates, G D
1992-08-01
This research assessed whether rhythm aids acquisition of motor skills by providing cues for the timing of those skills. Rhythms were presented to participants visually or visually with auditory cues. It was hypothesized that the auditory cues would facilitate recognition and learning of the rhythms. The three timing principles of rhythms were also explored. It was hypothesized that rhythms that satisfied all three timing principles would be more beneficial in learning a skill than rhythms that did not satisfy the principles. Three groups learned three different rhythms by practicing a tracking task. After training, participants attempted to reproduce the tracks from memory. Results suggest that rhythms do help in learning motor skills but different sets of timing principles explain perception of rhythm in different modalities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Tianfeng
The goal of the proposed research is to create computational flame diagnostics (CFLD) that are rigorous numerical algorithms for systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principle simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable for 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction etc., and the rate controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.
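At its core, CEMA inspects the eigenvalues of the Jacobian of the chemical source term: an eigenvalue with a positive real part signals a chemical explosive mode, for example a mixture en route to ignition. The Python sketch below shows only that eigenvalue test; the Jacobian routine and the toy matrix are placeholders, and the full method also projects the mode onto species and reactions, which is omitted here.

import numpy as np

def chemical_explosive_mode(jacobian):
    """Return the eigenvalue of the chemical Jacobian with the largest real
    part and a flag marking an explosive (pre-ignition) state.

    jacobian : (n, n) Jacobian of the chemical source term d(omega)/d(y),
               evaluated at the local thermochemical state (in practice it
               would come from the chemistry library, not be hand-written)."""
    eigvals = np.linalg.eigvals(jacobian)
    lam = eigvals[np.argmax(eigvals.real)]
    return lam, lam.real > 0.0          # positive real part: explosive mode

# toy 2x2 Jacobian standing in for a real chemical system
lam, explosive = chemical_explosive_mode(np.array([[ 50.0,  -10.0],
                                                   [  5.0, -200.0]]))
print(lam, explosive)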
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
The current reduced-order thermal model for cryogenic propellant tanks is based on correlations built for flat plates collected in the 1950s. The use of these correlations suffers from: inaccurate geometry representation; inaccurate gravity orientation; ambiguous length scale; and lack of detailed validation. The work presented under this task uses the first-principles based Computational Fluid Dynamics (CFD) technique to compute heat transfer from the tank wall to the cryogenic fluids, and extracts and correlates the equivalent heat transfer coefficient to support the reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present prediction and experimental data has been found for flows in laminar as well as turbulent regimes. The convective heat transfer between the tank wall and the cryogenic propellant, and that between the tank wall and the ullage gas, were then simulated. The results showed that commonly used heat transfer correlations for either a vertical or a horizontal plate overpredict the heat transfer rate for the cryogenic tank, in some cases by as much as one order of magnitude. A characteristic length scale has been defined that can correlate all heat transfer coefficients for different fill levels into a single curve. This curve can be used for the reduced-order heat transfer model analysis.
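For context, the kind of flat-plate correlation the abstract finds inadequate for tanks looks like the vertical-plate expression coded below. The Churchill-Chu form is used here only as a representative example (the specific correlations compared in the study are not named in the abstract), and the fluid properties in the usage line are placeholder values roughly representative of a gas near ambient conditions.

def vertical_plate_h(delta_T, L, k, nu, alpha, beta, g=9.81, Pr=None):
    """Churchill-Chu natural-convection correlation for a vertical plate,
    shown as a representative flat-plate correlation of the type the CFD
    study found to overpredict tank-wall heat transfer.

    delta_T : wall-to-fluid temperature difference [K]
    L       : characteristic (plate) height [m]
    k, nu, alpha, beta : fluid thermal conductivity, kinematic viscosity,
              thermal diffusivity and volumetric expansion coefficient."""
    Pr = Pr if Pr is not None else nu / alpha
    Ra = g * beta * delta_T * L**3 / (nu * alpha)       # Rayleigh number
    Nu = (0.825 + 0.387 * Ra**(1/6) /
          (1 + (0.492 / Pr)**(9/16))**(8/27))**2        # Nusselt number
    return Nu * k / L                                   # h [W/(m^2 K)]

print(vertical_plate_h(delta_T=10.0, L=1.0, k=0.026, nu=1.5e-5,
                       alpha=2.1e-5, beta=1/300))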
The Internet and the menopause consultation: menopause management in the third millennium.
Cumming, Grant P; Currie, Heather
2005-09-01
The Internet was born in 1969; it was originally developed so that computers could share information on research and development in the scientific and military fields. The original Internet consisted of four university computers networked in the United States. Email became available two years later. The infant Internet initially required complex computing knowledge to be used. However, this was all to change with the development of the World Wide Web in the early 1990s, which made the Internet much more widely accessible. The Internet has since grown at a phenomenal rate and has evolved into a global communications tool. It is by nature anarchic, in that it is an unrestricted broadcast medium. Although this lack of censorship is a strength, it is also a weakness. The quality of information available on the Web is variable and discernment is required. With the growth of e-health, medicine and its allied specialties are faced with the challenges of providing their services in a novel way while maintaining the first principle of medicine, primum non nocere (first, do no harm). This provision of e-health care is in its infancy and this review explores issues arising from the use of the Internet as a medium for organizing menopausal health care in the third millennium.
Quantum Computing since Democritus
NASA Astrophysics Data System (ADS)
Aaronson, Scott
2013-03-01
1. Atoms and the void; 2. Sets; 3. Gödel, Turing, and friends; 4. Minds and machines; 5. Paleocomplexity; 6. P, NP, and friends; 7. Randomness; 8. Crypto; 9. Quantum; 10. Quantum computing; 11. Penrose; 12. Decoherence and hidden variables; 13. Proofs; 14. How big are quantum states?; 15. Skepticism of quantum computing; 16. Learning; 17. Interactive proofs and more; 18. Fun with the Anthropic Principle; 19. Free will; 20. Time travel; 21. Cosmology and complexity; 22. Ask me anything.
Biomolecular computing systems: principles, progress and potential.
Benenson, Yaakov
2012-06-12
The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.
East-West paths to unconventional computing.
Adamatzky, Andrew; Akl, Selim; Burgin, Mark; Calude, Cristian S; Costa, José Félix; Dehshibi, Mohammad Mahdi; Gunji, Yukio-Pegio; Konkoli, Zoran; MacLennan, Bruce; Marchal, Bruno; Margenstern, Maurice; Martínez, Genaro J; Mayne, Richard; Morita, Kenichi; Schumann, Andrew; Sergeyev, Yaroslav D; Sirakoulis, Georgios Ch; Stepney, Susan; Svozil, Karl; Zenil, Hector
2017-12-01
Unconventional computing is about breaking boundaries in thinking, acting and computing. Typical topics of this non-typical field include, but are not limited to, physics of computation, non-classical logics, new complexity measures, novel hardware, and mechanical, chemical and quantum computing. Unconventional computing encourages a new style of thinking while practical applications are obtained from uncovering and exploiting principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems; in particular, efficient algorithms are developed, (almost) optimal architectures are designed and working prototypes of future computing devices are manufactured. This article includes idiosyncratic accounts of 'unconventional computing' scientists reflecting on their personal experiences, what attracted them to the field, their inspirations and discoveries. Copyright © 2017 Elsevier Ltd. All rights reserved.
Quantum Computation: Entangling with the Future
NASA Technical Reports Server (NTRS)
Jiang, Zhang
2017-01-01
Commercial applications of quantum computation have become viable due to the rapid progress of the field in recent years. Efficient quantum algorithms have been discovered to cope with the most challenging real-world problems that are too hard for classical computers. Manufactured quantum hardware has reached unprecedented precision and controllability, enabling fault-tolerant quantum computation. Here, I give a brief introduction to the principles in quantum mechanics that promise its unparalleled computational power. I will discuss several important quantum algorithms that achieve exponential or polynomial speedup over any classical algorithm. Building a quantum computer is a daunting task, and I will talk about the criteria and various implementations of quantum computers. I conclude the talk with near-future commercial applications of a quantum computer.