Accounting & Computing Curriculum Guide.
ERIC Educational Resources Information Center
Avani, Nathan T.; And Others
This curriculum guide consists of materials for use in teaching a competency-based accounting and computing course that is designed to prepare students for employability in the following occupational areas: inventory control clerk, invoice clerk, payroll clerk, traffic clerk, general ledger bookkeeper, accounting clerk, account information clerk,…
Teaching Accounting with Computers.
ERIC Educational Resources Information Center
Shaoul, Jean
This paper addresses the numerous ways that computers may be used to enhance the teaching of accounting and business topics. It focuses on the pedagogical use of spreadsheet software to improve the conceptual coverage of accounting principles and practice, increase student understanding by involvement in the solution process, and reduce the amount…
Wang, Menghua
2016-05-30
To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude
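The SRF effect described here comes down to band averaging: the Rayleigh signal varies steeply across a band (roughly as the inverse fourth power of wavelength), so the SRF-weighted radiance differs from the value at the band centre. A toy numerical sketch of that bias, where the triangular SRF, the λ⁻⁴ proxy radiance, and all constants are illustrative assumptions rather than the VIIRS tables:

```python
def rayleigh_radiance(wavelength_nm):
    # Toy proxy: Rayleigh radiance scales roughly with the Rayleigh optical
    # thickness, ~ lambda^-4 (the 1e10 scale factor is arbitrary).
    return 1.0e10 * wavelength_nm ** -4.0

def band_averaged_radiance(srf, wavelengths):
    # SRF-weighted band average: sum(SRF * L) / sum(SRF).
    num = sum(s * rayleigh_radiance(w) for s, w in zip(srf, wavelengths))
    return num / sum(srf)

# Hypothetical broad band centred at 410 nm with a triangular SRF.
wavelengths = list(range(380, 441, 5))
center = 410.0
srf = [max(0.0, 1.0 - abs(w - center) / 30.0) for w in wavelengths]

l_center = rayleigh_radiance(center)                # band-centre shortcut
l_band = band_averaged_radiance(srf, wavelengths)   # SRF-weighted value
error_pct = 100.0 * (l_band - l_center) / l_center  # bias of the shortcut
```

Because λ⁻⁴ is convex, the band average exceeds the band-centre value; ignoring the SRF therefore biases the computed Rayleigh radiance, which is the kind of error the paper's correction algorithm removes.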
Vocational Accounting and Computing Programs.
ERIC Educational Resources Information Center
Avani, Nathan T.
1986-01-01
Describes an "Accounting and Computing" program in Michigan that emphasizes computerized accounting procedures. This article describes the program curriculum and duty areas (such as handling accounts receivable), presents a list of sample tasks in each duty area, and specifies components of each task. Computer equipment necessary for this program…
Vattikonda, Anirudh; Surampudi, Bapi Raju; Banerjee, Arpan; Deco, Gustavo; Roy, Dipanjan
2016-08-01
Computational modeling of the spontaneous dynamics of the whole brain provides critical insight into the spatiotemporal organization of brain dynamics at multiple resolutions and into their alteration in response to changes in brain structure (e.g. in diseased states, aging, and across individuals). Recent experimental evidence further suggests that the adverse effect of lesions is visible on spontaneous dynamics, characterized by changes in resting state functional connectivity and its graph theoretical properties (e.g. modularity). These changes originate from altered neural dynamics in individual brain areas that are otherwise poised towards a homeostatic equilibrium to maintain stable excitatory and inhibitory activity. In this work, we employ a homeostatic inhibitory mechanism that balances excitation and inhibition in the local brain areas of the entire cortex under neurological impairments such as lesions, in order to understand global functional recovery (across brain networks and individuals). Previous computational and empirical studies have demonstrated that resting state functional connectivity varies primarily with the location and specific topological characteristics of the lesion. We show that local homeostatic balance provides functional recovery by re-establishing the excitation-inhibition balance in all areas affected by the lesion. We systematically compare the extent of recovery in the primary hub areas (e.g. default mode network (DMN), medial temporal lobe, medial prefrontal cortex) as well as in other areas such as the primary motor area, supplementary motor area, and fronto-parietal and temporo-parietal networks. Our findings suggest that stability and richness similar to normal brain dynamics at rest are achievable through re-establishment of this balance. PMID:27177761
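The homeostatic idea can be caricatured far below the whole-cortex scale of the paper: a single rate unit slowly adapts its inhibitory weight so that firing returns to a target rate after a "lesion" (modeled here simply as a drop in excitatory drive). All constants are illustrative assumptions:

```python
def relu(x):
    return x if x > 0.0 else 0.0

def settle(excitation, w_inh, target, eta=0.005, steps=5000):
    """Relax one rate unit while slowly adapting its self-inhibition."""
    rate = 0.0
    for _ in range(steps):
        # Fast dynamics: leaky update toward the input-driven rate.
        rate += 0.1 * (relu(excitation - w_inh * rate) - rate)
        # Slow homeostatic rule: raise inhibition when the rate is too high.
        w_inh = max(0.0, w_inh + eta * (rate - target))
    return rate, w_inh

target = 1.0
rate0, w0 = settle(excitation=3.0, w_inh=0.5, target=target)  # "healthy"
# Lesion: excitatory drive drops; homeostasis restores the target rate.
rate1, w1 = settle(excitation=2.0, w_inh=w0, target=target)
```

Both phases settle near the same target rate but with different inhibitory weights, a single-unit analogue of the re-established excitation-inhibition balance discussed in the abstract.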
Integrating Computer Concepts into Principles of Accounting.
ERIC Educational Resources Information Center
Beck, Henry J.; Parrish, Roy James, Jr.
A package of instructional materials for an undergraduate principles of accounting course at Danville Community College was developed based upon the following assumptions: (1) the principles of accounting student does not need to be able to write computer programs; (2) computerized accounting concepts should be presented in this course; (3)…
Space shuttle configuration accounting functional design specification
NASA Technical Reports Server (NTRS)
1974-01-01
An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.
Symbolic functions from neural computation.
Smolensky, Paul
2012-07-28
Is thought computation over ideas? Turing, and many cognitive scientists since, have assumed so, and formulated computational systems in which meaningful concepts are encoded by symbols which are the objects of computation. Cognition has been carved into parts, each a function defined over such symbols. This paper reports on a research program aimed at computing these symbolic functions without computing over the symbols. Symbols are encoded as patterns of numerical activation over multiple abstract neurons, each neuron simultaneously contributing to the encoding of multiple symbols. Computation is carried out over the numerical activation values of such neurons, which individually have no conceptual meaning. This is massively parallel numerical computation operating within a continuous computational medium. The paper presents an axiomatic framework for such a computational account of cognition, including a number of formal results. Within the framework, a class of recursive symbolic functions can be computed. Formal languages defined by symbolic rewrite rules can also be specified, the subsymbolic computations producing symbolic outputs that simultaneously display central properties of both facets of human language: universal symbolic grammatical competence and statistical, imperfect performance. PMID:22711873
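The distributed encoding can be illustrated with a tiny tensor-product representation in the spirit of this framework: fillers (symbol contents) are bound to roles by outer products and superposed, so every "neuron" carries part of several symbols, yet each filler remains recoverable. The vectors below are illustrative choices, not taken from the paper:

```python
N = 8  # "neurons" per role vector

# Distributed, mutually orthogonal role vectors: every unit participates in both.
role1 = [1.0] * N
role2 = [1.0 if i % 2 == 0 else -1.0 for i in range(N)]

# Arbitrary filler patterns encoding two symbols.
filler_a = [0.5, -1.0, 0.25, 0.0, 1.0, -0.5, 0.75, 0.1]
filler_b = [-0.3, 0.8, 0.0, 1.2, -0.7, 0.4, 0.9, -1.1]

def outer(f, r):
    return [[fi * rj for rj in r] for fi in f]

def tensor_sum(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def unbind(tensor, role):
    # Project onto a role; for orthogonal roles this recovers the filler exactly.
    norm = sum(r * r for r in role)
    return [sum(row[j] * role[j] for j in range(len(role))) / norm
            for row in tensor]

# Bind each filler to its role, then superpose into one distributed pattern.
memory = tensor_sum(outer(filler_a, role1), outer(filler_b, role2))
recovered_a = unbind(memory, role1)
recovered_b = unbind(memory, role2)
```

No single entry of `memory` encodes either symbol on its own, yet unbinding with the right role recovers each filler, the sense in which symbolic functions are computed without computing over symbols.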
Assessment of the Accounting and Joint Accounting/Computer Information Systems Programs.
ERIC Educational Resources Information Center
Appiah, John; Cernigliaro, James; Davis, Jeffrey; Gordon, Millicent; Richards, Yves; Santamaria, Fernando; Siegel, Annette; Lytle, Namy; Wharton, Patrick
This document presents City University of New York LaGuardia Community College's Department of Accounting and Managerial Studies assessment of its accounting and joint accounting/computer information systems programs report, and includes the following items: (1) description of the mission and goals of the Department of Accounting and Managerial…
Computational complexity of Boolean functions
NASA Astrophysics Data System (ADS)
Korshunov, Aleksei D.
2012-02-01
Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.
A Computational Account of Bilingual Aphasia Rehabilitation
ERIC Educational Resources Information Center
Kiran, Swathi; Grasemann, Uli; Sandberg, Chaleece; Miikkulainen, Risto
2013-01-01
Current research on bilingual aphasia highlights the paucity in recommendations for optimal rehabilitation for bilingual aphasic patients (Edmonds & Kiran, 2006; Roberts & Kiran, 2007). In this paper, we have developed a computational model to simulate an English-Spanish bilingual language system in which language representations can vary by age…
Computer-Based Instruction in Accounting Using the CREATE System.
ERIC Educational Resources Information Center
Henkle, Edward B.; Robertson, Kenneth W.
The Graduate Logistics program of the United States Air Force (USAF) Institute of Technology has required that prospective students show a satisfactory level of competence in basic accounting procedures before entering the program. The purpose of this thesis was to develop accounting case problems for use with the CREATE computer system that would…
Network Coding for Function Computation
ERIC Educational Resources Information Center
Appuswamy, Rathinakumar
2011-01-01
In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…
Program Computes Thermodynamic Functions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1994-01-01
PAC91 is the latest program in the PAC (Properties and Coefficients) series. Its two principal features are (1) generation of theoretical thermodynamic functions from molecular constants and (2) least-squares fitting of these functions to empirical equations. PAC91 is written in FORTRAN 77 to be machine-independent.
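The least-squares fitting step can be sketched directly. The quadratic cp/R form and its coefficients below are invented test data, not PAC91's actual empirical equations (which use higher-order forms):

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    m = degree + 1
    # Normal equations A^T A c = A^T y for the Vandermonde matrix A.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(ata[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = (aty[r] - s) / ata[r][r]
    return coeffs

# "Theoretical" heat capacity cp/R sampled on a grid; temperature is scaled
# to units of 1000 K to keep the normal equations well conditioned.
temps = [0.3 + 0.05 * k for k in range(20)]           # 300 K .. 1250 K
cp_over_r = [3.5 + 1.0 * t - 0.2 * t * t for t in temps]
coeffs = polyfit(temps, cp_over_r, 2)
```

On data that lie exactly in the model space the fit recovers the generating coefficients, which is a useful sanity check before fitting real theoretical tables.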
Common Accounting System for Monitoring the ATLAS Distributed Computing Resources
NASA Astrophysics Data System (ADS)
Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration
2014-06-01
This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, whether generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.
PC-DYMAC: Personal Computer---DYnamic Materials ACcounting
Jackson, B.G.
1989-11-01
This manual was designed to provide complete documentation for the computer system used by the EBR-II Fuels and Materials Department, Argonne National Laboratory-West (ANL-W) for accountability of special nuclear materials (SNM). This document includes background information on the operation of the Fuel Manufacturing Facility (FMF), instructions on computer operations in correlation with production and a detailed manual for DYMAC operation. 60 figs.
Computers Can Help Student Retention in Introductory College Accounting.
ERIC Educational Resources Information Center
Price, Richard L.; Murvin, Harry J.
1992-01-01
Almost all students in a study of an integrated instructional approach indicated that using a computer and workbook was very helpful in understanding financial accounting. A related study found that students with lower reading levels benefited most from this approach, and withdrawal dropped from 10 percent to 2 percent. (JOW)
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Automatic computation of transfer functions
Atcitty, Stanley; Watson, Luke Dale
2015-04-14
Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
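A much-reduced numeric sketch of the idea: an element list (the "netlist") is stamped into a nodal admittance matrix Y(s), and solving Y(s)v = b at a complex frequency s yields the transfer function. The element-tuple format and the RC example are hypothetical; the system described above handles far more general physical domains:

```python
def solve_complex(a, b):
    """Gaussian elimination for a small complex linear system a x = b."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / a[r][r]
    return x

def transfer_function(elements, s, in_node, out_node, n_nodes):
    """Nodal analysis: drive in_node with 1 V, return V(out_node).

    elements: list of (kind, value, node_a, node_b); kind is 'R' or 'C'.
    Node 0 is ground.
    """
    unknowns = [n for n in range(1, n_nodes) if n != in_node]
    idx = {n: i for i, n in enumerate(unknowns)}
    m = len(unknowns)
    Y = [[0j] * m for _ in range(m)]
    rhs = [0j] * m
    for kind, value, a, b in elements:
        y = 1.0 / value if kind == "R" else s * value  # element admittance
        for p, q in ((a, b), (b, a)):
            if p in idx:
                Y[idx[p]][idx[p]] += y
                if q in idx:
                    Y[idx[p]][idx[q]] -= y
                elif q == in_node:
                    rhs[idx[p]] += y  # known 1 V source moved to the RHS
    v = solve_complex(Y, rhs)
    return v[idx[out_node]]

# RC low-pass: R = 1 kOhm from node 1 (input) to node 2, C = 1 uF to ground.
netlist = [("R", 1.0e3, 1, 2), ("C", 1.0e-6, 2, 0)]
h_dc = transfer_function(netlist, 0j, 1, 2, 3)
h_cut = transfer_function(netlist, 1000j, 1, 2, 3)  # omega = 1/(RC)
```

For the RC low-pass this reproduces H(s) = 1/(1 + sRC): unity gain at DC and magnitude 1/sqrt(2) at the corner frequency.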
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
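Of the sampling methods compared, the Latin hypercube family is the simplest to sketch: each axis is split into N strata and every stratum receives exactly one sample. A minimal version, without the Maximin or orthogonal-array refinements used in the study:

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """One sample per stratum in every dimension (no maximin refinement)."""
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        # Jitter each sample uniformly within its stratum of width 1/n.
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

rng = random.Random(42)
points = latin_hypercube(10, 2, rng)
```

By construction, projecting the design onto any single axis gives exactly one point per stratum, which is the stratification property that makes these designs efficient for surrogate fitting.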
On computing special functions in marine engineering
NASA Astrophysics Data System (ADS)
Constantinescu, E.; Bogdan, M.
2015-11-01
Important modeling applications in marine engineering lead to a special class of solutions of difficult differential equations with variable coefficients. To solve and implement such models (in wave theory, acoustics, hydrodynamics, electromagnetic waves, and many other engineering fields), it is necessary to compute so-called special functions: Bessel functions, modified Bessel functions, spherical Bessel functions, and Hankel functions. The aim of this paper is to develop numerical solutions in Matlab for these special functions. Taking into account the main properties of Bessel and modified Bessel functions, we briefly present analytic solutions (where possible) in the form of series. In particular, the behavior of these special functions is studied using Matlab facilities for numerical solution and plotting. Finally, the behavior of the special functions is compared, and other directions for investigating the properties of Bessel and spherical Bessel functions are pointed out. The asymptotic forms of Bessel and modified Bessel functions allow determination of important properties of these functions; the modified Bessel functions behave like decaying and growing exponentials.
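The ascending series mentioned above translate directly into code; a Python sketch (the paper itself relies on Matlab's built-in facilities):

```python
import math

def bessel_j(n, x, terms=40):
    # Ascending series: J_n(x) = sum_k (-1)^k (x/2)^(2k+n) / (k! (k+n)!).
    return sum((-1) ** k * (x / 2.0) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def bessel_i(n, x, terms=40):
    # Modified Bessel: the same series without the alternating sign,
    # hence the exponential-like growth noted in the abstract.
    return sum((x / 2.0) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))
```

The series is accurate for moderate arguments; for large arguments the asymptotic forms mentioned above take over, e.g. I0(x) is approximately e^x / sqrt(2πx).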
Computer program for the automated attendance accounting system
NASA Technical Reports Server (NTRS)
Poulson, P.; Rasmusson, C.
1971-01-01
The automated attendance accounting system (AAAS) was developed under the auspices of the Space Technology Applications Program. The task is basically the adaptation of a small digital computer, coupled with specially developed pushbutton terminals located in school classrooms and offices for the purpose of taking daily attendance, maintaining complete attendance records, and producing partial and summary reports. Especially developed for high schools, the system is intended to relieve both teachers and office personnel from the time-consuming and dreary task of recording and analyzing the myriad classroom attendance data collected throughout the semester. In addition, since many school district budgets are related to student attendance, the increase in accounting accuracy is expected to augment district income. A major component of this system is the real-time AAAS software system, which is described.
FUNCTION GENERATOR FOR ANALOGUE COMPUTERS
Skramstad, H.K.; Wright, J.H.; Taback, L.
1961-12-12
An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)
Accountability for Early Childhood Education (Assessing Global Functioning).
ERIC Educational Resources Information Center
Cassel, Russell N.
1995-01-01
Discusses the pacing of learning activity, knowledge of progress in student learning, teacher role, accountability in learning, feedback on knowledge of success, the global functioning assessment concept, and the mother surrogate. (RS)
Living through a computer voice: a personal account.
Martin, Alan; Newell, Christopher
2013-10-01
Alan Martin, the first author of this paper, has cerebral palsy and uses a voice output communication aid (VOCA) to speak, and this paper describes the personal experience of living 'through' a computer voice (or VOCA) in the form of an interview of Mr Martin conducted by Dr Newell. The interview focuses on the computerized voice output rather than other features of the VOCA. In presenting a first-hand account of the experience of actually using VOCA, the intention is that both everyday, practical issues of the technology and broader imaginative, philosophical, and sociological implications will be explored. Based upon the interview, the authors offer an informal set of design requirements and recommendations for the development of future VOCAs. PMID:23841537
Metacognition: computation, biology and function
Fleming, Stephen M.; Dolan, Raymond J.; Frith, Christopher D.
2012-01-01
Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
ERIC Educational Resources Information Center
Laing, Gregory Kenneth; Perrin, Ronald William
2012-01-01
This paper presents the findings of a field study conducted to ascertain the perceptions of first year accounting students concerning the integration of computer applications in the accounting curriculum. The results indicate that both student cohorts perceived the computer as a valuable educational tool. The use of computers to enhance the…
Genre Analysis of Tax Computation Letters: How and Why Tax Accountants Write the Way They Do
ERIC Educational Resources Information Center
Flowerdew, John; Wan, Alina
2006-01-01
This study is a genre analysis which explores the specific discourse community of tax accountants. Tax computation letters from one international accounting firm in Hong Kong were analyzed and compared. To probe deeper into the tax accounting discourse community, a group of tax accountants from the same firm was observed and questioned. The texts…
Teaching with Computers: A Cautionary Finding in an Accounting Class
ERIC Educational Resources Information Center
Jones, Stuart H.; Wright, Michael
2005-01-01
The study assesses the effects of a hypertext learning aid and GPA on performance in advanced financial accounting. Results indicate that the type of learning aid and GPA significantly affect performance. High GPA students performed better than did the low GPA students. In the study, two versions of the hypertext learning aid were utilized by two…
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2011 CFR
2011-10-01
Title 49, Transportation; Other Regulations Relating to Transportation; Operating Expenses-Equipment for railroads; § 1242.46: Computers and data processing equipment (account XX-27-46).
Connecting Neural Coding to Number Cognition: A Computational Account
ERIC Educational Resources Information Center
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
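A common ingredient in such simulations is a compressed (logarithmic) number line with Gaussian tuning curves, under which equally spaced larger numbers are harder to discriminate. A toy sketch; the population size and tuning width are arbitrary choices, not the paper's model:

```python
import math

def population(n, n_neurons=20, width=0.3):
    # Hypothetical compressed code: Gaussian tuning curves on a log axis,
    # one neuron per preferred numerosity 1..n_neurons.
    return [math.exp(-0.5 * ((math.log(n) - math.log(p)) / width) ** 2)
            for p in range(1, n_neurons + 1)]

def overlap(a, b):
    # Cosine similarity of population vectors; more overlap means the two
    # numerosities are harder to tell apart.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

near_small = overlap(population(2), population(3))  # ratio 1.5
near_large = overlap(population(8), population(9))  # ratio 1.125
```

Although 2 vs. 3 and 8 vs. 9 are both one apart, the larger pair overlaps more on the log axis, the classic ratio (size) effect in numerical estimation.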
ERIC Educational Resources Information Center
The Newsletter of the Comprehensive Center-Region VI, 1999
1999-01-01
Controversy surrounding the accountability movement is related to how the movement began in response to dissatisfaction with public schools. Opponents see it as one-sided, somewhat mean-spirited, and a threat to the professional status of teachers. Supporters argue that all other spheres of the workplace have accountability systems and that the…
ERIC Educational Resources Information Center
Lashway, Larry
1999-01-01
This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…
Sequential decisions: a computational comparison of observational and reinforcement accounts.
Mohammadi Sepahvand, Nazanin; Stöttinger, Elisabeth; Danckert, James; Anderson, Britt
2014-01-01
Right brain damaged patients show impairments in sequential decision making tasks for which healthy people show no difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment; the better the model, the better the decisions. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies rather than qualitatively different learning mechanisms. PMID:24747416
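The reinforcement-based account can be caricatured in a few lines: an epsilon-greedy action-value learner playing Rock, Paper, Scissors against a biased opponent gradually settles on the countering move. This is a toy stand-in, not the RELPH model itself; the 80% rock bias and the learning constants are arbitrary:

```python
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # key loses to value

def play(n_rounds, rng, eps=0.1, alpha=0.1):
    moves = list(BEATS)
    value = {m: 0.0 for m in moves}  # running action values
    wins = 0
    for _ in range(n_rounds):
        # Biased opponent: rock 80% of the time, otherwise uniform.
        opp = "rock" if rng.random() < 0.8 else rng.choice(["paper", "scissors"])
        # Epsilon-greedy exploration policy.
        mine = rng.choice(moves) if rng.random() < eps else max(moves, key=value.get)
        reward = 1.0 if mine == BEATS[opp] else (-1.0 if opp == BEATS[mine] else 0.0)
        value[mine] += alpha * (reward - value[mine])  # reinforcement update
        wins += reward > 0
    return wins / n_rounds, value

rng = random.Random(1)
win_rate, value = play(5000, rng)
```

The learner exploits the bias without ever representing the opponent's move statistics explicitly, which is the sense in which reinforcement learning is sub-optimal relative to a statistical learner yet sufficient to win.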
On computation of Hough functions
NASA Astrophysics Data System (ADS)
Wang, Houjun; Boyd, John P.; Akmaev, Rashid A.
2016-04-01
Hough functions are the eigenfunctions of the Laplace tidal equation governing fluid motion on a rotating sphere with a resting basic state. Several numerical methods have been used in the past. In this paper, we compare two of them: normalized associated Legendre polynomial expansion and Chebyshev collocation. Neither method is widely used, but both have advantages over the commonly used unnormalized associated Legendre polynomial expansion method, and both give comparable results. For the first method we note some details of the numerical implementation. The Chebyshev collocation method was first used for the Laplace tidal problem by Boyd (1976) and is relatively easy to use; a compact MATLAB code is provided for it. We also illustrate the importance and effect of including a parity factor in Chebyshev polynomial expansions for modes with odd zonal wave numbers.
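The parity point is easy to demonstrate in miniature: for an odd function, collocation at Chebyshev points yields even-order coefficients that vanish to rounding error, so odd modes need only half the basis. A small Python sketch of the idea (the paper's MATLAB code solves the full tidal problem; here we only expand sin(πx)):

```python
import math

def chebyshev_coeffs(f, n):
    # Collocation at the Chebyshev-Gauss points; discrete orthogonality
    # yields the coefficients of f(x) ~ c0/2 + sum_{k>=1} c_k T_k(x).
    xs = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    return [2.0 / n * sum(f(x) * math.cos(k * math.acos(x)) for x in xs)
            for k in range(n)]

coeffs = chebyshev_coeffs(lambda x: math.sin(math.pi * x), 16)
# For an odd function only odd-order terms survive: the "parity factor".
even_max = max(abs(c) for c in coeffs[0::2])
odd_max = max(abs(c) for c in coeffs[1::2])
```

Building the parity into the expansion halves the basis size for a given accuracy, which is the practical payoff noted in the abstract for modes with odd zonal wave numbers.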
Accounting Students Are Unable to Recognize the Various Types of Accounting Functions.
ERIC Educational Resources Information Center
Frank, Gary B.; And Others
1989-01-01
The authors discuss 258 undergraduate business majors' perceptions of the nature and uses of financial and managerial accounting. Perceptions were measured with Stapel Scales constructed on 11 descriptive statements. Findings indicated that students distinguish between financial and managerial accounting, but that they do not view the two as…
JENNINGS, BARBARA J.; MCALLISTER, PAULA L.
2002-04-01
In October 2000, the personnel responsible for administering the corporate computers managed by the Scientific Computing Department assembled to reengineer the process of creating and deleting users' computer accounts. Using the Carnegie Mellon Software Engineering Institute (SEI) Capability Maturity Model (CMM) for quality improvement, the team carried out the reengineering through process modeling and by defining and measuring the maturity of the processes, per SEI and CMM practices. The computers residing in the classified environment are bound by the security requirements of the Secure Classified Network (SCN) Security Plan. These security requirements delimited the scope of the project, specifically mandating validation of all user accounts on the central corporate computer systems. System administrators, in addition to their assigned responsibilities, were spending valuable hours on the tacit responsibility of tracking user accountability for user-generated data. For example, in cases where the data originator was no longer an employee, the administrators were forced to spend considerable time and effort determining the appropriate management personnel to assume ownership or disposition of the former owner's data files. In order to prevent this sort of problem from occurring and to have a defined procedure in the event of an anomaly, the computer account management procedure was thoroughly reengineered, as detailed in this document. An automated procedure is now in place that is initiated and supplied data by central corporate processes certifying the integrity, timeliness and authentication of account holders and their management. Automated scripts identify when an account is about to expire, to preempt the problem of data becoming "orphaned" without a responsible "owner" on the system. The automated account-management procedure currently operates on and provides a standard process for all of the computers maintained by the
ERIC Educational Resources Information Center
Owhoso, Vincent; Malgwi, Charles A.; Akpomi, Margaret
2014-01-01
The authors examine whether students who completed a computer-based intervention program, designed to help them develop abilities and skills in introductory accounting, later declared accounting as a major. A sample of 1,341 students participated in the study, of which 74 completed the intervention program (computer-based assisted learning [CBAL])…
Computer Games Functioning as Motivation Stimulants
ERIC Educational Resources Information Center
Lin, Grace Hui Chin; Tsai, Tony Kung Wan; Chien, Paul Shih Chieh
2011-01-01
Numerous scholars have recommended that computer games can function as influential motivation stimulants for English learning, showing benefits as learning tools (Clarke and Dede, 2007; Dede, 2009; Klopfer and Squire, 2009; Liu and Chu, 2010; Mitchell, Dede & Dunleavy, 2009). This study aimed to further test and verify the above suggestion,…
Deterministic Function Computation with Chemical Reaction Networks*
Chen, Ho-Lin; Doty, David; Soloveichik, David
2013-01-01
Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of "eventually periodic" sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f : ℕ^k → ℕ^l by a count of some molecular species, i.e., if the CRN starts with x_1, …, x_k molecules of some "input" species X_1, …, X_k, the CRN is guaranteed to converge to having f(x_1, …, x_k) molecules of the "output" species Y_1, …, Y_l. We show that a function f : ℕ^k → ℕ^l is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕ^k × ℕ^l ∣ f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ‖x‖_1). PMID:25383068
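As a minimal illustration of deterministic function computation (our sketch, using only the definitions above): the two reactions X1 → Y and X2 → Y compute the semilinear function f(x1, x2) = x1 + x2, and the final Y count is independent of the order in which reactions fire:

```python
import random

def run_crn(reactions, counts, rng=random.Random(0)):
    """Naive simulation of a well-mixed CRN: repeatedly fire a randomly
    chosen enabled reaction until no reaction can fire."""
    counts = dict(counts)
    while True:
        enabled = [(ins, outs) for ins, outs in reactions
                   if all(counts.get(s, 0) >= n for s, n in ins.items())]
        if not enabled:
            return counts
        ins, outs = rng.choice(enabled)
        for s, n in ins.items():
            counts[s] -= n
        for s, n in outs.items():
            counts[s] = counts.get(s, 0) + n

# f(x1, x2) = x1 + x2, computed by the reactions X1 -> Y and X2 -> Y.
reactions = [({"X1": 1}, {"Y": 1}), ({"X2": 1}, {"Y": 1})]
final = run_crn(reactions, {"X1": 5, "X2": 7})
print(final["Y"])  # converges to 12 no matter the firing order
```

Because every firing consumes one input molecule and produces one Y, the network must halt with Y = x1 + x2; this order-independence is exactly the "guaranteed to converge" property the abstract describes.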
The emerging discipline of Computational Functional Anatomy
Miller, Michael I.; Qiu, Anqi
2010-01-01
Computational Functional Anatomy (CFA) is the study of functional and physiological response variables in anatomical coordinates. For this we focus on two things: (i) the construction of bijections (via diffeomorphisms) between the coordinatized manifolds of human anatomy, and (ii) the transfer (group action and parallel transport) of functional information into anatomical atlases via these bijections. We review advances in the unification of the bijective comparison of anatomical submanifolds via point-sets including points, curves and surface triangulations as well as dense imagery. We examine the transfer via these bijections of functional response variables into anatomical coordinates via group action on scalars and matrices in DTI as well as parallel transport of metric information across multiple templates which preserves the inner product. PMID:19103297
New Computer Simulations of Macular Neural Functioning
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.
1994-01-01
We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method ensures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.
Efficient computation of Wigner-Eisenbud functions
NASA Astrophysics Data System (ADS)
Raffah, Bahaaudin M.; Abbott, Paul C.
2013-06-01
The R-matrix method, introduced by Wigner and Eisenbud (1947) [1], has been applied to a broad range of electron transport problems in nanoscale quantum devices. With the rapid increase in the development and modeling of nanodevices, efficient, accurate, and general computation of Wigner-Eisenbud functions is required. This paper presents the Mathematica package WignerEisenbud, which uses the Fourier discrete cosine transform to compute the Wigner-Eisenbud functions in dimensionless units for an arbitrary potential in one dimension, and in two dimensions in cylindrical coordinates.
Program summary
Program title: WignerEisenbud
Catalogue identifier: AEOU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
Distribution format: tar.gz
Programming language: Mathematica
Operating system: Any platform supporting Mathematica 7.0 and above
Keywords: Wigner-Eisenbud functions, discrete cosine transform (DCT), cylindrical nanowires
Classification: 7.3, 7.9, 4.6, 5
Nature of problem: Computing the 1D and 2D Wigner-Eisenbud functions for arbitrary potentials using the DCT.
Solution method: The R-matrix method is applied to the physical problem. Separation of variables is used for eigenfunction expansion of the 2D Wigner-Eisenbud functions. Eigenfunction computation is performed using the DCT to convert the Schrödinger equation with Neumann boundary conditions to a generalized matrix eigenproblem.
Limitations: Restricted to uniform (rectangular grid) sampling of the potential. In 1D the number of sample points, n, results in matrix computations involving n×n matrices.
Unusual features: Eigenfunction expansion using the DCT is fast and accurate. Users can specify scattering potentials using functions, or interactively using mouse input. Use of dimensionless units permits application to a
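The core of the solution method, a Schrödinger eigenproblem with Neumann boundary conditions expanded in a cosine basis, can be sketched independently of the package. This is our simplified Python/NumPy illustration (the package itself is Mathematica, and it uses the DCT for speed; here matrix elements are integrated directly by trapezoid quadrature, in dimensionless units on [0, 1]):

```python
import numpy as np

def neumann_eigs(V, nbasis=32, ngrid=2001):
    """Eigenvalues of -u'' + V(x) u = E u on [0, 1] with Neumann conditions
    u'(0) = u'(1) = 0, expanded in the orthonormal cosine basis
    phi_0 = 1, phi_n = sqrt(2) cos(n pi x)."""
    x = np.linspace(0.0, 1.0, ngrid)
    dx = x[1] - x[0]
    w = np.full(ngrid, dx)            # trapezoid quadrature weights
    w[0] = w[-1] = dx / 2.0
    phi = np.empty((nbasis, ngrid))
    phi[0] = 1.0
    for n in range(1, nbasis):
        phi[n] = np.sqrt(2.0) * np.cos(n * np.pi * x)
    H = np.diag((np.arange(nbasis) * np.pi) ** 2.0)  # kinetic term is diagonal
    Vx = V(x)
    for m in range(nbasis):           # potential matrix elements <phi_m|V|phi_n>
        for n in range(m + 1):
            vmn = np.sum(w * phi[m] * Vx * phi[n])
            H[m, n] += vmn
            if m != n:
                H[n, m] += vmn
    return np.sort(np.linalg.eigvalsh(H))

E_free = neumann_eigs(lambda x: np.zeros_like(x))
print(E_free[:3])   # free case: approximately [0, pi^2, 4*pi^2]
```

With V = 0 the Hamiltonian is diagonal in this basis and the Neumann spectrum (n·pi)^2 is recovered exactly, which is a convenient sanity check before supplying a nontrivial potential.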
Computing Balance Column Amount in Ledger Accounts. Student Manual and Instructor's Manual.
ERIC Educational Resources Information Center
McElveen, Peggy C.
Supporting performance objective 31 of the V-TECS (Vocational-Technical Education Consortium of States) Secretarial Catalog, both a set of student materials and an instructor's manual on computing the balance column amount in ledger accounts are included in this packet, which is one of a series. The student materials include a record of a…
ERIC Educational Resources Information Center
Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.
2011-01-01
Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…
ERIC Educational Resources Information Center
Lai, Ming-Ling
2008-01-01
Purpose: This study aims to assess the state of technology readiness of professional accounting students in Malaysia, to examine their level of internet self-efficacy, to assess their prior computing experience, and to explore if they are satisfied with the professional course that they are pursuing in improving their technology skills.…
Written and Computer-Mediated Accounting Communication Skills: An Employer Perspective
ERIC Educational Resources Information Center
Jones, Christopher G.
2011-01-01
Communication skills are a fundamental personal competency for a successful career in accounting. What is not so obvious is the specific written communication skill set employers look for and the extent those skills are computer mediated. Using survey research, this article explores the particular skills employers desire and their satisfaction…
Neutron monitor yield function: New improved computations
NASA Astrophysics Data System (ADS)
Mishev, A. L.; Usoskin, I. G.; Kovaltsov, G. A.
2013-06-01
A ground-based neutron monitor (NM) is a standard tool to measure cosmic ray (CR) variability near Earth, and it is crucially important to know its yield function for primary CRs. Although there are several earlier theoretically calculated yield functions, none of them agrees with experimental data of latitude surveys of sea-level NMs, thus suggesting an inconsistency. A newly computed yield function of the standard sea-level 6NM64 NM is presented here separately for primary CR protons and α-particles, the latter also representing heavier CR species. The computations have been done using the GEANT-4 PLANETOCOSMICS Monte-Carlo tool and a realistic curved atmospheric model. For the first time, an effect of the geometrical correction of the NM effective area, related to the finite lateral expansion of the CR-induced atmospheric cascade, is considered, which was neglected in previous studies. This correction slightly enhances the relative impact of higher-energy CRs (energy above 5-10 GeV/nucleon) on the NM count rate. The new computation finally resolves the long-standing problem of disagreement between the theoretically calculated spatial variability of CRs over the globe and experimental latitude surveys. The newly calculated yield function, corrected for this geometrical factor, appears fully consistent with the experimental latitude surveys of NMs performed during three consecutive solar minima in 1976-1977, 1986-1987, and 1996-1997. Thus, we provide a new yield function of the standard sea-level NM 6NM64 that is validated against experimental data.
Computer network defense through radial wave functions
NASA Astrophysics Data System (ADS)
Malloy, Ian J.
The purpose of this research is to synthesize basic and fundamental findings in quantum computing as applied to the attack and defense of conventional computer networks. The concept focuses on the use of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and implementing non-trivial mitigation for it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions, in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, understanding the logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts, in certain respects, using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling of radio wave propagation against an event with unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.
The intrinsic quasar luminosity function: Accounting for accretion disk anisotropy
DiPompeo, M. A.; Myers, A. D.; Brotherton, M. S.; Runnoe, J. C.; Green, R. F.
2014-05-20
Quasar luminosity functions are a fundamental probe of the growth and evolution of supermassive black holes. Measuring the intrinsic luminosity function is difficult in practice, due to a multitude of observational and systematic effects. As sample sizes increase and measurement errors drop, characterizing the systematic effects is becoming more important. It is well known that the continuum emission from the accretion disk of quasars is anisotropic—in part due to its disk-like structure—but current luminosity function calculations effectively assume isotropy over the range of unobscured lines of sight. Here, we provide the first steps in characterizing the effect of random quasar orientations and simple models of anisotropy on observed luminosity functions. We find that the effect of orientation is not insignificant and exceeds other potential corrections such as those from gravitational lensing of foreground structures. We argue that current observational constraints may overestimate the intrinsic luminosity function by as much as a factor of ∼2 on the bright end. This has implications for models of quasars and their role in the universe, such as quasars' contribution to cosmological backgrounds.
Computational functions in biochemical reaction networks.
Arkin, A; Ross, J
1994-01-01
In prior work we demonstrated the implementation of logic gates, sequential computers (universal Turing machines), and parallel computers by means of the kinetics of chemical reaction mechanisms. In the present article we develop this subject further by first investigating the computational properties of several enzymatic (single and multiple) reaction mechanisms: we show their steady states are analogous to either Boolean or fuzzy logic gates. Nearly perfect digital function is obtained only in the regime in which the enzymes are saturated with their substrates. With these enzymatic gates, we construct combinational chemical networks that execute a given truth-table. The dynamic range of a network's output is strongly affected by "input/output matching" conditions among the internal gate elements. We find a simple mechanism, similar to the interconversion of fructose-6-phosphate between its two bisphosphate forms (fructose-1,6-bisphosphate and fructose-2,6-bisphosphate), that functions analogously to an AND gate. When the simple model is supplanted with one in which the enzyme rate laws are derived from experimental data, the steady state of the mechanism functions as an asymmetric fuzzy aggregation operator with properties akin to a fuzzy AND gate. The qualitative behavior of the mechanism does not change when situated within a large model of glycolysis/gluconeogenesis and the TCA cycle. The mechanism, in this case, switches the pathway's mode from glycolysis to gluconeogenesis in response to chemical signals of low blood glucose (cAMP) and abundant fuel for the TCA cycle (acetyl coenzyme A). PMID:7948674
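The saturation condition for near-digital behavior can be illustrated with a toy steady-state model (hypothetical parameters; this is not the paper's fructose-6-phosphate mechanism): an output that is the product of two Michaelis-Menten saturation terms approaches its maximum only when both inputs are high, i.e., a fuzzy AND gate.

```python
def saturation(s, km):
    """Michaelis-Menten fractional saturation of an enzyme by substrate s."""
    return s / (km + s)

def enzymatic_and(a, b, vmax=1.0, ka=0.1, kb=0.1):
    """Toy steady-state output of a two-substrate step: close to vmax only
    when BOTH inputs saturate their enzyme -> fuzzy AND gate behavior."""
    return vmax * saturation(a, ka) * saturation(b, kb)

print(enzymatic_and(10.0, 10.0))  # both inputs high: output near vmax
print(enzymatic_and(10.0, 0.0))   # one input absent: output is zero
```

When inputs sit well above the half-saturation constants (a, b >> ka, kb), each saturation term is near 1 and the gate is nearly digital; near or below the constants the output grades smoothly, which is the "fuzzy" regime the abstract describes.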
Discrete Wigner functions and quantum computational speedup
Galvao, Ernesto F.
2005-04-01
Gibbons et al. [Phys. Rev. A 70, 062101 (2004)] have recently defined a class of discrete Wigner functions W to represent quantum states in a finite Hilbert space dimension d. I characterize the set C_d of states having non-negative W simultaneously in all definitions of W in this class. For d ≤ 5 I show C_d is the convex hull of stabilizer states. This supports the conjecture that negativity of W is necessary for exponential speedup in pure-state quantum computation.
Accounting for a Functional Category: German "Drohen" "to Threaten"
ERIC Educational Resources Information Center
Heine, Bernd; Miyashita, Hiroyuki
2008-01-01
In many languages there are words that behave like lexical verbs on the one hand and like functional categories expressing distinctions of tense, aspect, modality, etc. on the other. The grammatical status of such words is frequently controversial; while some authors treat them as belonging to one and the same grammatical category, others…
Computer simulation as a teaching aid in pharmacy management--Part 1: Principles of accounting.
Morrison, D J
1987-06-01
The need for pharmacists to develop management expertise through participation in formal courses is now widely acknowledged. Many schools of pharmacy lay the foundations for future management training by providing introductory courses as an integral or elective part of the undergraduate syllabus. The benefit of such courses may, however, be limited by the lack of opportunity for the student to apply the concepts and procedures in a practical working environment. Computer simulations provide a means to overcome this problem, particularly in the field of resource management. In this, the first of two articles, the use of a computer model to demonstrate basic accounting principles is described. PMID:3301875
A cognitive neurobiological account of deception: evidence from functional neuroimaging.
Spence, Sean A; Hunter, Mike D; Farrow, Tom F D; Green, Russell D; Leung, David H; Hughes, Catherine J; Ganesan, Venkatasubramanian
2004-01-01
An organism may use misinformation, knowingly (through deception) or unknowingly (as in the case of camouflage), to gain advantage in a competitive environment. From an evolutionary perspective, greater tactical deception occurs among primates closer to humans, with larger neocortices. In humans, the onset of deceptive behaviours in childhood exhibits a developmental trajectory, which may be regarded as 'normal' in the majority and deficient among a minority with certain neurodevelopmental disorders (e.g. autism). In the human adult, deception and lying exhibit features consistent with their use of 'higher' or 'executive' brain systems. Accurate detection of deception in humans may be of particular importance in forensic practice, while an understanding of its cognitive neurobiology may have implications for models of 'theory of mind' and social cognition, and societal notions of responsibility, guilt and mitigation. In recent years, functional neuroimaging techniques (especially functional magnetic resonance imaging) have been used to study deception. Though few in number, and using very different experimental protocols, studies published in the peer-reviewed literature exhibit certain consistencies. Attempted deception is associated with activation of executive brain regions (particularly prefrontal and anterior cingulate cortices), while truthful responding has not been shown to be associated with any areas of increased activation (relative to deception). Hence, truthful responding may comprise a relative 'baseline' in human cognition and communication. The subject who lies may necessarily engage 'higher' brain centres, consistent with a purpose or intention (to deceive). While the principle of executive control during deception remains plausible, its precise anatomy awaits elucidation. PMID:15590616
Computational based functional analysis of Bacillus phytases.
Verma, Anukriti; Singh, Vinay Kumar; Gaur, Smriti
2016-02-01
Phytase is an enzyme which catalyzes the total hydrolysis of phytate to less phosphorylated myo-inositol derivatives and inorganic phosphate; by digesting the otherwise indigestible phytate present in seeds and grains, it provides digestible phosphorus, calcium and other mineral nutrients. Phytases are frequently added to the feed of monogastric animals so that the bioavailability of phytic acid-bound phosphate increases, ultimately enhancing the nutritional value of diets. Bacillus phytase is well suited for use in animal feed because of its optimum pH and excellent thermal stability. The present study aims to perform an in silico comparative characterization and functional analysis of phytases from Bacillus amyloliquefaciens to explore physico-chemical properties using various bio-computational tools. All proteins are acidic and thermostable and can be used as suitable candidates in the feed industry. PMID:26672917
Functional requirements for gas characterization system computer software
Tate, D.D.
1996-01-01
This document provides the Functional Requirements for the Computer Software operating the Gas Characterization System (GCS), which monitors the combustible gases in the vapor space of selected tanks. Necessary computer functions are defined to support design, testing, operation, and change control. The GCS requires several individual computers to address the control and data acquisition functions of instruments and sensors. These computers are networked for communication, and must multi-task to accommodate operation in parallel.
Green's Function Analysis of Periodic Structures in Computational Electromagnetics
NASA Astrophysics Data System (ADS)
Van Orden, Derek
2011-12-01
Periodic structures are used widely in electromagnetic devices, including filters, waveguiding structures, and antennas. Their electromagnetic properties may be analyzed computationally by solving an integral equation, in which an unknown equivalent current distribution in a single unit cell is convolved with a periodic Green's function that accounts for the system's boundary conditions. Fast computation of the periodic Green's function is therefore essential to achieve high accuracy solutions of complicated periodic structures, including analysis of modal wave propagation and scattering from external sources. This dissertation first presents alternative spectral representations of the periodic Green's function of the Helmholtz equation for cases of linear periodic systems in 2D and 3D free space and near planarly layered media. Although there exist multiple representations of the periodic Green's function, most are not efficient in the important case where the fields are observed near the array axis. We present spectral-spatial representations for rapid calculation of the periodic Green's functions for linear periodic arrays of current sources residing in free space as well as near a planarly layered medium. They are based on the integral expansion of the periodic Green's functions in terms of the spectral parameters transverse to the array axis. These schemes are important for the rapid computation of the interaction among unit cells of a periodic array, and, by extension, the complex dispersion relations of guided waves. Extensions of this approach to planar periodic structures are discussed. With these computation tools established, we study the traveling wave properties of linear resonant arrays placed near surfaces, and examine the coupling mechanisms that lead to radiation into guided waves supported by the surface. This behavior is especially important to understand the properties of periodic structures printed on dielectric substrates, such as periodic
Computation of the lattice Green function for a dislocation
NASA Astrophysics Data System (ADS)
Tan, Anne Marie Z.; Trinkle, Dallas R.
2016-08-01
Modeling isolated dislocations is challenging due to their long-ranged strain fields. Flexible boundary condition methods capture the correct long-range strain field of a defect by coupling the defect core to an infinite harmonic bulk through the lattice Green function (LGF). To improve the accuracy and efficiency of flexible boundary condition methods, we develop a numerical method to compute the LGF specifically for a dislocation geometry; in contrast to previous methods, where the LGF was computed for the perfect bulk as an approximation for the dislocation. Our approach directly accounts for the topology of a dislocation, and the errors in the LGF computation converge rapidly for edge dislocations in a simple cubic model system as well as in BCC Fe with an empirical potential. When used within the flexible boundary condition approach, the dislocation LGF relaxes dislocation core geometries in fewer iterations than when the perfect bulk LGF is used as an approximation for the dislocation, making a flexible boundary condition approach more efficient.
An Atomistic Statistically Effective Energy Function for Computational Protein Design.
Topham, Christopher M; Barbe, Sophie; André, Isabelle
2016-08-01
Shortcomings in the definition of effective free-energy surfaces of proteins are recognized to be a major contributory factor responsible for the low success rates of existing automated methods for computational protein design (CPD). The formulation of an atomistic statistically effective energy function (SEEF) suitable for a wide range of CPD applications and its derivation from structural data extracted from protein domains and protein-ligand complexes are described here. The proposed energy function comprises nonlocal atom-based and local residue-based SEEFs, which are coupled using a novel atom connectivity number factor to scale short-range, pairwise, nonbonded atomic interaction energies and a surface-area-dependent cavity energy term. This energy function was used to derive additional SEEFs describing the unfolded-state ensemble of any given residue sequence based on computed average energies for partially or fully solvent-exposed fragments in regions of irregular structure in native proteins. Relative thermal stabilities of 97 T4 bacteriophage lysozyme mutants were predicted from calculated energy differences for folded and unfolded states with an average unsigned error (AUE) of 0.84 kcal mol(-1) when compared to experiment. To demonstrate the utility of the energy function for CPD, further validation was carried out in tests of its capacity to recover cognate protein sequences and to discriminate native and near-native protein folds, loop conformers, and small-molecule ligand binding poses from non-native benchmark decoys. Experimental ligand binding free energies for a diverse set of 80 protein complexes could be predicted with an AUE of 2.4 kcal mol(-1) using an additional energy term to account for the loss in ligand configurational entropy upon binding. The atomistic SEEF is expected to improve the accuracy of residue-based coarse-grained SEEFs currently used in CPD and to extend the range of applications of extant atom-based protein statistical
Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng
2015-01-01
Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty four third- to fifth-grade children (220 boys and 204 girls, 8.0–11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More important, hierarchical multiple regression showed that figure matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance. PMID:26441740
DeRobertis, Christopher V.; Lu, Yantian T.
2010-02-23
A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
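The validation step described above, checking a candidate identification number against every configured registry before assigning it in the target registry, can be sketched as follows (our illustration; all names and the dict-based registry model are hypothetical):

```python
def assign_unique_id(registries, candidate, target, name):
    """Assign `candidate` as the new account's ID in the `target` registry
    only if no configured registry already maps an existing account or
    group to that ID; otherwise refuse the assignment."""
    for registry in registries:          # check every registry, not just target
        if candidate in registry.values():
            raise ValueError(f"id {candidate} already in use")
    target[name] = candidate
    return candidate

# Each registry maps account name -> numeric identification number.
reg_a = {"alice": 1001, "bob": 1002}
reg_b = {"svc-backup": 2001}
assign_unique_id([reg_a, reg_b], 1003, reg_a, "carol")
print(reg_a["carol"])  # 1003
```

The key point mirrored from the patent-style description: uniqueness is enforced across all registries in the clustered environment, even though the new account is created in only one of them.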
Computer program for Bessel and Hankel functions
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Saule, Arthur V.; Rice, Edward J.; Clark, Bruce J.
1991-01-01
A set of FORTRAN subroutines for calculating Bessel and Hankel functions is presented. The routines calculate Bessel and Hankel functions of the first and second kinds, as well as their derivatives, for wide ranges of integer order and real or complex argument in single or double precision. Depending on the order and argument, one of three evaluation methods is used: the power series definition, an Airy function expansion, or an asymptotic expansion. Routines to calculate Airy functions and their derivatives are also included.
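The routines themselves are FORTRAN, but the power-series branch is simple to illustrate. Below is a minimal Python sketch of J_n(x) for small real arguments; the 30-term truncation and the function name are choices made for this illustration, not details of the library, which switches to Airy-function or asymptotic expansions where the series is unsuitable.

```python
import math

def bessel_j(n, x, terms=30):
    """J_n(x) via its power series:
    sum_k (-1)^k / (k! (k+n)!) * (x/2)^(2k+n).
    Adequate only for small |x| and modest integer order n >= 0."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (math.factorial(k) * math.factorial(k + n)) \
                 * (x / 2) ** (2 * k + n)
    return total

print(round(bessel_j(0, 1.0), 10))  # J_0(1) ≈ 0.7651976866
```

For large arguments the series terms grow before they shrink and cancellation destroys accuracy, which is exactly why the library selects among three evaluation methods by order and argument.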
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative, computer-aided procedure was developed for identifying boiler transfer functions from frequency response data. The method uses the frequency response data to obtain a satisfactory transfer function for both high and low vapor exit quality data.
ERIC Educational Resources Information Center
Basile, Anthony; D'Aquila, Jill M.
2002-01-01
Accounting students received either traditional instruction (n=46) or used computer-mediated communication and WebCT course management software. There were no significant differences in attitudes about the course. However, computer users were more positive about course delivery and course management tools. (Contains 17 references.) (SK)
Computing black hole partition functions from quasinormal modes
NASA Astrophysics Data System (ADS)
Arnold, Peter; Szepietowski, Phillip; Vaman, Diana
2016-07-01
We propose a method of computing one-loop determinants in black hole spacetimes (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. We then discuss the application of such techniques to more complicated spacetimes.
Some computational techniques for estimating human operator describing functions
NASA Technical Reports Server (NTRS)
Levison, W. H.
1986-01-01
Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
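As a hedged sketch of the ensemble statistics involved, gain and phase-shift standard errors across trials might be computed as below. The variable names, the decibel/degree units, and the use of per-trial complex estimates are assumptions for illustration; the report's actual estimators may differ.

```python
import cmath
import math
import statistics

def ensemble_stats(describing_fns):
    """Given one complex describing-function estimate per trial
    (all at the same sum-of-sines forcing frequency), return mean gain (dB),
    its standard error, mean phase (deg), and its standard error."""
    gains_db = [20 * math.log10(abs(h)) for h in describing_fns]
    phases = [math.degrees(cmath.phase(h)) for h in describing_fns]
    n = len(describing_fns)

    def se(xs):
        # Standard error of the mean across the ensemble of trials.
        return statistics.stdev(xs) / math.sqrt(n)

    return (statistics.mean(gains_db), se(gains_db),
            statistics.mean(phases), se(phases))

# Four simulated trials scattered around H = 2 * exp(-1j * 0.785)
trials = [2.1 * cmath.exp(-1j * 0.80), 1.9 * cmath.exp(-1j * 0.76),
          2.0 * cmath.exp(-1j * 0.79), 2.0 * cmath.exp(-1j * 0.78)]
g, g_se, p, p_se = ensemble_stats(trials)
```

The standard errors shrink as the square root of the number of trials, which is one way such procedures improve the reliability of describing-function estimates.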
Computer Use and the Relation between Age and Cognitive Functioning
ERIC Educational Resources Information Center
Soubelet, Andrea
2012-01-01
This article investigates whether computer use for leisure could mediate or moderate the relations between age and cognitive functioning. Findings supported smaller age differences in measures of cognitive functioning for people who reported spending more hours using a computer. Because of the cross-sectional design of the study, two alternative…
Pair correlation function integrals: Computation and use
NASA Astrophysics Data System (ADS)
Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens
2011-08-01
We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010), 10.1080/08927020903536366; Fluid Phase Equilib. 302, 32 (2011), 10.1016/j.fluid.2010.10.004], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
Singular Function Integration in Computational Physics
NASA Astrophysics Data System (ADS)
Hasbun, Javier
2009-03-01
In teaching computational methods in the undergraduate physics curriculum, standard integration approaches taught include the rectangular, trapezoidal, Simpson, and Romberg rules, among others. Over time, these techniques have proven invaluable, and students are encouraged to employ the most efficient method that is expected to perform best when applied to a given problem. However, some physics research applications require techniques that can handle singularities. While decreasing the step size in traditional approaches is an alternative, it may not always work, and the repeated refinement makes this route even more inefficient. Here, I present two existing integration rules designed to handle singular integrals. I compare them to traditional rules as well as to the exact analytic results. I suggest that it is perhaps time to include such approaches in the undergraduate computational physics course.
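As a concrete illustration of the problem (my own example, not one from the abstract): the composite midpoint rule converges very slowly on the singular integral ∫₀¹ x^(-1/2) dx = 2, while a rule built around the singularity, here the substitution x = t², removes it entirely.

```python
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule with n equal panels. It avoids sampling the
    endpoint singularity at a = 0, but converges only like n**-0.5 here."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: 1.0 / math.sqrt(x)

naive = midpoint(f, 0.0, 1.0, 1000)

# Substitution x = t**2, dx = 2t dt turns the integrand into the
# constant 2, which any quadrature rule integrates essentially exactly.
transformed = midpoint(lambda t: f(t * t) * 2 * t, 0.0, 1.0, 1000)
```

Even with 1000 panels the naive result is still off in the second or third decimal, while the transformed integral is exact to rounding; this is the kind of gap that motivates dedicated singular-integration rules.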
45 CFR 302.20 - Separation of cash handling and accounting functions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 2 2011-10-01 2011-10-01 false Separation of cash handling and accounting functions. 302.20 Section 302.20 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD..., DEPARTMENT OF HEALTH AND HUMAN SERVICES STATE PLAN REQUIREMENTS § 302.20 Separation of cash handling...
45 CFR 302.20 - Separation of cash handling and accounting functions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 2 2014-10-01 2012-10-01 true Separation of cash handling and accounting functions. 302.20 Section 302.20 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD..., DEPARTMENT OF HEALTH AND HUMAN SERVICES STATE PLAN REQUIREMENTS § 302.20 Separation of cash handling...
ERIC Educational Resources Information Center
Young (Arthur) and Co., Washington, DC.
Several years ago, Montgomery County Public Schools (MCPS) began a Management Operations Review and Evaluation (MORE) of the entire school system, excluding school-based instruction. This MORE study is an evaluation of MCPS's current accounting system and certain related financial services functions within the Department of Financial Services. In…
45 CFR 302.20 - Separation of cash handling and accounting functions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 2 2010-10-01 2010-10-01 false Separation of cash handling and accounting functions. 302.20 Section 302.20 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT PROGRAM), ADMINISTRATION FOR CHILDREN AND...
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
The Computer and Its Functions; How to Communicate with the Computer.
ERIC Educational Resources Information Center
Ward, Peggy M.
A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…
Code of Federal Regulations, 2010 CFR
2010-10-01
... functions, and loss and damage claims processing (accounts XX-55-76 and XX-55-78). 1242.78 Section 1242.78... Employees performing clerical and accounting functions, and loss and damage claims processing (accounts XX-55-76 and XX-55-78). If the sum of the direct freight and the direct passenger expenses is more...
Inaccuracies of trigonometric functions in computer mathematical libraries
NASA Astrophysics Data System (ADS)
Ito, Takashi; Kojima, Sadamu
Recent progress in the development of high speed computers has enabled us to perform larger and faster numerical experiments in astronomy. However, sometimes the high speed of numerical computation is achieved at the cost of accuracy. In this paper we show an example of accuracy loss by some mathematical functions on certain computer platforms in Astronomical Data Analysis Center, National Astronomical Observatory of Japan. We focus in particular on the numerical inaccuracy in sine and cosine functions, demonstrating how accuracy deterioration emerges. We also describe the measures that we have so far taken against these numerical inaccuracies. In general, computer vendors are not eager to improve the numerical accuracy in the mathematical libraries that they are supposed to be responsible for. Therefore scientists have to be aware of the existence of numerical inaccuracies, and protect their computational results from contamination by the potential errors that many computer platforms inherently contain.
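A small Python experiment (my illustration, not the authors' test suite) shows one mechanism behind such inaccuracy: reducing a large argument modulo the double-precision value of 2π loses many digits, because the rounding error in the stored 2π is multiplied by the number of whole periods discarded.

```python
import math

x = 1.0e10  # about 1.6e9 periods of the sine function

# Naive range reduction: fold x into [0, 2*pi) using the rounded 2*pi.
# The ~2e-16 representation error of 2*pi is amplified by ~1.6e9 here,
# leaving a phase error on the order of 1e-7.
naive = math.sin(math.fmod(x, 2.0 * math.pi))

# Library sine: a careful libm performs extended-precision
# (e.g., Payne-Hanek style) argument reduction instead.
lib = math.sin(x)

print(abs(naive - lib))  # roughly 1e-7 on platforms with a careful libm
```

On platforms whose mathematical library itself reduces arguments naively, the two results coincide and both are wrong, which is precisely the kind of platform-dependent accuracy loss the paper warns about.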
Examining Functions in Mathematics and Science Using Computer Interfacing.
ERIC Educational Resources Information Center
Walton, Karen Doyle
1988-01-01
Introduces microcomputer interfacing as a method for explaining and demonstrating various aspects of the concept of function. Provides three experiments with illustrations and typical computer graphic displays: pendulum motion, pendulum study using two pendulums, and heat absorption and radiation. (YP)
Saint-Georges, Catherine; Mahdhaoui, Ammar; Chetouani, Mohamed; Cassel, Raquel S.; Laznik, Marie-Christine; Apicella, Fabio; Muratori, Pietro; Maestro, Sandra; Muratori, Filippo; Cohen, David
2011-01-01
Background To assess whether taking into account interaction synchrony would help to better differentiate autism (AD) from intellectual disability (ID) and typical development (TD) in family home movies of infants aged less than 18 months, we used computational methods. Methodology and Principal Findings First, we analyzed interactive sequences extracted from home movies of children with AD (N = 15), ID (N = 12), or TD (N = 15) through the Infant and Caregiver Behavior Scale (ICBS). Second, discrete behaviors between baby (BB) and caregiver (CG) co-occurring in less than 3 seconds were selected as single interactive patterns (or dyadic events) for analysis of the two directions of interaction (CG→BB and BB→CG) by group and semester. To do so, we used a Markov assumption, a Generalized Linear Mixed Model, and non-negative matrix factorization. Compared to TD children, BBs with AD exhibit a growing deviant development of interactive patterns, whereas those with ID rather show an initial delay of development. Parents of children with AD or ID do not differ much from parents of TD children when responding to their child. However, when initiating interaction, parents use more touching and regulation up behaviors as early as the first semester. Conclusion When studying interactive patterns, deviant autistic behaviors appear before 18 months. Parents seem to feel the lack of interactive initiative and responsiveness of their babies and increasingly try to supply soliciting behaviors. Thus we stress that credence should be given to parents' intuition as they recognize, long before diagnosis, the pathological process through the interactive pattern with their child. PMID:21818320
ERIC Educational Resources Information Center
Fox, Janna; Cheng, Liying
2015-01-01
In keeping with the trend to elicit multiple stakeholder responses to operational tests as part of test validation, this exploratory mixed methods study examines test-taker accounts of an Internet-based (i.e., computer-administered) test in the high-stakes context of proficiency testing for university admission. In 2013, as language testing…
Factors accounting for psychosocial functioning in patients with low back pain
Steuden, Stanisława; Kuryłowicz, Joanna
2009-01-01
Low back pain (LBP) is a chronic disorder which exerts a profound impact on various spheres of psychosocial functioning, including emotional distress, functional limitations and decrements in social contacts. The objective of this study was to investigate the associations between the indices of psychosocial functioning in patients with chronic LBP and a range of psychological factors. Specifically, the study aimed at exploring the relative contribution of personality, social support, disease-related cognitive appraisals and coping styles in accounting for the differences in psychosocial functioning of patients with LBP. One hundred and twenty patients with LBP took part in the study and completed a battery of psychological questionnaires: NEO Five-Factor Inventory, Ways of Coping Questionnaire, Disease-Related Social Support Scale, Disease-Related Appraisals Scale and Psychosocial Functioning Questionnaire (PFQ). The PFQ dimensions were used as dependent variables in a series of stepwise regression analysis models with the scores from other questionnaires entered as independent variables. A cognitive appraisal of the disease in terms of an obstacle was strongly related to all domains of functioning; however, other appraisals (threat, challenge, harm, profit and overall disease importance) were uniquely associated with particular domains of functioning. Deprivation of social support was a significant predictor of distress experienced in interpersonal context and of sense of being disabled. Among basic personality traits, agreeableness was negatively associated with distress in interpersonal context, and conscientiousness was positively related to acceptance of life with the disease. Problem-focused coping was linked to higher acceptance of life with the disease. Among sociodemographic variables, older age and lower educational level were related to greater subjective feelings of being disabled. Pain severity was found unrelated to any of psychosocial functioning
Casadio, Maura; Tamagnone, Irene; Summa, Susanna; Sanguineti, Vittorio
2013-01-01
Computational models of neuromotor recovery after a stroke might help to unveil the underlying physiological mechanisms and might suggest how to make recovery faster and more effective. At least in principle, these models could serve: (i) To provide testable hypotheses on the nature of recovery; (ii) To predict the recovery of individual patients; (iii) To design patient-specific “optimal” therapy, by setting the treatment variables for maximizing the amount of recovery or for achieving a better generalization of the learned abilities across different tasks. Here we review the state of the art of computational models for neuromotor recovery through exercise, and their implications for treatment. We show that to properly account for the computational mechanisms of neuromotor recovery, multiple levels of description need to be taken into account. The review specifically covers models of recovery at central, functional and muscle synergy level. PMID:23986688
Computer-Intensive Algebra and Students' Conceptual Knowledge of Functions.
ERIC Educational Resources Information Center
O'Callaghan, Brian R.
1998-01-01
Describes a research project that examined the effects of the Computer-Intensive Algebra (CIA) and traditional algebra curricula on students' (N=802) understanding of the function concept. Results indicate that CIA students achieved a better understanding of functions and were better at the components of modeling, interpreting, and translating.…
Assessment of cognitive function in alcoholics by computer: a control study.
Acker, C; Acker, W; Shaw, G K
1984-01-01
Results are presented of the performance by 103 alcoholics and 90 controls on six computer-administered tests of cognitive function. The main analysis compared performance of the two groups when pre-existing differences in intellectual capacity, as estimated by NART, were accounted for statistically. The performance of the alcoholics was worse, at a statistically significant level, on 18 of 23 measures. Procedurally, the tests were found to offer practical advantages over conventional procedures. PMID:6508877
Convergence rate for numerical computation of the lattice Green's function.
Ghazisaeidi, M; Trinkle, D R
2009-03-01
Flexible boundary-condition methods couple an isolated defect to bulk through the bulk lattice Green's function. Direct computation of the lattice Green's function requires projecting out the singular subspace of uniform displacements and forces for the infinite lattice. We calculate the convergence rates for elastically isotropic and anisotropic cases for three different techniques: relative displacement, elastic Green's function correction, and discontinuity correction. The discontinuity correction has the most rapid convergence for the general case. PMID:19392089
PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS
Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.
2013-01-01
Background Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. Design The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full-scale multicenter clinical trial (N=1635). We describe relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits and describe performance measures (completeness, intra-class correlations). Results Computer-based assessments of cognitive function had consistent relationships across the pilot and full-scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures; however, rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390
Wigner Function Negativity and Contextuality in Quantum Computation on Rebits
NASA Astrophysics Data System (ADS)
Delfosse, Nicolas; Allard Guerin, Philippe; Bian, Jacob; Raussendorf, Robert
2015-04-01
We describe a universal scheme of quantum computation by state injection on rebits (states with real density matrices). For this scheme, we establish contextuality and Wigner function negativity as computational resources, extending results of M. Howard et al. [Nature (London) 510, 351 (2014), 10.1038/nature13460] to two-level systems. For this purpose, we define a Wigner function suited to systems of n rebits and prove a corresponding discrete Hudson's theorem. We introduce contextuality witnesses for rebit states and discuss the compatibility of our result with state-independent contextuality.
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1971-01-01
An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function to the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
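A minimal sketch of the idea in Python, under stated assumptions: the first-order model G(jω) = K/(1 + jωτ), the unweighted least-squares misfit, and the grid search below are simple stand-ins for the paper's penalized performance measure and nonlinear minimization technique.

```python
def first_order(K, tau, w):
    """Candidate boiler transfer function G(jw) = K / (1 + jw*tau)."""
    return K / (1 + 1j * w * tau)

def misfit(K, tau, data):
    """Sum of squared complex errors against the measured frequency response,
    i.e., how far the candidate locus is from the measured locus."""
    return sum(abs(first_order(K, tau, w) - g) ** 2 for w, g in data)

# Synthetic "measurements" generated from a known system K=2.0, tau=0.5.
freqs = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0]
data = [(w, first_order(2.0, 0.5, w)) for w in freqs]

# Crude stand-in for nonlinear minimization: search a parameter grid,
# keeping the transfer function whose locus best matches the data.
best = min(((misfit(K / 10, tau / 10, data), K / 10, tau / 10)
            for K in range(1, 41) for tau in range(1, 41)),
           key=lambda t: t[0])
_, K_hat, tau_hat = best
print(K_hat, tau_hat)  # prints: 2.0 0.5
```

As in the abstract, different model structures can be tried against the same measured locus until a satisfactory empirical transfer function is found; only the candidate `first_order` form would change.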
A large-scale evaluation of computational protein function prediction
Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-01-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
The flight telerobotic servicer: From functional architecture to computer architecture
NASA Technical Reports Server (NTRS)
Lumia, Ronald; Fiala, John
1989-01-01
After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.
Computational approaches for rational design of proteins with novel functionalities
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643
Plaut, D C; Booth, J R
2000-10-01
Existing accounts of single-word semantic priming phenomena incorporate multiple mechanisms, such as spreading activation, expectancy-based processes, and postlexical semantic matching. The authors provide empirical and computational support for a single-mechanism distributed network account. Previous studies have found greater semantic priming for low- than for high-frequency target words as well as inhibition following unrelated primes only at long stimulus-onset asynchronies (SOAs). A series of experiments examined the modulation of these effects by individual differences in age or perceptual ability. Third-grade, sixth-grade, and college students performed a lexical-decision task on high- and low-frequency target words preceded by related, unrelated, and nonword primes. Greater priming for low-frequency targets was exhibited only by participants with high perceptual ability. Moreover, unlike the college students, the children showed no inhibition even at the long SOA. The authors provide an account of these results in terms of the properties of distributed network models and support this account with an explicit computational simulation. PMID:11089407
Outcomes Assessment of Computer-Assisted Behavioral Objectives for Accounting Graduates.
ERIC Educational Resources Information Center
Moore, John W.; Mitchem, Cheryl E.
1997-01-01
Presents behavioral objectives for accounting students and an outcomes assessment plan with five steps: (1) identification and definition of student competencies; (2) selection of valid instruments; (3) integration of assessment and instruction; (4) determination of levels of assessment; and (5) attribution of improvements to the program. (SK)
Computational design of proteins with novel structure and functions
NASA Astrophysics Data System (ADS)
Wei, Yang; Lu-Hua, Lai
2016-01-01
Computational design of proteins is a relatively new field in which scientists search the enormous sequence space for sequences that can fold into a desired structure and perform desired functions. With the computational approach, proteins can be designed, for example, as regulators of biological processes, novel enzymes, or biotherapeutics. These approaches not only provide valuable information for understanding sequence-structure-function relations in proteins, but also hold promise for applications in protein engineering and biomedical research. In this review, we briefly introduce the rationale for computational protein design, then summarize recent progress in this field, including de novo protein design, enzyme design, and design of protein-protein interactions. Challenges and future prospects of this field are also discussed. Project supported by the National Basic Research Program of China (Grant No. 2015CB910300), the National High Technology Research and Development Program of China (Grant No. 2012AA020308), and the National Natural Science Foundation of China (Grant No. 11021463).
Tempel, David G.; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms. PMID:22553483
SNAP: A computer program for generating symbolic network functions
NASA Technical Reports Server (NTRS)
Lin, P. M.; Alderson, G. E.
1970-01-01
The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.
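SNAP itself is a FORTRAN program, but the idea of a symbolic network function is easy to illustrate. The sketch below (assuming the sympy library is available; the circuit and variable names are ours, not SNAP's) derives the transfer function of a one-pole RC network by symbolic nodal analysis:

```python
import sympy as sp

# Symbolic nodal analysis of a single-node RC low-pass filter:
# Vin --R-- Vout --C-- ground. KCL at Vout: (Vout - Vin)/R + s*C*Vout = 0.
R, C, s, Vin = sp.symbols('R C s V_in', positive=True)
Vout = sp.symbols('V_out')

kcl = sp.Eq((Vout - Vin) / R + s * C * Vout, 0)
solution = sp.solve(kcl, Vout)[0]
H = sp.simplify(solution / Vin)   # symbolic network (transfer) function
print(H)  # 1/(C*R*s + 1)
```

Because the result stays symbolic in R, C, and s, it plays the same role as a SNAP output: one expression valid for all element values.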
Robust Computation of Morse-Smale Complexes of Bilinear Functions
Norgard, G; Bremer, P T
2010-11-30
The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.
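The analytic core of a bilinear cell is small enough to sketch. For f(x, y) = a + bx + cy + dxy the only possible interior critical point is a saddle, and its location follows directly from the coefficients (a hypothetical helper for illustration, not the paper's algorithm):

```python
def bilinear_critical_point(a, b, c, d):
    """Critical point of f(x, y) = a + b*x + c*y + d*x*y in a bilinear cell.

    The gradient (b + d*y, c + d*x) vanishes only if d != 0, at
    (x*, y*) = (-c/d, -b/d); the Hessian determinant is -d**2 < 0,
    so any interior critical point of a bilinear function is a saddle.
    Returns (x*, y*), or None when the function is linear (d == 0).
    """
    if d == 0:
        return None
    return (-c / d, -b / d)

# The saddle's separatrices (asymptotes of the hyperbolic level sets) are
# the axis-aligned lines x = x* and y = y*, which is what lets a
# combinatorial algorithm intersect them with cell edges exactly.
print(bilinear_critical_point(0.0, 1.0, -2.0, 4.0))  # (0.5, -0.25)
```

The closed-form location is what makes a provably correct, numerically robust classification possible: no iterative root finding is involved.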
Computer program for calculating and fitting thermodynamic functions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1992-01-01
A computer program is described which (1) calculates thermodynamic functions (heat capacity, enthalpy, entropy, and free energy) for several optional forms of the partition function, (2) fits these functions to empirical equations by means of a least-squares fit, and (3) calculates, as a function of temperature, heats of formation and equilibrium constants. The program provides several methods for calculating ideal gas properties. For monatomic gases, three methods are given which differ in the technique used for truncating the partition function. For diatomic and polyatomic molecules, five methods are given which differ in the corrections to the rigid-rotator harmonic-oscillator approximation. A method for estimating thermodynamic functions for some species is also given.
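The fitting step can be illustrated in a few lines. This sketch (assuming NumPy; the polynomial form is a generic empirical equation, not necessarily the program's exact one) fits heat-capacity data by linear least squares:

```python
import numpy as np

def fit_cp_polynomial(T, cp_over_R, order=4):
    """Least-squares fit of heat-capacity data to the polynomial form
    Cp/R = a1 + a2*T + a3*T**2 + ... (an illustrative empirical equation).
    Returns the coefficient vector a, lowest power first."""
    A = np.vander(T, order + 1, increasing=True)  # columns: 1, T, T^2, ...
    a, *_ = np.linalg.lstsq(A, cp_over_R, rcond=None)
    return a

# Synthetic check: data generated from a known quadratic is recovered.
T = np.linspace(300.0, 1000.0, 50)
true_cp = 3.5 + 1e-3 * T + 2e-7 * T**2
a = fit_cp_polynomial(T, true_cp, order=2)
print(np.allclose(a, [3.5, 1e-3, 2e-7]))  # True
```

Once the coefficients are known, enthalpy and entropy follow from term-by-term integration of the same polynomial, which is why this family of fits is convenient for thermodynamic tables.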
A computational account of the production effect: Still playing twenty questions with nature.
Jamieson, Randall K; Mewhort, D J K; Hockley, William E
2016-06-01
People remember words that they read aloud better than words that they read silently, a result known as the production effect. The standing explanation for the production effect is that producing a word renders it distinctive in memory and, thus, memorable at test. By one key account, distinctiveness is defined in terms of sensory feedback. We formalize the sensory-feedback account using MINERVA 2, a standard model of memory. The model accommodates the basic result in recognition as well as the facts that the mixed-list production effect is larger than its pure-list counterpart, that the production effect is robust to forgetting, and that the production and generation effects have additive influences on performance. A final simulation addresses the strength-based account and suggests that it will be more difficult to distinguish a strength-based versus a distinctiveness-based explanation than is typically thought. We conclude that the production effect is consistent with existing theory and discuss our analysis in relation to Alan Newell's (1973) classic criticism of psychology and his call for an analysis of psychological principles instead of laboratory phenomena. (PsycINFO Database Record) PMID:27244357
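The retrieval rule of MINERVA 2 is simple enough to sketch. Below is a toy illustration (our parameterization, not the authors' simulation) in which read-aloud items are stored with extra sensory-feedback features and therefore return a higher echo intensity when probed:

```python
import numpy as np

rng = np.random.default_rng(1)

def echo_intensity(probe, memory):
    """MINERVA 2 retrieval: each trace's similarity to the probe is cubed
    (activation) and the activations are summed into one echo intensity."""
    sims = []
    for trace in memory:
        relevant = (probe != 0) | (trace != 0)
        n = max(relevant.sum(), 1)
        sims.append(np.dot(probe, trace) / n)
    return float(np.sum(np.array(sims) ** 3))

# Sensory-feedback sketch: items have 20 lexical features; aloud items are
# stored with 10 extra feedback features, silent items with those zeroed.
item = rng.choice([-1.0, 1.0], size=30)
aloud = item.copy()                       # full trace, feedback included
silent = item.copy()
silent[20:] = 0.0                         # no sensory feedback stored
memory = np.stack([aloud, silent])

print(echo_intensity(item, memory[:1]) > echo_intensity(item, memory[1:]))  # True
```

The cubing step is what makes the extra matching features pay off disproportionately, which is the distinctiveness advantage in this account.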
Computing the hadronic vacuum polarization function by analytic continuation
Feng, Xu; Hashimoto, Shoji; Hotzel, Grit; Jansen, Karl; Petschlies, Marcus; Renner, Dru B.
2013-08-29
We propose a method to compute the hadronic vacuum polarization function on the lattice at continuous values of photon momenta, bridging between the spacelike and timelike regions. We provide two independent demonstrations to show that this method leads to the desired hadronic vacuum polarization function in Minkowski spacetime. Using the example of the leading-order QCD correction to the muon anomalous magnetic moment, we show that this approach can provide a valuable alternative method for calculations of physical quantities where the hadronic vacuum polarization function enters.
A Survey of Computational Intelligence Techniques in Protein Function Prediction
Tiwari, Arvind Kumar; Srivastava, Rajeev
2014-01-01
In recent years, the number of proteins of unknown function has grown massively with the advancement of high-throughput microarray technologies, and protein function prediction has become one of the most challenging problems in bioinformatics. Homology-based approaches have traditionally been used to predict protein function, but they fail when a new protein bears little similarity to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, across a wide range of applications such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers who have addressed these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and integration of multiple heterogeneous data are useful for protein function prediction. PMID:25574395
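The survey's conclusion about ensembles can be illustrated with the simplest possible combiner, a majority vote over binary predictions (a generic sketch, not any specific method reviewed in the paper):

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary predictions from several classifiers by majority vote,
    the simplest form of the ensemble approach the survey finds effective.
    predictions: array of shape (n_classifiers, n_samples) with 0/1 labels."""
    return (predictions.mean(axis=0) > 0.5).astype(int)

# Three weak 'classifiers' that disagree on different samples: the vote
# recovers the label that most of them agree on, sample by sample.
preds = np.array([[1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 0, 1, 1]])
print(majority_vote(preds))  # [1 0 1 1]
```

Real protein-function ensembles weight members by reliability and combine heterogeneous feature sets, but the error-cancelling principle is the same.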
Computer-Supported Instructional Communication: A Multidisciplinary Account of Relevant Factors
ERIC Educational Resources Information Center
Rummel, Nikol; Kramer, Nicole
2010-01-01
The papers in the present special issue summarize research that aims at compiling and understanding variables associated with successful communication in computer-supported instructional settings. The papers also address the question of how instructional communication can be made adaptive. A particular strength of the special issue…
Computation of three-dimensional flows using two stream functions
NASA Technical Reports Server (NTRS)
Greywall, Mahesh S.
1991-01-01
An approach to compute 3-D flows using two stream functions is presented. The method generates a boundary-fitted grid as part of its solution. The two steps commonly used for computing flow fields are combined into a single step in the present approach: (1) boundary-fitted grid generation; and (2) solution of the Navier-Stokes equations on the generated grid. The presented method can be used to directly compute 3-D viscous flows, or its potential flow approximation can be used to generate grids for other algorithms that compute 3-D viscous flows. The independent variables used are chi, a spatial coordinate, and xi and eta, values of stream functions along two sets of suitably chosen intersecting stream surfaces. The dependent variables used are the streamwise velocity and two functions that describe the stream surfaces. Since for a 3-D flow there is no unique way to define two sets of intersecting stream surfaces to cover the given flow, different types of two sets of intersecting stream surfaces are considered. First, the metric of the (chi, xi, eta) curvilinear coordinate system associated with each type is presented. Next, equations for the steady-state transport of mass, momentum, and energy are presented in terms of the metric of the (chi, xi, eta) coordinate system. Also included are the inviscid and the parabolized approximations to the general transport equations.
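The two-stream-function idea can be sanity-checked numerically. The sketch below uses the standard construction u = grad(xi) x grad(eta), which is divergence-free by identity (this is the generic form of the representation, not Greywall's specific (chi, xi, eta) formulation):

```python
import numpy as np

def velocity_from_stream_functions(xi, eta, spacing):
    """Velocity field u = grad(xi) x grad(eta) from two stream functions,
    a standard construction that satisfies div(u) = 0 identically.
    xi, eta: 3-D arrays sampled on a uniform grid with the given spacing."""
    gxi = np.stack(np.gradient(xi, spacing), axis=-1)
    geta = np.stack(np.gradient(eta, spacing), axis=-1)
    return np.cross(gxi, geta)

# Sanity check: xi = y and eta = z give uniform unit flow along x.
h = 0.1
_, y, z = np.meshgrid(*[np.arange(0.0, 1.0, h)] * 3, indexing='ij')
u = velocity_from_stream_functions(y, z, h)
print(np.allclose(u[..., 0], 1.0), np.allclose(u[..., 1:], 0.0))  # True True
```

Level sets of xi and eta are the two families of stream surfaces; their intersections trace the streamlines, which is what makes this parameterization attractive for grid generation.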
Integrated command, control, communications and computation system functional architecture
NASA Technical Reports Server (NTRS)
Cooley, C. G.; Gilbert, L. E.
1981-01-01
The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy, which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features, which summarize the major system capabilities; (3) the operational activity threads, which illustrate the interrelationship between the system elements; and (4) the interfaces, which illustrate those elements that originate or generate data and those elements that use the data. The interfaces also provide a description of the data and the data utilization and access techniques.
Computer-based accountability system (Phase I) for special nuclear materials at Argonne-West
Ingermanson, R.S.; Proctor, A.E.
1982-05-01
An automated accountability system for special nuclear materials (SNM) is under development at Argonne National Laboratory-West. Phase I of the development effort has established the following basic features of the system: a unique file organization allows rapid updating or retrieval of the status of various SNM, based on batch numbers, storage location, serial number, or other attributes. Access to the program is controlled by an interactive user interface that can be easily understood by operators who have had no prior background in electronic data processing. Extensive use of structured programming techniques makes the software package easy to understand and to modify for specific applications. All routines are written in FORTRAN.
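The multi-attribute retrieval described above amounts to keeping secondary indexes alongside the main record store. A minimal sketch in Python (the original system is FORTRAN; all names and fields here are hypothetical):

```python
from collections import defaultdict

class SNMLedger:
    """Toy sketch of multi-attribute lookup: one record store keyed by
    serial number, plus secondary indexes so status can also be retrieved
    by batch number or storage location."""

    def __init__(self):
        self.records = {}                      # serial -> record dict
        self.by_batch = defaultdict(set)       # batch  -> set of serials
        self.by_location = defaultdict(set)    # loc    -> set of serials

    def add(self, serial, batch, location, grams):
        self.records[serial] = dict(serial=serial, batch=batch,
                                    location=location, grams=grams)
        self.by_batch[batch].add(serial)
        self.by_location[location].add(serial)

    def move(self, serial, new_location):
        # Update the record and keep the location index consistent.
        rec = self.records[serial]
        self.by_location[rec['location']].discard(serial)
        rec['location'] = new_location
        self.by_location[new_location].add(serial)

ledger = SNMLedger()
ledger.add('S-001', 'B-7', 'vault-A', 12.5)
ledger.add('S-002', 'B-7', 'vault-B', 3.0)
ledger.move('S-002', 'vault-A')
print(sorted(ledger.by_location['vault-A']))  # ['S-001', 'S-002']
```

Keeping every index updated inside the mutating operations is what makes lookups by any attribute cheap, at the cost of a little bookkeeping on each update.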
Kavanagh, Liam C.; Winkielman, Piotr
2016-01-01
There is a broad theoretical and empirical interest in spontaneous mimicry, or the automatic reproduction of a model’s behavior. Evidence shows that people mimic models they like, and that mimicry enhances liking for the mimic. Yet, there is no satisfactory account of this phenomenon, especially in terms of its functional significance. While affiliation is often cited as the driver of mimicry, we argue that mimicry is primarily driven by a learning process that helps to produce the appropriate bodily and emotional responses to relevant social situations. Because the learning process and the resulting knowledge are implicit, they cannot easily be rejected, criticized, revised, and employed by the learner in a deliberative or deceptive manner. We argue that these characteristics will lead individuals to preferentially mimic ingroup members, whose implicit information is worth incorporating. Conversely, mimicry of the wrong person is costly because individuals will internalize “bad habits,” including emotional reactions and mannerisms indicating wrong group membership. This pattern of mimicry, in turn, means that observed mimicry is an honest signal of group affiliation. We propose that the preferences of models for the mimic stem from this true signal value. Further, just like facial expressions, mimicry communicates a genuine disposition when it is truly spontaneous. Consequently, perceivers are attuned to relevant cues such as appropriate timing, fidelity, and selectivity. Our account, while assuming no previously unknown biological endowments, also explains greater mimicry of powerful people, and why affiliation can be signaled by mimicry of seemingly inconsequential behaviors. PMID:27064398
Brown, Timothy M.; Allen, Annette E.; al-Enezi, Jazi; Wynne, Jonathan; Schlangen, Luc; Hommes, Vanja; Lucas, Robert J.
2013-01-01
In addition to rods and cones, photoreception in mammals extends to a third retinal cell type expressing the photopigment melanopsin. The influences of this novel opsin are widespread, ranging from pupillary and circadian responses to brightness perception, yet established approaches to quantifying the biological effects of light do not adequately account for melanopsin sensitivity. We have recently proposed a novel metric, the melanopic sensitivity function (VZλ), to address this deficiency. Here, we further validate this new measure with a variety of tests based on potential barriers to its applicability identified in the literature or relating to obvious practical benefits. Using electrophysiological approaches and pupillometry, initially in rodless+coneless mice, our data demonstrate that under a very wide range of different conditions (including switching between stimuli with highly divergent spectral content) the VZλ function provides an accurate prediction of the sensitivity of melanopsin-dependent responses. We further show that VZλ provides the best available description of the spectral sensitivity of at least one aspect of the visual response in mice with functional rods and cones: tonic firing activity in the lateral geniculate nuclei. Together, these data establish VZλ as an important new approach for light measurement with widespread practical utility. PMID:23301090
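The core operation behind a metric like VZλ is weighting a light spectrum by a sensitivity function and integrating. A toy sketch (the Gaussian curve below is only a stand-in for the real melanopic function, which is tabulated, not Gaussian, though melanopsin sensitivity does peak near 480 nm):

```python
import numpy as np

def weighted_irradiance(spectral_irradiance, sensitivity):
    """Weight a spectrum (sampled at 1 nm steps) by a photoreceptor
    sensitivity function and integrate: the generic operation behind
    melanopic and photopic light measures."""
    return float(np.sum(spectral_irradiance * sensitivity))  # 1 nm bins

wl = np.arange(380.0, 781.0)                           # wavelength, nm
vz = np.exp(-0.5 * ((wl - 480.0) / 40.0) ** 2)         # toy melanopic curve
blue = np.exp(-0.5 * ((wl - 470.0) / 15.0) ** 2)       # blue-rich source
red = np.exp(-0.5 * ((wl - 630.0) / 15.0) ** 2)        # red-rich source

# A blue-rich source drives the melanopic measure far harder than a
# red-rich source of comparable radiometric power.
print(weighted_irradiance(blue, vz) > weighted_irradiance(red, vz))  # True
```

Swapping in a different sensitivity curve (photopic V(λ), a cone fundamental) changes the metric without changing the computation.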
Optimization of removal function in computer controlled optical surfacing
NASA Astrophysics Data System (ADS)
Chen, Xi; Guo, Peiji; Ren, Jianfeng
2010-10-01
The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of the removal function, is proposed to solve two problems encountered in the planet-motion and translation-rotation modes: the removal function being far from Gaussian type and slow convergence of the removal-function error. A detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each centre to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in six cycles to the time duration of the six cycles, which depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and distribution of time-sharing synthesis removal functions are obtained. The evaluation function used in the optimization is determined by an approaching factor, defined as the ratio of the material removal within the area of half of the polishing tool coverage from the polishing center to the total material removal within the full polishing tool coverage area. After optimization, it is found that the removal function obtained by time-sharing synthesis is closer to the ideal Gaussian type removal function than those obtained by traditional methods. The time-sharing synthesis method of the removal function provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high
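The time-sharing synthesis reduces to a time-weighted average of shifted removal profiles. A 1-D sketch of that averaging (our simplification of the 2-D problem; the tool profile, centre positions, and dwell times are illustrative only):

```python
import numpy as np

def synthesized_removal(r, centers, dwell, tool_removal):
    """Time-sharing synthesis sketch: the overall removal function on a
    1-D radial grid is the total removal accumulated over all cycles
    divided by the total dwell time.
    r: grid; centers: removal-function centre offsets; dwell: time per
    cycle; tool_removal: per-unit-time removal profile vs. distance."""
    total = np.zeros_like(r)
    for c, t in zip(centers, dwell):
        total += t * tool_removal(np.abs(r - c))
    return total / np.sum(dwell)

# A flat single-tool profile, deliberately far from Gaussian:
tool = lambda d: np.where(d < 1.0, 1.0, 0.0)
r = np.linspace(-3.0, 3.0, 601)
centers = np.linspace(-1.0, 1.0, 6)          # six centre positions
overall = synthesized_removal(r, centers, np.ones(6), tool)

# The synthesized profile peaks in the middle and tapers toward the
# edges, i.e. it is closer to the ideal Gaussian shape than the flat tool.
print(overall[300] > overall[100])  # True
```

Optimizing the centre spacing and dwell times is then a small parameter search over this averaging, which is essentially what the paper's simulations do in 2-D.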
Why do interracial interactions impair executive function? A resource depletion account.
Richeson, Jennifer A; Trawalter, Sophie
2005-06-01
Three studies investigated the veracity of a resource depletion account of the impairment of inhibitory task performance after interracial contact. White individuals engaged in either an interracial or same-race interaction, then completed an ostensibly unrelated Stroop color-naming test. In each study, the self-regulatory demands of the interaction were either increased (Study 1) or decreased (Studies 2 and 3). Results revealed that increasing the self-regulatory demands of an interracial interaction led to greater Stroop interference compared with control, whereas reducing self-regulatory demands led to less Stroop interference. Manipulating self-regulatory demands did not affect Stroop performance after same-race interactions. Taken together, the present studies point to resource depletion as the likely mechanism underlying the impairment of cognitive functioning after interracial dyadic interactions. PMID:15982114
17 CFR 1.32 - Segregated account; daily computation and record.
Code of Federal Regulations, 2010 CFR
2010-04-01
… ("securities haircuts") as set forth in Rule 15c3-1(c)(2)(vi) of the Securities and Exchange Commission (17 CFR …) … (17 CFR 240.15c3-1(c)(11)(i)). (c) The daily computations required by this section must be completed … business day, on a currency-by-currency basis: (1) The total amount of customer funds on deposit …
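Stripped of the regulatory detail, the daily computation is a per-currency comparison of the funds actually on deposit in segregated accounts against the amount owed to customers. A hypothetical sketch (names and figures are ours, not the rule's):

```python
def daily_segregation(customer_funds_required, funds_in_seg_accounts):
    """Hypothetical sketch of the daily computation the rule requires:
    excess (positive) or deficiency (negative) of segregated funds over
    the customer funds requirement, computed currency by currency."""
    return {ccy: funds_in_seg_accounts.get(ccy, 0.0) - required
            for ccy, required in customer_funds_required.items()}

required = {'USD': 1_000_000.0, 'EUR': 250_000.0}
on_deposit = {'USD': 1_050_000.0, 'EUR': 240_000.0}
print(daily_segregation(required, on_deposit))
# {'USD': 50000.0, 'EUR': -10000.0}
```

A negative entry (as in EUR here) is precisely the under-segregation condition the daily record-keeping is designed to surface.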
Schweikardt, Thorsten; Olivares, Concepción; Solano, Francisco; Jaenicke, Elmar; García-Borrón, José Carlos; Decker, Heinz
2007-10-01
Tyrosinases are the first and rate-limiting enzymes in the synthesis of melanin pigments responsible for colouring hair, skin and eyes. Mutation of tyrosinases often decreases melanin production resulting in albinism, but the effects are not always understood at the molecular level. Homology modelling of mouse tyrosinase based on recently published crystal structures of non-mammalian tyrosinases provides an active site model accounting for loss-of-function mutations. According to the model, the copper-binding histidines are located in a helix bundle comprising four densely packed helices. A loop containing residues M374, S375 and V377 connects the CuA and CuB centres, with the peptide oxygens of M374 and V377 serving as hydrogen acceptors for the NH-groups of the imidazole rings of the copper-binding His367 and His180. Therefore, this loop is essential for the stability of the active site architecture. A double substitution 374MS375 → 374GG375 or a single M374G mutation leads to a local perturbation of the protein matrix at the active site affecting the orientation of the H367 side chain, which may be unable to bind CuB reliably, resulting in loss of activity. The model also accounts for loss of function in two naturally occurring albino mutations, S380P and V393F. The hydroxyl group in S380 contributes to the correct orientation of M374, and the substitution of V393 for a bulkier phenylalanine sterically impedes correct side chain packing at the active site. Therefore, our model explains the mechanistic necessity for conservation of not only active site histidines but also adjacent amino acids in tyrosinase. PMID:17850513
Time-Dependent Density Functional Theory for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Tempel, David
2015-03-01
In this talk, I will discuss how the theorems of TDDFT can be applied to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, I will discuss how TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions.
Computational predictions of energy materials using density functional theory
NASA Astrophysics Data System (ADS)
Jain, Anubhav; Shin, Yongwoo; Persson, Kristin A.
2016-01-01
In the search for new functional materials, quantum mechanics is an exciting starting point. The fundamental laws that govern the behaviour of electrons have the possibility, at the other end of the scale, to predict the performance of a material for a targeted application. In some cases, this is achievable using density functional theory (DFT). In this Review, we highlight DFT studies predicting energy-related materials that were subsequently confirmed experimentally. The attributes and limitations of DFT for the computational design of materials for lithium-ion batteries, hydrogen production and storage materials, superconductors, photovoltaics and thermoelectric materials are discussed. In the future, we expect that the accuracy of DFT-based methods will continue to improve and that growth in computing power will enable millions of materials to be virtually screened for specific applications. Thus, these examples represent a first glimpse of what may become a routine and integral step in materials discovery.
Optimized Kaiser-Bessel Window Functions for Computed Tomography.
Nilchian, Masih; Ward, John Paul; Vonesch, Cedric; Unser, Michael
2015-11-01
Kaiser-Bessel window functions are frequently used to discretize tomographic problems because they have two desirable properties: 1) their short support leads to a low computational cost and 2) their rotational symmetry makes their imaging transform independent of the direction. In this paper, we aim at optimizing the parameters of these basis functions. We present a formalism based on the theory of approximation and point out the importance of the partition-of-unity condition. While we prove that, for compact-support functions, this condition is incompatible with isotropy, we show that minimizing the deviation from the partition of unity condition is highly beneficial. The numerical results confirm that the proposed tuning of the Kaiser-Bessel window functions yields the best performance. PMID:26151939
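The partition-of-unity criterion is straightforward to evaluate numerically. This sketch (order m = 0, with purely illustrative support and taper parameters) measures how far a sum of integer-shifted Kaiser-Bessel windows deviates from a constant:

```python
import numpy as np

def kb_window(x, a=2.0, alpha=10.4):
    """1-D Kaiser-Bessel window of support [-a, a], order m = 0:
    I0(alpha*sqrt(1 - (x/a)**2)) / I0(alpha) inside the support, else 0.
    The parameter values here are illustrative, not the paper's optimum."""
    t = 1.0 - (x / a) ** 2
    vals = np.i0(alpha * np.sqrt(np.clip(t, 0.0, None))) / np.i0(alpha)
    return np.where(t > 0, vals, 0.0)

def pou_deviation(a=2.0, alpha=10.4, n=1000):
    """Relative deviation of sum_k phi(x - k) from a constant: the
    partition-of-unity criterion whose minimization the paper proposes."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    s = sum(kb_window(x - k, a, alpha) for k in range(-4, 5))
    return (s.max() - s.min()) / s.mean()

# The deviation depends strongly on alpha; tuning it is the paper's point.
print(pou_deviation(alpha=10.4))
```

Scanning this deviation over alpha (and the support a) is a cheap surrogate for the full approximation-theoretic analysis when choosing basis parameters.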
Computer Code For Calculation Of The Mutual Coherence Function
NASA Astrophysics Data System (ADS)
Bugnolo, Dimitri S.
1986-05-01
We present a computer code in FORTRAN 77 for the calculation of the mutual coherence function (MCF) of a plane wave normally incident on a stochastic half-space. This is an exact result. The user need only input the path length, the wavelength, the outer scale size, and the structure constant. This program may be used to calculate the MCF of a well-collimated laser beam in the atmosphere.
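For comparison, the widely used Kolmogorov plane-wave approximation takes the same inputs the program requires (path length, wavelength, structure constant; the outer scale enters only through more refined spectra). The sketch below implements that textbook approximation, not the paper's exact half-space result:

```python
import numpy as np

def mcf_plane_wave(rho, L, wavelength, Cn2):
    """Mutual coherence function of a plane wave after path length L through
    Kolmogorov turbulence (standard approximation): MCF = exp(-0.5 * D(rho))
    with the wave structure function D(rho) = 2.91*k**2*Cn2*L*rho**(5/3)."""
    k = 2.0 * np.pi / wavelength
    D = 2.91 * k**2 * Cn2 * L * rho ** (5.0 / 3.0)
    return np.exp(-0.5 * D)

rho = np.array([0.001, 0.01, 0.1])   # transverse separations, m
m = mcf_plane_wave(rho, L=1e3, wavelength=1.55e-6, Cn2=1e-15)
print(np.all(np.diff(m) < 0))        # coherence decays with separation
```

The separation at which the MCF falls to 1/e defines the coherence width, the usual single-number summary of these curves.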
Computations involving differential operators and their actions on functions
NASA Technical Reports Server (NTRS)
Crouch, Peter E.; Grossman, Robert; Larson, Richard
1991-01-01
The algorithms derived by Grossman and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups, and the data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.
Functional imaging of the brain using computed tomography.
Berninger, W H; Axel, L; Norman, D; Napel, S; Redington, R W
1981-03-01
Data from rapid-sequence CT scans of the same cross section, obtained following bolus injection of contrast material, were analyzed by functional imaging. The information contained in a large number of images can be compressed into one or two gray-scale images which can be evaluated both qualitatively and quantitatively. The computational techniques are described and applied to the generation of images depicting bolus transit time, arrival time, peak time, and effective width. PMID:7465851
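The compression from many images to one or two functional images is a per-pixel reduction over the time axis. A sketch with simple parameter definitions (the paper's exact definitions of arrival time and effective width may differ):

```python
import numpy as np

def functional_maps(series, times, threshold=0.1):
    """Compress a rapid-sequence CT time series (t, y, x) into functional
    images: peak time, arrival time (first sample exceeding a fraction of
    each pixel's peak enhancement), and effective width (time spent above
    that fraction). These are simple illustrative definitions."""
    peak_idx = series.argmax(axis=0)
    peak_time = times[peak_idx]
    peak_val = series.max(axis=0)
    above = series >= threshold * peak_val        # broadcast over time axis
    arrival_time = times[above.argmax(axis=0)]    # index of first True
    width = above.sum(axis=0) * (times[1] - times[0])
    return peak_time, arrival_time, width

t = np.arange(0.0, 10.0, 0.5)                     # scan times, s
bolus = np.exp(-((t - 4.0) / 1.0) ** 2)           # one pixel's enhancement
series = bolus[:, None, None] * np.ones((1, 2, 2))  # tiny 2x2 test image
peak, arrival, width = functional_maps(series, t)
print(float(peak[0, 0]))  # 4.0
```

Each returned array is itself a gray-scale image, which is exactly the compression the abstract describes: many scans in, one quantitative map out.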
Computational aspects of the continuum quaternionic wave functions for hydrogen
Morais, J.
2014-10-15
Over the past few years considerable attention has been given to the role played by the Hydrogen Continuum Wave Functions (HCWFs) in quantum theory. The HCWFs arise via the method of separation of variables for the time-independent Schrödinger equation in spherical coordinates. The HCWFs are composed of products of a radial part involving associated Laguerre polynomials multiplied by exponential factors and an angular part that is the spherical harmonics. In the present paper we introduce the continuum wave functions for hydrogen within quaternionic analysis ((R)QHCWFs), a result which is not available in the existing literature. In particular, the underlying functions are of three real variables and take on values in either the reduced or the full quaternions (identified, respectively, with R^3 and R^4). We prove that the (R)QHCWFs are orthonormal to one another. The representation of these functions in terms of the HCWFs is explicitly given, from which several recurrence formulae for fast computer implementations can be derived. A summary of fundamental properties and further computation of the hydrogen-like atom transforms of the (R)QHCWFs are also discussed. We address all the above and explore some basic facts of the arising quaternionic function theory. As an application, we provide the reader with plot simulations that demonstrate the effectiveness of our approach. (R)QHCWFs are new in the literature and have some consequences that are now under investigation.
INTEGRATING COMPUTATIONAL PROTEIN FUNCTION PREDICTION INTO DRUG DISCOVERY INITIATIVES
Grant, Marianne A.
2014-01-01
Pharmaceutical researchers must evaluate vast numbers of protein sequences and formulate innovative strategies for identifying valid targets and discovering leads against them as a way of accelerating drug discovery. The ever-increasing number and diversity of novel protein sequences identified by genomic sequencing projects and the success of worldwide structural genomics initiatives have spurred great interest and impetus in the development of methods for accurate, computationally empowered protein function prediction and active site identification. Previously, in the absence of direct experimental evidence, homology-based protein function annotation remained the gold standard for in silico analysis and prediction of protein function. However, with the continued exponential expansion of sequence databases, this approach is not always applicable, as fewer query protein sequences demonstrate significant homology to protein gene products of known function. As a result, several non-homology based methods for protein function prediction that are based on sequence features, structure, evolution, biochemical and genetic knowledge have emerged. Herein, we review current bioinformatic programs and approaches for protein function prediction/annotation and discuss their integration into drug discovery initiatives. The development of such methods to annotate protein functional sites and their application to large protein functional families is crucial to successfully utilizing the vast amounts of genomic sequence information available to drug discovery and development processes. PMID:25530654
Preprocessing functions for computed radiography images in a PACS environment
NASA Astrophysics Data System (ADS)
McNitt-Gray, Michael F.; Pietka, Ewa; Huang, H. K.
1992-05-01
In a picture archiving and communications system (PACS), images are acquired from several modalities including computed radiography (CR). This modality has unique image characteristics and presents several problems that need to be resolved before the image is available for viewing at a display workstation. A set of preprocessing functions has been applied to all CR images in a PACS environment to enhance the display of images. The first function reformats CR images that are acquired with different plate sizes to a standard size for display. Another function removes the distracting white background caused by the collimation used at the time of exposure. A third function determines the orientation of each image and rotates those images that are in nonstandard positions into a standard viewing position. Another function creates a default look-up table based on the gray levels actually used by the image (instead of allocated gray levels). Finally, there is a function that creates (for chest images only) the piece-wise linear look-up tables that can be applied to enhance different tissue densities. These functions have all been implemented in a PACS environment. Each of these functions has been very successful in improving the viewing conditions of CR images and contributes to the clinical acceptance of PACS by reducing the effort required to display CR images.
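The fourth step described above, a default look-up table built from the gray levels actually used rather than the full allocated range, can be sketched in a few lines. This is our own minimal illustration, not the authors' implementation; the function name and the 10-bit allocated depth are assumptions for the example.

```python
# Minimal sketch (not the authors' code) of a default display look-up table:
# stretch the gray-level range actually used by the image onto the display
# range, instead of mapping the full allocated range.

def default_lut(pixels, allocated_bits=10, display_max=255):
    """Linear LUT mapping the used gray-level range [lo, hi] onto 0..display_max."""
    lo, hi = min(pixels), max(pixels)
    span = max(hi - lo, 1)  # avoid division by zero for flat images
    lut = []
    for g in range(2 ** allocated_bits):
        if g <= lo:
            lut.append(0)
        elif g >= hi:
            lut.append(display_max)
        else:
            lut.append(round((g - lo) * display_max / span))
    return lut

# An image using only levels 200..800 of a 10-bit range gets the full
# display contrast, rather than the ~59% it would get from the allocated range.
lut = default_lut([200, 500, 800])
```

Clinical CR LUTs also apply perceptual corrections; this sketch covers only the range-stretching idea stated in the abstract.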
Helie, Sebastien; Chakravarthy, Srinivasa; Moustafa, Ahmed A.
2013-01-01
Many computational models of the basal ganglia (BG) have been proposed over the past twenty-five years. While computational neuroscience models have focused on closely matching the neurobiology of the BG, computational cognitive neuroscience (CCN) models have focused on how the BG can be used to implement cognitive and motor functions. This review article focuses on CCN models of the BG and how they use the neuroanatomy of the BG to account for cognitive and motor functions such as categorization, instrumental conditioning, probabilistic learning, working memory, sequence learning, automaticity, reaching, handwriting, and eye saccades. A total of 19 BG models accounting for one or more of these functions are reviewed and compared. The review concludes with a discussion of the limitations of existing CCN models of the BG and prescriptions for future modeling, including the need for computational models of the BG that can simultaneously account for cognitive and motor functions, and the need for a more complete specification of the role of the BG in behavioral functions. PMID:24367325
On the Hydrodynamic Function of Sharkskin: A Computational Investigation
NASA Astrophysics Data System (ADS)
Boomsma, Aaron; Sotiropoulos, Fotis
2014-11-01
Denticles (placoid scales) are small structures that cover the epidermis of some sharks. The hydrodynamic function of denticles is unclear. Because they resemble riblets, they have been thought to passively reduce skin friction, for which there is some experimental evidence. Others have experimentally shown that denticles increase skin friction and have hypothesized that denticles act as vortex generators to delay separation. To help clarify their function, we use high-resolution large eddy and direct numerical simulations, with an immersed boundary method, to simulate flow patterns past and calculate the drag force on Shortfin Mako denticles. Simulations are carried out for the denticles placed in a canonical turbulent boundary layer as well as in the vicinity of a separation bubble. The computed results elucidate the three-dimensional structure of the flow around denticles and provide insights into the hydrodynamic function of sharkskin.
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODFs, our framework is model-free. The estimation of the parameters, i.e. Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation. PMID:20426075
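The closed-form maps mentioned in this record can be illustrated concretely: under a square-root parameterization, a discretized density becomes a point on the unit hypersphere, where the exponential map, log map, and weighted Fréchet mean have simple forms. The sketch below is our own toy version under that assumption, not the paper's code; all function names are ours.

```python
# Toy sketch: square-root parameterization sends a sampled density to a unit
# vector, so the space becomes (part of) the unit sphere, where exp/log maps
# and geodesics are closed-form and the weighted Frechet mean can be found by
# the standard fixed-point iteration.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def exp_map(p, v):
    """Exponential map at p on the unit sphere applied to tangent vector v."""
    t = norm(v)
    if t < 1e-12:
        return list(p)
    return [math.cos(t) * pc + math.sin(t) * vc / t for pc, vc in zip(p, v)]

def log_map(p, q):
    """Log map at p of q: the tangent vector along the geodesic from p to q."""
    c = max(-1.0, min(1.0, dot(p, q)))
    theta = math.acos(c)
    u = [qc - c * pc for pc, qc in zip(p, q)]
    n = norm(u)
    if n < 1e-12:
        return [0.0] * len(p)
    return [theta * uc / n for uc in u]

def frechet_mean(points, weights, iters=50):
    """Weighted Frechet mean on the unit sphere by fixed-point iteration."""
    # start from the normalized weighted Euclidean mean
    x = [sum(w * p[i] for w, p in zip(weights, points)) for i in range(len(points[0]))]
    nx = norm(x)
    x = [xc / nx for xc in x]
    for _ in range(iters):
        logs = [log_map(x, p) for p in points]
        g = [sum(w * l[i] for w, l in zip(weights, logs)) for i in range(len(x))]
        x = exp_map(x, g)
    return x
```

For two points with equal weights, the mean is the geodesic midpoint, which for orthogonal unit vectors is their normalized sum.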
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy
Schroll, Henning; Hamker, Fred H.
2013-01-01
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes only marginally different, assumptions about pathway functions. Moreover, it has become a challenge to assess to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002
Analog computation of auto and cross-correlation functions
NASA Technical Reports Server (NTRS)
1974-01-01
For analysis of the data obtained from the cross beam systems it was deemed desirable to compute the auto- and cross-correlation functions by both digital and analog methods to provide a cross-check of the analysis methods and an indication as to which of the two methods would be most suitable for routine use in the analysis of such data. It is the purpose of this appendix to provide a concise description of the equipment and procedures used for the electronic analog analysis of the cross beam data. A block diagram showing the signal processing and computation set-up used for most of the analog data analysis is provided. The data obtained at the field test sites were recorded on magnetic tape using wide-band FM recording techniques. The data as recorded were band-pass filtered by electronic signal processing in the data acquisition systems.
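The digital counterpart of the analog computation this appendix describes reduces to a plain lagged-product estimator. The sketch below is our own minimal version for illustration; the function names and the biased (1/N) normalization are our choices, not taken from the report.

```python
# Hedged digital counterpart to the analog analysis described above: a plain
# biased estimator R_xy(tau) = (1/N) * sum_n x[n] * y[n + tau].
import math

def cross_correlation(x, y, max_lag):
    """Cross-correlation estimate for lags tau = 0..max_lag."""
    n = len(x)
    out = []
    for tau in range(max_lag + 1):
        s = sum(x[k] * y[k + tau] for k in range(n - tau))
        out.append(s / n)
    return out

def auto_correlation(x, max_lag):
    return cross_correlation(x, x, max_lag)

# Example: y is x delayed by 5 samples, so R_xy peaks at lag tau = 5,
# which is how such correlograms reveal convection delays in cross-beam data.
x = [math.sin(0.3 * k) for k in range(200)]
y = [0.0] * 5 + x[:-5]
```

The autocorrelation of any signal peaks at zero lag, while the cross-correlation peak location recovers the relative delay between the two channels.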
Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter
2015-09-01
Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers. PMID:25425094
Hapuarachchi, T; Scholkmann, F; Caldwell, M; Hagmann, C; Kleiser, S; Metz, A J; Pastewski, M; Wolf, M; Tachtsidis, I
2016-01-01
We present a computational model of metabolism in the preterm neonatal brain. The model has the capacity to mimic haemodynamic and metabolic changes during functional activation and simulate functional near-infrared spectroscopy (fNIRS) data. As an initial test of the model's efficacy, we simulate data obtained from published studies investigating functional activity in preterm neonates. In addition, we simulated recently collected data from preterm neonates during visual activation. The model is well able to predict the haemodynamic and metabolic changes from these observations. In particular, we found that changes in cerebral blood flow and blood pressure may account for the observed variability of the magnitude and sign of stimulus-evoked haemodynamic changes reported in preterm infants. PMID:26782202
Computer Modeling of the Earliest Cellular Structures and Functions
NASA Astrophysics Data System (ADS)
Pohorille, Andrew
2000-03-01
In the absence of extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures and developing designs for molecules that perform protocellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g. channels), and (c) by what mechanisms such aggregates perform essential protocellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.
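The iterative solution of Newton's equations mentioned above is typically done with a symplectic scheme such as velocity Verlet. The following is a generic one-dimensional sketch on a harmonic oscillator, our own illustration rather than the production MD code (which handles many atoms, forces from a potential, and constraints).

```python
# Generic sketch of the velocity Verlet integrator used in molecular dynamics:
# x_{n+1} = x_n + v_n*dt + 0.5*a_n*dt^2 ;  v_{n+1} = v_n + 0.5*(a_n + a_{n+1})*dt
# Demonstrated on a 1-D harmonic oscillator (unit mass, unit spring constant).

def velocity_verlet(force, x, v, dt, steps):
    """Integrate dx/dt = v, dv/dt = force(x); returns the (x, v) trajectory."""
    a = force(x)
    traj = [(x, v)]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = force(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        traj.append((x, v))
    return traj

# force = -x gives the exact solution x(t) = cos(t) for x0 = 1, v0 = 0;
# the scheme nearly conserves the energy E = 0.5*(x^2 + v^2) over long runs.
traj = velocity_verlet(lambda x: -x, 1.0, 0.0, dt=0.01, steps=1000)
```

The near-conservation of energy over long trajectories is why symplectic integrators of this family dominate multi-nanosecond MD runs.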
Computer Modeling of the Earliest Cellular Structures and Functions
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl
2000-01-01
In the absence of extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures and developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g. channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.
Complete RNA inverse folding: computational design of functional hammerhead ribozymes
Dotu, Ivan; Garcia-Martin, Juan Antonio; Slinger, Betty L.; Mechery, Vinodh; Meyer, Michelle M.; Clote, Peter
2014-01-01
Nanotechnology and synthetic biology currently constitute one of the most innovative, interdisciplinary fields of research, poised to radically transform society in the 21st century. This paper concerns the synthetic design of ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can determine all RNA sequences whose minimum free energy secondary structure is a user-specified target structure. Using RNAiFold, we design ten cis-cleaving hammerhead ribozymes, all of which are shown to be functional by a cleavage assay. We additionally use RNAiFold to design a functional cis-cleaving hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on this small set of hammerheads suggests that cleavage rate of computationally designed ribozymes may be correlated with positional entropy, ensemble defect, structural flexibility/rigidity and related measures. Artificial ribozymes have been designed in the past either manually or by SELEX (Systematic Evolution of Ligands by Exponential Enrichment); however, this appears to be the first purely computational design and experimental validation of novel functional ribozymes. RNAiFold is available at http://bioinformatics.bc.edu/clotelab/RNAiFold/. PMID:25209235
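Complete inverse folding, finding every sequence whose optimal structure is a given target, can be illustrated at toy scale. The sketch below is our own analogue, not RNAiFold (which uses constraint programming with a full nearest-neighbor energy model): it enumerates all short sequences and keeps those whose optimal structure under simple base-pair maximization (Nussinov recursion, deterministic traceback) matches a target dot-bracket string.

```python
# Toy analogue (ours, not RNAiFold) of complete RNA inverse folding using
# base-pair maximization instead of a free-energy model.
from itertools import product

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
MIN_LOOP = 3  # minimum hairpin loop length

def nussinov(seq):
    """Fill the Nussinov maximum-base-pair DP table."""
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(MIN_LOOP + 1, n):
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, N[i + 1][j - 1] + 1)
            for k in range(i + 1, j):
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N

def traceback(seq, N, i, j, pairs):
    if j - i <= MIN_LOOP:
        return
    if N[i][j] == N[i + 1][j]:
        traceback(seq, N, i + 1, j, pairs)
    elif N[i][j] == N[i][j - 1]:
        traceback(seq, N, i, j - 1, pairs)
    elif (seq[i], seq[j]) in PAIRS and N[i][j] == N[i + 1][j - 1] + 1:
        pairs.append((i, j))
        traceback(seq, N, i + 1, j - 1, pairs)
    else:
        for k in range(i + 1, j):
            if N[i][j] == N[i][k] + N[k + 1][j]:
                traceback(seq, N, i, k, pairs)
                traceback(seq, N, k + 1, j, pairs)
                return

def fold(seq):
    """Return the dot-bracket structure from a deterministic traceback."""
    N = nussinov(seq)
    pairs = []
    traceback(seq, N, 0, len(seq) - 1, pairs)
    db = ["."] * len(seq)
    for i, j in pairs:
        db[i], db[j] = "(", ")"
    return "".join(db)

def inverse_fold(target):
    """All sequences of the target's length whose fold matches the target."""
    return ["".join(s) for s in product("ACGU", repeat=len(target))
            if fold("".join(s)) == target]
```

For the 7-nucleotide hairpin target "((...))" this brute force is instantaneous; RNAiFold's contribution is making the complete search tractable for realistic lengths and energy models.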
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
Vinckier, F; Gaillard, R; Palminteri, S; Rigoux, L; Salvador, A; Fornito, A; Adapa, R; Krebs, M O; Pessiglione, M; Fletcher, P C
2016-07-01
A state of pathological uncertainty about environmental regularities might represent a key step in the pathway to psychotic illness. Early psychosis can be investigated in healthy volunteers under ketamine, an NMDA receptor antagonist. Here, we explored the effects of ketamine on contingency learning using a placebo-controlled, double-blind, crossover design. During functional magnetic resonance imaging, participants performed an instrumental learning task, in which cue-outcome contingencies were probabilistic and reversed between blocks. Bayesian model comparison indicated that in such an unstable environment, reinforcement learning parameters are downregulated depending on confidence level, an adaptive mechanism that was specifically disrupted by ketamine administration. Drug effects were underpinned by altered neural activity in a fronto-parietal network, which reflected the confidence-based shift to exploitation of learned contingencies. Our findings suggest that an early characteristic of psychosis lies in a persistent doubt that undermines the stabilization of behavioral policy resulting in a failure to exploit regularities in the environment. PMID:26055423
Non-functioning adrenal adenomas discovered incidentally on computed tomography
Mitnick, J.S.; Bosniak, M.A.; Megibow, A.J.; Naidich, D.P.
1983-08-01
Eighteen patients with unilateral non-metastatic non-functioning adrenal masses were studied with computed tomography (CT). Pathological examination in cases revealed benign adrenal adenomas. The others were followed up with serial CT scans and found to show no change in tumor size over a period of six months to three years. On the basis of these findings, the authors suggest certain criteria for a benign adrenal mass, including (a) diameter less than 5 cm, (b) smooth contour, (c) well-defined margin, and (d) no change in size on follow-up. Serial CT scanning can be used as an alternative to surgery in the management of many of these patients.
Pechey, Rachel; Couturier, Dominique-Laurent; Deary, Ian J.; Marteau, Theresa M.
2016-01-01
Objective Executive function, impulsivity, and intelligence are correlated markers of cognitive resource that predict health-related behaviours. It is unknown whether executive function and impulsivity are unique predictors of these behaviours after accounting for intelligence. Methods Data from 6069 participants from the Avon Longitudinal Study of Parents and Children were analysed to investigate whether components of executive function (selective attention, attentional control, working memory, and response inhibition) and impulsivity (parent-rated) measured between ages 8 and 10, predicted having ever drunk alcohol, having ever smoked, fruit and vegetable consumption, physical activity, and overweight at age 13, after accounting for intelligence at age 8 and childhood socioeconomic characteristics. Results Higher intelligence predicted having drunk alcohol, not smoking, greater fruit and vegetable consumption, and not being overweight. After accounting for intelligence, impulsivity predicted alcohol use (odds ratio = 1.10; 99% confidence interval = 1.02, 1.19) and smoking (1.22; 1.11, 1.34). Working memory predicted not being overweight (0.90; 0.81, 0.99). Conclusions After accounting for intelligence, executive function predicts overweight status but not health-related behaviours in early adolescence, whilst impulsivity predicts the onset of alcohol and cigarette use, all with small effects. This suggests overlap between executive function and intelligence as predictors of health behaviour in this cohort, with trait impulsivity accounting for additional variance. PMID:27479488
Computing the effective action with the functional renormalization group
NASA Astrophysics Data System (ADS)
Codello, Alessandro; Percacci, Roberto; Rachwał, Lesław; Tonero, Alberto
2016-04-01
The "exact" or "functional" renormalization group equation describes the renormalization group flow of the effective average action Γ_k. The ordinary effective action Γ_0 can be obtained by integrating the flow equation from an ultraviolet scale k=Λ down to k=0. We give several examples of such calculations at one-loop, both in renormalizable and in effective field theories. We reproduce the four-point scattering amplitude in the case of a real scalar field theory with quartic potential and in the case of the pion chiral Lagrangian. In the case of gauge theories, we reproduce the vacuum polarization of QED and of Yang-Mills theory. We also compute the two-point functions for scalars and gravitons in the effective field theory of scalar fields minimally coupled to gravity.
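The flow equation for the effective average action referred to here is commonly written, in standard notation (our transcription, not quoted from the paper), as the Wetterich equation:

```latex
k\,\partial_k \Gamma_k[\phi] \;=\; \frac{1}{2}\,
\mathrm{STr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1} k\,\partial_k R_k\right]
```

where R_k is the infrared regulator and Γ_k^{(2)} the second functional derivative of Γ_k; integrating this flow from k=Λ down to k=0, as described above, yields the ordinary effective action Γ_0.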
ERIC Educational Resources Information Center
Smith, Thomas M.; Rowley, Kristie J.
2005-01-01
During the past decade or so, popular rhetoric has shifted away from site-based management and participatory governance as the centerpiece of school reform strategies as accountability and standards-based reform have become the reform mantra of policy makers at all levels of government. Critics of accountability-based reforms have suggested that…
Enzymatic Halogenases and Haloperoxidases: Computational Studies on Mechanism and Function.
Timmins, Amy; de Visser, Sam P
2015-01-01
Despite the fact that halogenated compounds are rare in biology, a number of organisms have developed processes to utilize halogens, and in recent years a string of enzymes has been identified that selectively insert halogen atoms into, for instance, an aliphatic C-H bond. Thus, a number of natural products, including antibiotics, contain halogenated functional groups. This unusual process has great relevance to the chemical industry for stereoselective and regiospecific synthesis of haloalkanes. Currently, however, industry makes little use of biological haloperoxidases and halogenases, but efforts are underway to understand their catalytic mechanisms so that their catalytic function can be scaled up. In this review, we summarize experimental and computational studies on the catalytic mechanism of a range of haloperoxidases and halogenases with structurally very different catalytic features and cofactors. This chapter gives an overview of heme-dependent haloperoxidases, nonheme vanadium-dependent haloperoxidases, and flavin adenine dinucleotide-dependent haloperoxidases. In addition, we discuss the S-adenosyl-l-methionine fluorinase and nonheme iron/α-ketoglutarate-dependent halogenases. In particular, computational efforts have been applied extensively for several of these haloperoxidases and halogenases and have given insight into the essential structural features that enable these enzymes to perform the unusual halogen atom transfer to substrates. PMID:26415843
Range, Doppler and astrometric observables computed from Time Transfer Functions: a survey
NASA Astrophysics Data System (ADS)
Hees, A.; Bertone, S.; Le Poncin-Lafitte, C.; Teyssandier, P.
2015-08-01
Determining range, Doppler and astrometric observables is of crucial interest for modelling and analyzing space observations. We recall how these observables can be computed when the travel time of a light ray is known as a function of the positions of the emitter and the receiver for a given instant of reception (or emission). For a long time, such a function, called a reception (or emission) time transfer function, has been almost exclusively calculated by integrating the null geodesic equations describing the light rays. However, other methods avoiding such an integration have been considerably developed in the last twelve years. We give a survey of the analytical results obtained with these new methods up to the third order in the gravitational constant G for a mass monopole. We briefly discuss the case of quasi-conjunctions, where higher-order enhanced terms must be taken into account for correctly calculating the effects. We summarize the results obtained at the first order in G when the multipole structure and the motion of an axisymmetric body are taken into account. We present some applications to ongoing or future missions like Gaia and Juno. We give a short review of the recent works devoted to the numerical estimates of the time transfer functions and their derivatives.
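At first order in G for a mass monopole, the time transfer function has the familiar closed form: the Euclidean range over c plus the Shapiro term. A minimal sketch in our own notation (not the authors' code; heliocentric coordinates in meters are an assumption of the example):

```python
# First-order (in G) reception time transfer function for a mass monopole:
# T = R/c + (2GM/c^3) * ln[(r_e + r_r + R) / (r_e + r_r - R)]
import math

C = 299792458.0            # speed of light, m/s
GM_SUN = 1.32712440018e20  # heliocentric gravitational constant, m^3/s^2

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def time_transfer(x_emit, x_recv, gm=GM_SUN):
    """Coordinate light time from x_emit to x_recv (heliocentric, meters)."""
    R = norm([a - b for a, b in zip(x_recv, x_emit)])
    r_e, r_r = norm(x_emit), norm(x_recv)
    shapiro = (2.0 * gm / C ** 3) * math.log((r_e + r_r + R) / (r_e + r_r - R))
    return R / C + shapiro
```

Near conjunction the log argument becomes large and, as the abstract notes, higher-order enhanced terms must be added for this first-order expression to remain accurate.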
The Time Transfer Functions: an efficient tool to compute range, Doppler and astrometric observables
NASA Astrophysics Data System (ADS)
Hees, A.; Bertone, S.; Le Poncin-Lafitte, C.; Teyssandier, P.
2015-12-01
Determining range, Doppler and astrometric observables is of crucial interest for modelling and analyzing space observations. We recall how these observables can be computed when the travel time of a light ray is known as a function of the positions of the emitter and the receiver for a given instant of reception (or emission). For a long time, such a function, called a reception (or emission) time transfer function, has been almost exclusively calculated by integrating the null geodesic equations describing the light rays. However, other methods avoiding such an integration have been considerably developed in the last twelve years. We give a survey of the analytical results obtained with these new methods up to the third order in the gravitational constant G for a mass monopole. We briefly discuss the case of quasi-conjunctions, where higher-order enhanced terms must be taken into account for correctly calculating the effects. We summarize the results obtained at the first order in G when the multipole structure and the motion of an axisymmetric body are taken into account. We present some applications to ongoing or future missions like Gaia and Juno. We give a short review of the recent works devoted to the numerical estimates of the time transfer functions and their derivatives.
Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L.
2013-01-01
The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. A future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties.

Future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties. PMID:24385957
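The two quantities defined above, binary Merge and the Degree of Merger as the maximum depth of merged subtrees, admit a very small formalization. This is our own toy rendering for illustration (nested 2-tuples standing in for syntactic objects), not the authors' model.

```python
# Toy formalization (ours): binary Merge as a nested 2-tuple, and the Degree
# of Merger (DoM) as the maximum depth of merged subtrees in a given domain.

def merge(a, b):
    """Merge combines two words or phrases into a larger structure."""
    return (a, b)

def degree_of_merger(node):
    """Maximum nesting depth of Merge applications; a bare word has DoM 0."""
    if not isinstance(node, tuple):
        return 0
    return 1 + max(degree_of_merger(child) for child in node)

# "[[the boy] [ate [an apple]]]" built by successive Merges:
np = merge("the", "boy")
vp = merge("ate", merge("an", "apple"))
s = merge(np, vp)
```

Under this rendering, deeper embedding (e.g. an object-relative clause versus a subject-relative one) yields a larger DoM, which is the kind of parametric complexity measure the fMRI analysis correlates with activation.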
Zendehrouh, Sareh
2015-11-01
Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. PMID:26339919
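The model-free versus model-based distinction drawn above can be made concrete on a toy task. The sketch below is our own construction, not the paper's model: a model-based controller (value iteration on a known model) and a model-free habit learner (tabular Q-learning) converge to the same greedy policy on a small deterministic chain.

```python
# Minimal illustration (ours) of the two RL controllers on a 4-state chain:
# move right to reach the rewarded goal state. The "goal-directed" controller
# plans with the known model; the "habitual" one learns from trial and error.
import random

N_STATES, GOAL, GAMMA = 4, 3, 0.9
ACTIONS = (0, 1)  # 0 = left, 1 = right

def step(s, a):
    """Known deterministic model: reward 1 on entering the goal state."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0)

def model_based_policy():
    """Goal-directed controller: value iteration using the known model."""
    V = [0.0] * N_STATES
    for _ in range(100):
        for s in range(GOAL):
            V[s] = max(r + GAMMA * V[s2] for s2, r in (step(s, a) for a in ACTIONS))
    return [max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
            for s in range(GOAL)]

def model_free_policy(episodes=2000, alpha=0.5, eps=0.3, seed=0):
    """Habitual controller: tabular Q-learning from sampled experience."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q[s][a])
            s2, r = step(s, a)
            Q[s][a] += alpha * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)]
```

Both controllers end up preferring "right" everywhere; dual-system models like the one reviewed here arbitrate between such controllers (and monitor their conflicts) rather than running only one.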
An Evolutionary Computation Approach to Examine Functional Brain Plasticity.
Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G
2016-01-01
One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
An Evolutionary Computation Approach to Examine Functional Brain Plasticity
Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.
2016-01-01
One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback of this approach is that much information is lost by averaging heterogeneous voxels; consequently, a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
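The averaging problem described in this abstract can be made concrete with a small synthetic example. The sketch below uses invented data, and the exhaustive subset search is only a toy stand-in for the authors' evolutionary search (it also ignores the spatial-connectedness constraint): when two sub-regions of an ROI carry opposite-signed copies of a shared signal, the ROI-average correlation is near zero even though a sub-region pair is strongly coupled.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
T = 200        # time points per session
N_VOX = 10     # voxels per ROI (tiny, so exhaustive search is feasible)

# Hypothetical scenario: in ROI A, voxels 0-2 track a shared signal and
# voxels 3-5 track its negative, so the two cancel in the ROI average;
# in ROI B, voxels 0-2 track the shared signal.
shared = rng.standard_normal(T)
roi_a = rng.standard_normal((N_VOX, T))
roi_b = rng.standard_normal((N_VOX, T))
roi_a[0:3] = 0.9 * shared + 0.45 * rng.standard_normal((3, T))
roi_a[3:6] = -0.9 * shared + 0.45 * rng.standard_normal((3, T))
roi_b[0:3] = 0.9 * shared + 0.45 * rng.standard_normal((3, T))

def roi_average_corr(a, b):
    """Traditional approach: correlate the ROI-average time series."""
    return np.corrcoef(a.mean(axis=0), b.mean(axis=0))[0, 1]

def best_subregion_corr(a, b, k=3):
    """Exhaustive stand-in for the evolutionary search: the strongest
    correlation between the averages of any k-voxel subset of each ROI."""
    best = -1.0
    for sa in combinations(range(N_VOX), k):
        for sb in combinations(range(N_VOX), k):
            r = np.corrcoef(a[list(sa)].mean(0), b[list(sb)].mean(0))[0, 1]
            best = max(best, r)
    return best

r_avg = roi_average_corr(roi_a, roi_b)
r_sub = best_subregion_corr(roi_a, roi_b)
print(f"ROI-average r = {r_avg:+.2f}, best sub-region r = {r_sub:+.2f}")
```

The exhaustive search scales combinatorially with ROI size, which is why a heuristic search such as EC is needed for real voxel counts.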
Computer Modeling of Protocellular Functions: Peptide Insertion in Membranes
NASA Technical Reports Server (NTRS)
Rodriquez-Gomez, D.; Darve, E.; Pohorille, A.
2006-01-01
Lipid vesicles became the precursors to protocells by acquiring the capabilities needed to survive and reproduce. These include transport of ions, nutrients, and waste products across cell walls and capture of energy and its conversion into a chemically usable form. In modern organisms these functions are carried out by membrane-bound proteins (about 30% of the genome codes for proteins of this kind). A number of properties of alpha-helical peptides suggest that their associations are excellent candidates for protobiological precursors of proteins. In particular, some simple alpha-helical peptides can aggregate spontaneously and form functional channels. This process can be described conceptually by a three-step thermodynamic cycle: 1 - folding of helices at the water-membrane interface, 2 - helix insertion into the lipid bilayer, and 3 - specific interactions of these helices that result in functional tertiary structures. Although a crucial step, helix insertion has not been adequately studied because of the insolubility and aggregation of hydrophobic peptides. In this work, we use computer simulation methods (molecular dynamics) to characterize the energetics of helix insertion, and we discuss its importance in an evolutionary context. Specifically, helices could self-assemble only if their interactions were sufficiently strong to compensate for the unfavorable free energy of insertion of individual helices into membranes, providing a selection mechanism for protobiological evolution.
Computation of Multimodal Size-Velocity-Temperature Spray Distribution Functions
NASA Astrophysics Data System (ADS)
Archambault, Mark R.
2002-09-01
An alternative approach to modeling spray flows, one that does not involve simulation or stochastic integration, is to directly compute the evolution of the probability density function (PDF) describing the drops. The purpose of this paper is to continue exploring this alternative method of solving the spray flow problem. The approach is to derive and solve a set of Eulerian moment transport equations for the quantities of interest in the spray, coupled with the appropriate gas-phase (Eulerian) equations. A second purpose is to continue to explore how a maximum-entropy criterion may be used to provide closure for such a moment-based model. The hope is to further develop an Eulerian-Eulerian model that will permit one to solve for detailed droplet statistics directly, without the use of stochastic integration or post-averaging of simulations.
Imaging local brain function with emission computed tomography
Kuhl, D.E.
1984-03-01
Positron emission tomography (PET) using 18F-fluorodeoxyglucose (FDG) was used to map local cerebral glucose utilization in the study of local cerebral function. This information differs fundamentally from structural assessment by means of computed tomography (CT). In normal human volunteers, the FDG scan was used to determine the cerebral metabolic response to controlled sensory stimulation and the effects of aging. Cerebral metabolic patterns are distinctive among depressed and demented elderly patients. The FDG scan appears normal in the depressed patient and is studded with multiple metabolic defects in patients with multi-infarct dementia; in patients with Alzheimer disease, metabolism is particularly reduced in the parietal cortex but only slightly reduced in the caudate and thalamus. The interictal FDG scan effectively detects hypometabolic brain zones that are sites of onset for seizures in patients with partial epilepsy, even though these zones usually appear normal on CT scans. The future prospects of PET are discussed.
Optimizing high performance computing workflow for protein functional annotation.
Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene
2014-09-10
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. Based on the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296
Computationally Effective Fault Detection by Means of Signature Functions
Baranski, Przemyslaw; Pietrzak, Piotr
2016-01-01
The paper presents a computationally effective method for fault detection. A system's responses are measured under healthy and faulty conditions. These signals are used to calculate so-called signature functions that create a signal space. The current system's response is projected into this space, and its location in the space makes it straightforward to determine the fault. No classifier such as a neural network or hidden Markov model is required. The advantage of the proposed method is its efficiency, as computing projections amounts to calculating dot products. The method is therefore suitable for real-time embedded systems: its simplicity and undemanding processing requirements permit the use of low-cost hardware and allow rapid implementation. The approach performs well for systems that can be considered linear and stationary. The communication presents an application in which an industrial moulding process is supervised. The machine is composed of forms (dies) whose alignment must be precisely set and maintained during operation. Typically, the process is stopped periodically to check the alignment manually. The algorithm allows on-line monitoring of the device by analysing the acceleration signal from a sensor mounted on a die, enabling failures to be detected at an early stage and thus prolonging the machine's life. PMID:26949942
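The projection idea in this abstract can be sketched in a few lines. In the toy example below, the signals, the frequency-shift fault model, and the use of exactly one signature per condition are all invented for illustration; the point is only that classification reduces to one dot product per signature, with no trained classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
t = np.arange(N) / N

def response(fault, noise=0.05):
    """Toy system response: a (hypothetical) fault shifts the dominant
    vibration frequency from 8 to 12 cycles per record."""
    f = 12.0 if fault else 8.0
    return np.sin(2 * np.pi * f * t) + noise * rng.standard_normal(N)

# Signature functions: unit-norm reference responses recorded once under
# known healthy / faulty conditions.
signatures = {}
for label, fault in [("healthy", False), ("faulty", True)]:
    s = response(fault, noise=0.0)
    signatures[label] = s / np.linalg.norm(s)

def classify(x):
    """Project the current response onto the signature space: a single
    dot product per signature, cheap enough for low-cost embedded hardware."""
    scores = {lbl: abs(np.dot(x, s)) for lbl, s in signatures.items()}
    return max(scores, key=scores.get)

pred_h = classify(response(fault=False))
pred_f = classify(response(fault=True))
print(pred_h, pred_f)
```

Because the two reference tones complete whole numbers of cycles over the record, the signatures are nearly orthogonal, which is what makes the dot-product scores discriminative.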
Assessing executive function using a computer game: computational modeling of cognitive processes.
Hagler, Stuart; Jimison, Holly Brugge; Pavel, Misha
2014-07-01
Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project, we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper trail making test (TMT), which is known to measure executive function as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and to use the parameter values to estimate TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home. PMID:25014944
Chemical Visualization of Boolean Functions: A Simple Chemical Computer
NASA Astrophysics Data System (ADS)
Blittersdorf, R.; Müller, J.; Schneider, F. W.
1995-08-01
We present a chemical realization of the Boolean functions AND, OR, NAND, and NOR with a neutralization reaction carried out in three coupled continuous-flow stirred tank reactors (CSTRs). Two of these CSTRs are used as input reactors; the third reactor marks the output. The chemical reaction is the neutralization of hydrochloric acid (HCl) with sodium hydroxide (NaOH) in the presence of phenolphthalein as an indicator, which is red in alkaline solutions and colorless in acidic solutions, representing the two binary states 1 and 0, respectively. The time required for a "chemical computation" is determined by the flow rate of reactant solutions into the reactors, since the neutralization reaction itself is very fast. While the acid flow to all reactors is equal and constant, the flow rate of NaOH solution controls the states of the input reactors. The connectivities between the input and output reactors determine the flow rate of NaOH solution into the output reactor, according to the chosen Boolean function. Thus the state of the output reactor depends on the states of the input reactors.
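The logic of the coupled-CSTR scheme can be captured in a simple threshold model: the output reactor turns alkaline (state 1, phenolphthalein red) exactly when its total NaOH inflow exceeds the fixed HCl inflow. The flow-rate weights below are illustrative choices that reproduce the four gates, not the paper's actual settings.

```python
# Threshold model of the three-reactor logic gates. Each gate is encoded
# as (base NaOH flow into the output reactor, additional NaOH flow per
# active input reactor, HCl inflow acting as the threshold). All numbers
# are illustrative, in arbitrary flow units.
GATES = {
    "AND":  (0.0, 1.0, 1.5),   # alkaline only when both inputs contribute
    "OR":   (0.0, 1.0, 0.5),   # one active input is enough
    "NAND": (2.0, -1.0, 0.5),  # active inputs *divert* NaOH away
    "NOR":  (2.0, -1.0, 1.5),
}

def gate(name, x1, x2):
    base, w, acid = GATES[name]
    naoh_flow = base + w * (x1 + x2)
    return int(naoh_flow > acid)   # 1 = alkaline/red, 0 = acidic/colorless

for name in GATES:
    table = [gate(name, a, b) for a in (0, 1) for b in (0, 1)]
    print(name, table)
```

The NAND/NOR rows show why connectivity matters: the same two input states yield different outputs purely through how the NaOH flow into the output reactor is wired.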
Accounting for Accountability.
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver. Cooperative Accountability Project.
This publication reports on two Regional Educational Accountability Conferences on Techniques sponsored by the Cooperative Accountability Project. Accountability is described as an "emotionally-charged issue" and an "operationally demanding concept." Overviewing accountability, major speakers emphasized that accountability is a means toward…
Memory and Generativity in Very High Functioning Autism: A Firsthand Account, and an Interpretation
ERIC Educational Resources Information Center
Boucher, Jill
2007-01-01
JS is a highly able person with Asperger syndrome whose language and intellectual abilities are, and always have been, superior. The first part of this short article consists of JS's analytical account of his atypical memory abilities, and the strategies he uses for memorizing and learning. JS has also described specific difficulties with creative…
A survey. Financial accounting and internal control functions pursued by hospital boards.
Gavin, T A
1984-09-01
Justification for a board committee's existence is its ability to devote time to issues judged important by the full board. This seems to have happened. Multiple committees pursue more functions than the other committee structures. Boards lacking an FA/IC committee pursue significantly fewer functions than their counterparts with committees. Substantial respondent agreement exists on those functions most and least frequently pursued, those perceived to be most and least important, and those perceived to be most and least effectively undertaken. Distinctions between committee structures and the full board, noted in the previous paragraph, hold true with respect to the importance of functions. All board structures identified reviewing the budget and comparing it to actual results as important. Committee structures are generally more inclined to address functions related to the work of the independent auditor and the effectiveness of the hospital's systems and controls than are full board structures. Functions related to the internal auditor are pursued least frequently by all FA/IC board structures. The following suggestions are made to help boards pay adequate attention to and obtain objective information about the financial affairs of their hospitals. Those boards that do not have some form of an FA/IC committee should consider starting one. Evidence shows chief financial officers have been a moving force in establishing and strengthening such committees. Boards having a joint or single committee structure should consider upgrading their structure to a single committee or multiple committees, respectively. The complexity of the healthcare environment requires that more FA/IC functions be addressed by the board. The board or its FA/IC committee(s) should meet with their independent CPAs, fiscal intermediary auditors, and internal auditors. Where the hospital lacks an internal audit function, a study should be undertaken to determine the feasibility of
HANOIPC3: a computer program to evaluate executive functions.
Guevara, M A; Rizo, L; Ruiz-Díaz, M; Hernández-González, M
2009-08-01
This article describes a computer program (HANOIPC3) based on the Tower of Hanoi game that, by analyzing a series of parameters during execution, allows fast and accurate evaluation of data related to certain executive functions, especially planning, organizing, and problem-solving. This computerized version has only one level of difficulty, based on the use of 3 disks, but it stipulates an additional rule: only one disk may be moved at a time, and only to an adjacent peg (i.e., no peg can be skipped over). In the original version, without this stipulation, the minimum number of movements required to complete the task is 7; under the conditions of this computerized version it increases to 26. HANOIPC3 has three important advantages: (1) it allows a researcher or clinician to modify the rules by adding or removing conditions, augmenting its utility and flexibility in test execution and in the interpretation of results; (2) it provides on-line feedback to subjects about their execution; and (3) it creates a specific file to store the scores that correspond to the parameters obtained during trials. The parameters that can be measured include latencies (time taken for each movement, measured in seconds), total test time, total number of movements, and the number of correct and incorrect movements. The efficacy and adaptability of this program have been confirmed. PMID:19303660
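The 26-move figure quoted in the abstract can be checked independently. The sketch below (not the HANOIPC3 program itself) runs a breadth-first search over the adjacent-move variant of the 3-disk puzzle; the adjacent-peg restriction is known to raise the optimal solution from 2^n - 1 to 3^n - 1 moves.

```python
from collections import deque

def min_moves_adjacent_hanoi(n_disks=3):
    """Shortest solution of the Tower of Hanoi variant in which a disk may
    only move to an adjacent peg (peg 0 <-> 1 <-> 2, never 0 <-> 2),
    found by breadth-first search over all peg assignments."""
    start = (0,) * n_disks            # peg index of each disk, smallest first
    goal = (2,) * n_disks
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, dist = frontier.popleft()
        if state == goal:
            return dist
        for src in range(3):
            disks = [d for d in range(n_disks) if state[d] == src]
            if not disks:
                continue
            top = min(disks)                    # smallest disk on peg `src`
            for dst in (src - 1, src + 1):      # adjacent pegs only
                if not 0 <= dst <= 2:
                    continue
                # legal only if no smaller disk already sits on `dst`
                if any(state[d] == dst for d in range(top)):
                    continue
                nxt = state[:top] + (dst,) + state[top + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))

print(min_moves_adjacent_hanoi(3))   # 26, as stated in the abstract
```

Since BFS explores states in order of distance, the first time the goal is reached gives the true minimum, confirming the jump from 7 to 26 moves.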
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Center on Education and Training for Employment.
This publication identifies 20 subjects appropriate for use in a competency list for the occupation of accounting specialist, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 20 units are as follows:…
pH-Regulated Mechanisms Account for Pigment-Type Differences in Epidermal Barrier Function
Gunathilake, Roshan; Schurer, Nanna Y.; Shoo, Brenda A.; Celli, Anna; Hachem, Jean-Pierre; Crumrine, Debra; Sirimanna, Ganga; Feingold, Kenneth R.; Mauro, Theodora M.; Elias, Peter M.
2009-01-01
To determine whether pigment type determines differences in epidermal function, we studied stratum corneum (SC) pH, permeability barrier homeostasis, and SC integrity in three geographically disparate populations with pigment type I–II versus IV–V skin (Fitzpatrick I–VI scale). Type IV–V subjects showed: (i) lower surface pH (≈0.5 U); (ii) enhanced SC integrity (transepidermal water loss change with sequential tape strippings); and (iii) more rapid barrier recovery than type I–II subjects. Enhanced barrier function could be ascribed to increased epidermal lipid content, increased lamellar body production, and reduced acidity, leading to enhanced lipid processing. Compromised SC integrity in type I–II subjects could be ascribed to increased serine protease activity, resulting in accelerated desmoglein-1 (DSG-1)/corneodesmosome degradation. In contrast, DSG-1-positive CDs persisted in type IV–V subjects, but due to enhanced cathepsin-D activity, SC thickness did not increase. Adjustment of pH of type I–II SC to type IV–V levels improved epidermal function. Finally, dendrites from type IV–V melanocytes were more acidic than those from type I–II subjects, and they transfer more melanosomes to the SC, suggesting that melanosome secretion could contribute to the more acidic pH of type IV–V skin. These studies show marked pigment-type differences in epidermal structure and function that are pH driven. PMID:19177137
ERIC Educational Resources Information Center
Dempsey, Lynn; Skarakis-Doyle, Elizabeth
2010-01-01
The conceptual framework of the World Health Organization's International Classification of Functioning, Disability and Health (ICF) has the potential to advance understanding of developmental language impairment (LI) and enhance clinical practice. The framework provides a systematic way of unifying numerous lines of research, which have linked a…
Calibration function for the Orbitrap FTMS accounting for the space charge effect.
Gorshkov, Mikhail V; Good, David M; Lyutvinskiy, Yaroslav; Yang, Hongqian; Zubarev, Roman A
2010-11-01
Ion storage in an electrostatic trap has been implemented with the introduction of the Orbitrap Fourier transform mass spectrometer (FTMS), which demonstrates performance similar to high-field ion cyclotron resonance MS. Its high mass-spectral quality resulted in rapid acceptance of the Orbitrap FTMS for life sciences applications. The basics of Orbitrap operation are well documented; however, as in any ion-trap MS technology, its performance is limited by interactions between the ion clouds. These interactions result in ion cloud couplings, systematic errors in measured masses, interference between ion clouds of different size yet with close m/z ratios, etc. In this work, we have characterized the space-charge effect on the measured frequency for the Orbitrap FTMS, exploring the possibility of achieving sub-ppm levels of mass measurement accuracy (MMA) for peptides over a wide range of total ion population. As a result of this characterization, we propose an m/z calibration law for the Orbitrap FTMS that accounts for the total ion population present in the trap during a data acquisition event. Using this law, we were able to achieve a zero-space-charge MMA limit of 80 ppb for the commercial Orbitrap FTMS system and a sub-ppm level of MMA over a wide range of total ion populations, with automatic gain control values varying from 10 to 10^7. PMID:20696596
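The idea of a population-aware calibration can be sketched numerically. The functional form and all constants below are invented for illustration (the paper's actual calibration law is not reproduced here): assume space charge lowers the measured axial frequency in proportion to the total ion population N, so that f = sqrt(A / (m/z)) - B*N, and fit A and B from calibrant measurements taken at several populations.

```python
import numpy as np

# Hypothetical law: f_measured = sqrt(A / (m/z)) - B * N
# Inverting for mass:  m/z = A / (f_measured + B * N)**2
A_TRUE, B_TRUE = 4.0e13, 2.0e-4   # invented constants for the demo

def observed_frequency(mz, n_ions):
    return np.sqrt(A_TRUE / mz) - B_TRUE * n_ions

# Calibrant measurements: known m/z values at several total ion populations
mz_cal = np.array([500.0, 800.0, 1200.0, 2000.0])
n_cal = np.array([1e5, 5e5, 1e6, 5e6])
mz_grid, n_grid = np.meshgrid(mz_cal, n_cal)
f_obs = observed_frequency(mz_grid, n_grid)

# The law is linear in the unknowns sqrt(A) and B:
#     f_obs = sqrt(A) * (1 / sqrt(mz)) - B * N
X = np.column_stack([1.0 / np.sqrt(mz_grid.ravel()), -n_grid.ravel()])
coef, *_ = np.linalg.lstsq(X, f_obs.ravel(), rcond=None)
A_fit, B_fit = coef[0] ** 2, coef[1]

# Mass assignment for an unknown ion at heavy trap load, with and
# without the population term in the calibration
f = observed_frequency(1000.0, 4e6)
mz_naive = A_fit / f**2                        # ignores space charge
mz_corrected = A_fit / (f + B_fit * 4e6)**2    # population-aware law
print(f"naive: {mz_naive:.3f}, corrected: {mz_corrected:.6f}")
```

With the population term included, the synthetic mass error collapses to the numerical noise of the fit, which is the qualitative behavior the abstract reports across AGC settings.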
Sacchi, Emanuele; Sayed, Tarek
2014-11-01
Collision modification factors (CMFs) are commonly used to quantify the impact of safety countermeasures. The CMFs obtained from observational before-after (BA) studies are usually estimated by averaging the safety impact (i.e., the index of effectiveness) for a group of treatment sites. The heterogeneity among the treatment locations, in terms of their characteristics, and the effect of this heterogeneity on treatment effectiveness are usually ignored. This is in contrast to treatment evaluations in other fields, such as medical statistics, where variations in the magnitude (or the direction) of response to the same treatment given to different patients are considered. This paper introduces an approach for estimating a CMFunction from BA safety studies that accounts for variable treatment-location characteristics (heterogeneity). The treatment-site heterogeneity was incorporated into the CMFunction using fixed-effects and random-effects regression models. In addition to heterogeneity, the paper also advocates the use of CMFunctions with a time variable to acknowledge that safety treatment (intervention) effects do not occur instantaneously but are spread over future time. This is achieved using non-linear intervention (Koyck) models, developed within a hierarchical full Bayes (FB) context. To demonstrate the approach, a case study is presented to evaluate the safety effectiveness of the "Signal Head Upgrade Program" recently implemented in the city of Surrey (British Columbia, Canada), where signal visibility was improved at several urban signalized intersections. The results demonstrated the importance of considering treatment-site heterogeneity and time trends when developing CMFunctions. PMID:25033279
Elusive accountabilities in the HIV scale-up: 'ownership' as a functional tautology.
Esser, Daniel E
2014-01-01
Mounting concerns over aid effectiveness have rendered 'ownership' a central concept in the vocabulary of development assistance for health (DAH). The article investigates the application of both 'national ownership' and 'country ownership' in the broader development discourse as well as more specifically in the context of internationally funded HIV/AIDS interventions. Based on comprehensive literature reviews, the research uncovers a multiplicity of definitions, most of which either deviate from or plainly contradict the concept's original meaning and intent. During the last 10 years in particular, it appears that both public and private donors have advocated for greater 'ownership' by recipient governments and countries to hedge their own political risk rather than to work towards greater inclusion of the latter in agenda-setting and programming. Such politically driven semantic dynamics suggest that the concept's salience is not merely a discursive reflection of globally skewed power relations in DAH but a deliberate exercise in limiting donors' accountabilities. At the same time, the research also finds evidence that this conceptual contortion frames current global public health scholarship, thus adding further urgency to the need to critically re-evaluate the international political economy of global public health from a discursive perspective. PMID:24498888
Mukherjee, Dwaipayan; Botelho, Danielle; Gow, Andrew J.; Zhang, Junfeng; Georgopoulos, Panos G.
2013-01-01
A computational, multiscale toxicodynamic model has been developed to quantify and predict pulmonary effects due to uptake of engineered nanomaterials (ENMs) in mice. The model consists of a collection of coupled toxicodynamic modules that were independently developed and tested using information obtained from the literature. The modules were developed to describe the dynamics of tissue with explicit focus on the cells and the surfactant chemicals that regulate the process of breathing, as well as the response of the pulmonary system to xenobiotics. Alveolar type I and type II cells and alveolar macrophages were included in the model, along with surfactant phospholipids and surfactant proteins, to account for processes occurring at multiple biological scales, coupling cellular and surfactant dynamics affected by nanoparticle exposure and linking the effects to tissue-level lung function changes. Nanoparticle properties such as size, surface chemistry, and zeta potential were explicitly considered in modeling the interactions of these particles with biological media. The model predictions were compared with in vivo lung function response measurements in mice and analysis of mouse lung lavage fluid following exposures to silver and carbon nanoparticles. The predictions were found to follow the trends of observed changes in mouse surfactant composition over 7 days post dosing, and are in good agreement with the observed changes in mouse lung function over the same period of time. PMID:24312506
NASA Astrophysics Data System (ADS)
Yudin, I. P.; Perepelkin, E. E.; Tyutyunnikov, S. I.
2011-11-01
A simulation of the beam injection line of a synchrotron is performed within the Veksler and Baldin Laboratory of High Energy Physics, Joint Institute for Nuclear Research (VBLHEP JINR), project "The Development and Implementation of Units of a Synchrotron for Hadron Therapy." The parameters of the injection line are chosen for the transport of beams with intensities of 25-100 mA through the injection channel of the synchrotron, taking the space-charge effect into account. The simulation was performed using the method of macroparticles (the PIC method). Massively parallel computation on graphics processors using Compute Unified Device Architecture (CUDA) technology was applied to accelerate the computations. A 66-fold speedup was obtained using the Tesla C1060 computing module instead of a single-core CPU running at 2.4 GHz.
Enhancing functionality and performance in the PVM network computing system
Sunderam, V.
1996-09-01
The research funded by this grant is part of an ongoing research project in heterogeneous distributed computing with the PVM system, at Emory as well as at Oak Ridge Labs and the University of Tennessee. This grant primarily supports research at Emory that continues to evolve new concepts and systems in distributed computing, but it also includes the PI's ongoing interaction with the other groups in terms of collaborative research as well as software systems development and maintenance. We have continued our second-year efforts (July 1995 - June 1996) on the same topics as during the first year, namely (a) visualization of PVM programs to complement XPVM displays; (b) I/O and generalized distributed computing in PVM; and (c) evolution of a multithreaded concurrent computing model. 12 refs.
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
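The key point of this abstract, that co-occurrence texture descriptors can be computed from sums, sums of squares, and cross products without ever building the co-occurrence matrix, is easy to demonstrate. The sketch below (a toy illustration, not the authors' algorithm) computes contrast and correlation directly from the co-occurring pixel pairs and cross-checks contrast against an explicit matrix computation.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 8, size=(64, 64))   # small 8-level test image

# Horizontally co-occurring pixel pairs at offset (0, 1)
a = img[:, :-1].ravel().astype(float)
b = img[:, 1:].ravel().astype(float)
n = a.size

# Matrix-free descriptors: these follow directly from sums, sums of
# squares, and cross products over the pairs.
contrast = np.mean((a - b) ** 2)          # equals sum_{i,j} (i-j)^2 P(i,j)
corr = (np.mean(a * b) - a.mean() * b.mean()) / (a.std() * b.std())

# Cross-check contrast against the explicit co-occurrence matrix P(i, j)
P = np.zeros((8, 8))
for i, j in zip(a.astype(int), b.astype(int)):
    P[i, j] += 1
P /= n
i_idx, j_idx = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
contrast_mat = np.sum((i_idx - j_idx) ** 2 * P)

print(f"contrast: {contrast:.4f} (matrix: {contrast_mat:.4f}), corr: {corr:+.3f}")
```

For a G-level image the matrix route costs O(G^2) storage per offset; the direct route needs only a handful of running sums, which is the efficiency argument the paper makes.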
Mata, Fernanda; Sallum, Isabela; Miranda, Débora M.; Bechara, Antoine; Malloy-Diniz, Leandro F.
2013-01-01
Studies that use the Iowa Gambling Task (IGT) and its age-appropriate versions as indices of affective decision-making during childhood and adolescence have demonstrated significant individual differences in scores. Our study investigated the association between general intellectual functioning and socioeconomic status (SES) and its effect on the development of affective decision-making in preschoolers, using a computerized version of the Children's Gambling Task (CGT). We administered the CGT and the Columbia Mental Maturity Scale (CMMS) to 137 Brazilian children between the ages of 3 and 5 years to assess their general intellectual functioning. We also used the Brazilian Criterion of Economic Classification (CCEB) to assess their SES. Age differences between 3- and 4-year-olds, but not between 4- and 5-year-olds, confirmed the results obtained by Kerr and Zelazo (2004), indicating the rapid development of affective decision-making during the preschool period. Both 4- and 5-year-olds performed significantly above chance on blocks 3, 4, and 5 of the CGT, whereas the mean scores of 3-year-olds did not differ from chance. We found that general intellectual functioning was not related to affective decision-making. On the other hand, our findings showed that children with high SES performed better on the last block of the CGT than children with low SES, which indicates that children from the former group seem more likely to use information about the gain/loss aspects of the decks to choose cards efficiently from the advantageous decks throughout the task. PMID:23760222
'A Leg to Stand On' by Oliver Sacks: a unique autobiographical account of functional paralysis.
Stone, Jon; Perthen, Jo; Carson, Alan J
2012-09-01
Oliver Sacks, the well-known neurologist and writer, published his fourth book, 'A Leg to Stand On', in 1984, following an earlier essay, 'The Leg', in 1982. The book described his recovery after a fall in a remote region of Norway in which he injured his leg. Following surgery to reattach his quadriceps muscle, he experienced an emotional period in which his leg no longer felt a part of his body, and he struggled to regain his ability to walk. Sacks attributed the experience to a neurologically determined disorder of body-image and body-ego induced by peripheral injury. In the first edition of his book Sacks explicitly rejected the diagnosis of 'hysterical paralysis' as it was then understood, although he approached this diagnosis more closely in subsequent revisions. In this article we propose that, in the light of a better understanding of functional neurological symptoms, Sacks' experiences deserve to be reappraised as a unique insight into a genuinely experienced functional/psychogenic leg paralysis following injury. PMID:22872718
NASA Astrophysics Data System (ADS)
Daswani, Ujla; Sharma, Pratibha; Kumar, Ashok
2015-01-01
The benzothiazole moiety plays an important role in medicinal chemistry, with a wide range of pharmacological activities. Herein, a simple benzothiazole derivative, 2-chlorobenzothiazole (2CBT), has been analyzed. The spectroscopic properties of the target compound were examined by FT-IR (4400-450 cm-1), FT-Raman (4000-50 cm-1), and NMR techniques. The 1H and 13C NMR spectra were recorded in DMSO. Theoretical calculations were performed by ab initio Hartree-Fock and Density Functional Theory (DFT)/B3LYP methods using various basis set combinations. The scaled B3LYP/6-311++G(d,p) results agree closely with the experimental findings. Electronic absorption spectra, along with excitation energies and oscillator strengths, were obtained by the TDDFT method. Atomic charges have also been reported. The total density isosurface and the total density mapped with the electrostatic potential surface (MESP) have been shown.
Buechner, Andreas; Beynon, Andy; Szyfter, Witold; Niemczyk, Kazimierz; Hoppe, Ulrich; Hey, Matthias; Brokx, Jan; Eyles, Julie; Van de Heyning, Paul; Paludetti, Gaetano; Zarowski, Andrzej; Quaranta, Nicola; Wesarg, Thomas; Festen, Joost; Olze, Heidi; Dhooge, Ingeborg; Müller-Deile, Joachim; Ramos, Angel; Roman, Stephane; Piron, Jean-Pierre; Cuda, Domenico; Burdo, Sandro; Grolman, Wilko; Vaillard, Samantha Roux; Huarte, Alicia; Frachet, Bruno; Morera, Constantine; Garcia-Ibáñez, Luis; Abels, Daniel; Walger, Martin; Müller-Mazotta, Jochen; Leone, Carlo Antonio; Meyer, Bernard; Dillier, Norbert; Steffens, Thomas; Gentine, André; Mazzoli, Manuela; Rypkema, Gerben; Killian, Matthijs; Smoorenburg, Guido
2011-01-01
Efficacy of the SPEAK and ACE coding strategies was compared with that of a new strategy, MP3000™, by 37 European implant centers including 221 subjects. The SPEAK and ACE strategies are based on selection of the 8–10 spectral components with the highest levels, while MP3000 is based on the selection of only 4–6 components with the highest levels relative to an estimate of the spread of masking. The pulse rate per component was fixed. No significant difference was found in speech scores or in coding preference between the SPEAK/ACE and MP3000 strategies. Battery life was 24% longer for the MP3000 strategy. With MP3000 the best results were found for a selection of six components. In addition, the best results were found for a masking function with a low-frequency slope of 50 dB/Bark and a high-frequency slope of 37 dB/Bark (50/37), as compared with the other combinations examined, 40/30 and 20/15 dB/Bark. That the best results were found for the steepest slopes does not seem to agree with current estimates of the spread of masking in electrical stimulation. Future research might reveal whether performance relative to SPEAK/ACE can be enhanced by increasing the number of channels in MP3000 beyond 4–6, and it should shed more light on the optimum steepness of the slopes of the masking functions applied in MP3000. PMID:22251806
Reynolds, Michael; Besner, Derek
2005-08-01
There are pervasive lexical influences on the time that it takes to read aloud novel letter strings that sound like real words (e.g., brane from brain). However, the literature presents a complicated picture, given that the time taken to read aloud such items is sometimes shorter and sometimes longer than that for a control string (e.g., frane), and that the time to read aloud is sometimes affected by the frequency of the base word and other times is not. In the present review, we first organize these data to show that there is considerably more consistency than has previously been acknowledged. We then consider six different accounts that have been proposed to explain various aspects of these data. Four of them immediately fail in one way or another. The remaining two accounts may be able to explain these findings, but they either make counterintuitive assumptions or invoke a novel mechanism solely to explain these findings. A new account is advanced that is able to explain all of the effects reviewed here and has none of the problems associated with the other accounts. According to this account, different types of lexical knowledge are used when pseudohomophones and nonword controls are read aloud in mixed and pure lists. This account is then implemented in Coltheart, Rastle, Perry, Langdon, and Ziegler's (2001) dual route cascaded model in order to provide an existence proof that it accommodates all of the effects, while retaining the ability to simulate three standard effects seen in nonword reading aloud. PMID:16447376
Challenges in computational studies of enzyme structure, function and dynamics.
Carvalho, Alexandra T P; Barrozo, Alexandre; Doron, Dvir; Kilshtain, Alexandra Vardi; Major, Dan Thomas; Kamerlin, Shina Caroline Lynn
2014-11-01
In this review we give an overview of the field of computational enzymology. We start by describing the birth of the field, with emphasis on the work of the 2013 Nobel Laureates in Chemistry. We then present key features of the state of the art in the field, showing what theory, accompanied by experiments, has taught us so far about enzymes. We also briefly describe computational methods, such as quantum mechanics/molecular mechanics approaches, reaction coordinate treatment, and free energy simulation approaches. We conclude by discussing open questions and challenges. PMID:25306098
Aguilar, I; Tsuruta, S; Misztal, I
2010-06-01
Data included 90,242,799 test day records from first, second and third parities of 5,402,484 Holstein cows and 9,326,754 animals in the pedigree. Additionally, daily temperature-humidity indexes (THI) from 202 weather stations were available. The fixed effects included herd test day, age at calving, milking frequency and days in milk (DIM) classes. Random effects were additive genetic, permanent environment and herd-year, all fit as random regressions. Covariates included linear splines with four knots at 5, 50, 200 and 305 DIM and a function of THI. Mixed model equations were solved using an iteration-on-data program with a preconditioned conjugate gradient algorithm. Preconditioners used were diagonal (D), block diagonal due to traits (BT) and block diagonal due to traits and correlated effects (BTCORR). One run included BT with a 'diagonalized' model in which the random effects were reparameterized for diagonal (co)variance matrices among traits (BTDIAG). Memory requirements were 8.7 Gb for D, 10.4 Gb for BT and BTDIAG, and 24.3 Gb for BTCORR. Computing times (rounds) were 14 days (952) for D, 10.7 days (706) for BT, 7.7 days (494) for BTDIAG and 4.6 days (289) for BTCORR. The convergence pattern was strongly influenced by the choice of fixed effects. When sufficient memory is available, the option BTCORR is the fastest and simplest to implement; the next most efficient method, BTDIAG, requires additional steps for diagonalization and back-diagonalization. PMID:20536641
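The solver above combines iteration on data with a preconditioned conjugate gradient; as a minimal sketch of its simplest variant, the diagonal preconditioner (D), the following Python snippet solves a tiny symmetric positive-definite system. The function `pcg`, its arguments, and the toy system are illustrative assumptions, not code from the paper.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Conjugate gradient preconditioned by a diagonal (Jacobi) matrix,
    supplied as the elementwise inverse of diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    z = M_inv_diag * r             # apply preconditioner M^{-1} r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1        # solution and number of rounds
        z = M_inv_diag * r
        rz_new = r @ z
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x, max_iter

# toy SPD system standing in for the mixed model equations
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, rounds = pcg(A, b, 1.0 / np.diag(A))
```

The block-diagonal variants (BT, BTCORR) replace the elementwise scaling with small dense solves per trait block, trading memory for fewer rounds, as the timings in the abstract show.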
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
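The Fortran source is not reproduced in this summary, but the same functionality is available today in SciPy; a short sketch of the distributions and generators the report names (the specific quantiles chosen here are illustrative, not from the report):

```python
import numpy as np
from scipy import stats

# Cumulative probabilities from several distributions covered by the report
p_normal = stats.norm.cdf(1.96)             # Gaussian (normal)
p_chi2 = stats.chi2.cdf(3.84, df=1)         # chi-square
p_t = stats.t.cdf(2.0, df=10)               # Student's t
p_f = stats.f.cdf(4.0, dfn=2, dfd=20)       # Snedecor's F
p_weibull = stats.weibull_min.cdf(1.0, c=1.5)

# Uniform and normal generators feeding other distributions,
# here via the inverse-CDF transform to the exponential
rng = np.random.default_rng(seed=42)
u = rng.uniform(size=1000)
exp_samples = -np.log(1.0 - u)
```

The inverse-CDF trick in the last lines mirrors the report's design, in which the uniform and normal generators are combined with the distribution routines to sample from other laws.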
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
A method to account for outliers in the development of safety performance functions.
El-Basyouny, Karim; Sayed, Tarek
2010-07-01
Accident data sets can include some unusual data points that are not typical of the rest of the data. The presence of these data points (usually termed outliers) can have a significant impact on the estimates of the parameters of safety performance functions (SPFs). Few studies have considered outlier analysis in the development of SPFs. In these studies, the practice has been to identify and then exclude outliers from further analysis. This paper introduces alternative mixture models based on the multivariate Poisson lognormal (MVPLN) regression. The proposed approach presents outlier-resistance modeling techniques that provide robust safety inferences by down-weighting the outlying observations rather than rejecting them. The first proposed model is a scale-mixture model that is obtained by replacing the normal distribution in the Poisson-lognormal hierarchy by the Student t distribution, which has heavier tails. The second model is a two-component mixture (contaminated normal model) in which it is assumed that most of the observations come from a basic distribution, whereas the remaining few outliers arise from an alternative distribution that has a larger variance. The results indicate that the estimates of the extra-Poisson variation parameters were considerably smaller under the mixture models, leading to higher precision. Also, both mixture models identified the same set of outliers. In terms of goodness of fit, both mixture models outperformed the MVPLN. The outlier-rejecting MVPLN model provided a superior fit in terms of a much smaller DIC and standard deviations for the parameter estimates. However, this approach tends to underestimate uncertainty by producing too-small standard deviations for the parameter estimates, which may lead to incorrect conclusions. It is recommended that the proposed outlier-resistance modeling techniques be used unless the exclusion of the outlying observations can be justified for data-related reasons.
A Functional Analytic Approach to Computer-Interactive Mathematics
ERIC Educational Resources Information Center
Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M.; Ninness, Sharon K.
2005-01-01
Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on…
A Computational Framework Discovers New Copy Number Variants with Functional Importance
Banerjee, Samprit; Oldridge, Derek; Poptsova, Maria; Hussain, Wasay M.; Chakravarty, Dimple; Demichelis, Francesca
2011-01-01
Structural variants that cause changes in copy number constitute an important component of genomic variability. They account for 0.7% of genomic differences between two individual genomes, of which copy number variants (CNVs) are the largest component. A recent population-based CNV study revealed the need for better characterization of CNVs, especially the small ones (<500 bp). We propose a three-step computational framework (Identification of germline Changes in Copy Number, or IgC2N) to discover and genotype germline CNVs. First, we detect candidate CNV loci by combining information across multiple samples without imposing restrictions on the number of coverage markers or on the variant size. Second, we fine-tune the detection of rare variants and infer the putative copy number classes for each locus. Last, for each variant we combine the relative distance between consecutive copy number classes with genetic information in a novel attempt to estimate the reference model bias. This computational approach is applied to genome-wide data from 1250 HapMap individuals. Novel variants were discovered and characterized in terms of size, minor allele frequency, type of polymorphism (gains, losses or both), and mechanism of formation. Using data generated for a subset of individuals by a 42 million marker platform, we validated the majority of the variants; the highest validation rate (66.7%) was for variants larger than 1 kb. Finally, we queried transcriptomic data from 129 individuals determined by RNA-sequencing as further validation and to assess the functional role of the new variants. We investigated the possible enrichment for variants' regulatory effects and found that smaller variants (<1 kb) are more likely to regulate gene transcripts than larger variants (p-value = 2.04e-08). Our results support the validity of the computational framework to detect novel variants relevant to disease susceptibility studies and provide evidence of the importance of
NASA Astrophysics Data System (ADS)
Shapoval, V. M.; Sinyukov, Yu. M.; Naboka, V. Yu.
2015-10-01
The theoretical analysis of the p̄-Λ ⊕ p-Λ̄ correlation function in the 10% most central Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC) energy √s_NN = 200 GeV shows that the contribution of residual correlations is a necessary factor for obtaining a satisfactory description of the experimental data. Neglecting the residual correlation effect leads to an unrealistically low source radius, about 2 times smaller than the corresponding value for the p-Λ ⊕ p̄-Λ̄ case, when one fits the experimental correlation function within the Lednický-Lyuboshitz analytical model. Recently an approach that accounts effectively for residual correlations in the baryon-antibaryon correlation function was proposed, and a good description of the RHIC data was reached with the source radius extracted from the hydrokinetic model (HKM). The p̄-Λ scattering length, as well as the parameters characterizing the residual correlation effect (the annihilation dip amplitude and its inverse width), were extracted from the corresponding fit. In this paper we use these extracted values, together with source functions simulated in HKM for Pb+Pb collisions at the LHC energy √s_NN = 2.76 TeV, to predict the corresponding pΛ and pΛ̄ correlation functions.
NASA Astrophysics Data System (ADS)
Francisco, E.; Pendás, A. Martín; Blanco, M. A.
2008-04-01
Given an N-electron molecule and an exhaustive partition of the real space (R³) into m arbitrary regions Ω1, Ω2, …, Ωm (Ω1 ∪ Ω2 ∪ … ∪ Ωm = R³), the edf program computes all the probabilities P(n1, n2, …, nm) of having exactly n1 electrons in Ω1, n2 electrons in Ω2, …, and nm electrons (n1 + n2 + ⋯ + nm = N) in Ωm. Each Ωi may correspond to a single basin (atomic domain) or to several such basins (a functional group). In the latter case, each atomic domain must belong to a single Ωi. The program can manage both single- and multi-determinant wave functions, which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinant wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n1, n2, …, nm) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n1, n2, …, nm) probabilities into α and β spin components. Program summary. Program title: edf. Catalogue identifier: AEAJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5387. No. of bytes in distributed program, including test data, etc.: 52 381. Distribution format: tar.gz. Programming language: Fortran 77. Computer
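For intuition about what P(n1, …, nm) means, consider the independent-particle limit, in which each electron falls in region Ωi with a fixed one-electron probability and the distribution reduces to a multinomial. The sketch below is a hypothetical illustration of that limit only; edf itself computes the exact probabilities from the wave function and the domain overlap integrals.

```python
from math import comb, prod

def multinomial_prob(ns, ps):
    """P(n1, ..., nm) for N independent electrons, electron-by-electron
    placed in region i with probability ps[i]. A toy stand-in for the
    electron distribution functions edf computes exactly."""
    N = sum(ns)
    coef, rem = 1, N
    for n in ns:
        coef *= comb(rem, n)   # ways to choose which electrons land here
        rem -= n
    return coef * prod(p ** n for p, n in zip(ps, ns))

# N = 2 electrons, two regions with equal one-electron probabilities
probs = {(2, 0): multinomial_prob((2, 0), (0.5, 0.5)),
         (1, 1): multinomial_prob((1, 1), (0.5, 0.5)),
         (0, 2): multinomial_prob((0, 2), (0.5, 0.5))}
```

The probabilities over all distributions sum to 1, and deviations of the exact P(n1, …, nm) from this independent-particle picture are precisely what the localization/delocalization indices quantify.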
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
NASA Technical Reports Server (NTRS)
Curran, R. T.; Hornfeck, W. A.
1972-01-01
The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe H.; Meyer-Baese, Anke; Ramirez, Javier; Garcia, Antonio
2003-08-01
In this paper, a new parallel hardware architecture dedicated to computing the Gaussian potential function is proposed. This function is commonly utilized in neural radial basis classifiers for pattern recognition, as described by Lee; Girosi and Poggio; and Musavi et al. Attention is confined to a simplified Gaussian potential function that processes uncorrelated features. The operations of most interest in the Gaussian potential function are the exponential and the square function. Our hardware computes the exponential function and its exponent at the same time. The contributions of all features to the exponent are computed in parallel. This parallelism reduces the computational delay in the output function, and the duration does not depend on the number of features processed. Software and hardware case studies are presented to evaluate the new CORDIC-based design.
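In software form, the simplified Gaussian potential function for uncorrelated features is an exponential of a sum of independent per-feature squared terms, which is exactly what makes the per-feature parallelism possible in hardware. A minimal NumPy sketch (function name and parameters are illustrative assumptions, not from the paper):

```python
import numpy as np

def gaussian_potential(x, center, widths):
    """Simplified Gaussian potential function for uncorrelated features:
    each feature contributes an independent squared term to the exponent
    (computable in parallel); the terms are then summed and exponentiated."""
    contributions = ((x - center) / widths) ** 2
    return np.exp(-np.sum(contributions))

x = np.array([1.0, 2.0])
center = np.array([1.0, 2.0])
widths = np.array([0.5, 0.5])
val = gaussian_potential(x, center, widths)   # equals 1.0 at the center
```

The hardware analogue evaluates the squared contributions concurrently and feeds the running exponent into a CORDIC-style exponential, so latency is independent of the feature count.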
Using computational models to relate structural and functional brain connectivity
Hlinka, Jaroslav; Coombes, Stephen
2012-01-01
Modern imaging methods allow a non-invasive assessment of both structural and functional brain connectivity. This has led to the identification of disease-related alterations affecting functional connectivity. The mechanism of how such alterations in functional connectivity arise in a structured network of interacting neural populations is as yet poorly understood. Here we use a modeling approach to explore the way in which this can arise and to highlight the important role that local population dynamics can have in shaping emergent spatial functional connectivity patterns. The local dynamics for a neural population is taken to be of the Wilson–Cowan type, whilst the structural connectivity patterns used, describing long-range anatomical connections, cover both realistic scenarios (from the CoComac database) and idealized ones that allow for more detailed theoretical study. We have calculated graph–theoretic measures of functional network topology from numerical simulations of model networks. The effect of the form of local dynamics on the observed network state is quantified by examining the correlation between structural and functional connectivity. We document a profound and systematic dependence of the simulated functional connectivity patterns on the parameters controlling the dynamics. Importantly, we show that a weakly coupled oscillator theory explaining these correlations and their variation across parameter space can be developed. This theoretical development provides a novel way to characterize the mechanisms for the breakdown of functional connectivity in diseases through changes in local dynamics. PMID:22805059
Introduction to Classical Density Functional Theory by a Computational Experiment
ERIC Educational Resources Information Center
Jeanmairet, Guillaume; Levy, Nicolas; Levesque, Maximilien; Borgis, Daniel
2014-01-01
We propose an in silico experiment to introduce the classical density functional theory (cDFT). Density functional theories, whether quantum or classical, rely on abstract concepts that are nonintuitive; however, they are at the heart of powerful tools and active fields of research in both physics and chemistry. They led to the 1998 Nobel Prize in…
The computational foundations of time dependent density functional theory
NASA Astrophysics Data System (ADS)
Whitfield, James
2014-03-01
The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn-Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn-Sham system can be efficiently obtained given the time-dependent density. Since a quantum computer can efficiently produce such time-dependent densities, we present a polynomial time quantum algorithm to generate the time-dependent Kohn-Sham potential with controllable error bounds. Further, we find that systems do not immediately become non-representable but rather become ill-representable as one approaches this boundary. A representability parameter is defined in our work which quantifies the distance to the boundary of representability and the computational difficulty of finding the Kohn-Sham system.
Computational approaches to identify functional genetic variants in cancer genomes
Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris; Ritchie, Graham R.S.; Creixell, Pau; Karchin, Rachel; Vazquez, Miguel; Fink, J. Lynn; Kassahn, Karin S.; Pearson, John V.; Bader, Gary; Boutros, Paul C.; Muthuswamy, Lakshmi; Ouellette, B.F. Francis; Reimand, Jüri; Linding, Rune; Shibata, Tatsuhiro; Valencia, Alfonso; Butler, Adam; Dronov, Serge; Flicek, Paul; Shannon, Nick B.; Carter, Hannah; Ding, Li; Sander, Chris; Stuart, Josh M.; Stein, Lincoln D.; Lopez-Bigas, Nuria
2014-01-01
The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype. PMID:23900255
A brain-computer interface to support functional recovery.
Kjaer, Troels W; Sørensen, Helge B
2013-01-01
Brain-computer interfaces (BCI) register changes in brain activity and utilize this to control computers. The most widely used method is based on registration of electrical signals from the cerebral cortex using extracranially placed electrodes also called electroencephalography (EEG). The features extracted from the EEG may, besides controlling the computer, also be fed back to the patient for instance as visual input. This facilitates a learning process. BCI allow us to utilize brain activity in the rehabilitation of patients after stroke. The activity of the cerebral cortex varies with the type of movement we imagine, and by letting the patient know the type of brain activity best associated with the intended movement the rehabilitation process may be faster and more efficient. The focus of BCI utilization in medicine has changed in recent years. While we previously focused on devices facilitating communication in the rather few patients with locked-in syndrome, much interest is now devoted to the therapeutic use of BCI in rehabilitation. For this latter group of patients, the device is not intended to be a lifelong assistive companion but rather a 'teacher' during the rehabilitation period. PMID:23859968
Computerizing the Accounting Curriculum.
ERIC Educational Resources Information Center
Nash, John F.; England, Thomas G.
1986-01-01
Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)
Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.
ERIC Educational Resources Information Center
Snow, Donald R.
1989-01-01
Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)
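The spreadsheet itself is not reproduced in this summary, but the recurrences such a table carries are easy to state. A hypothetical Python sketch of the same idea, using the Fibonacci recurrence and the Catalan convolution implied by the generating function C(x) = 1 + x·C(x)^2:

```python
def fibonacci(n):
    """First n Fibonacci numbers via the linear recurrence a
    spreadsheet column can carry (each cell = sum of the two above)."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def catalan(n):
    """First n Catalan numbers via the convolution recurrence
    C_{k+1} = sum_{i=0..k} C_i * C_{k-i}, the table form of
    the generating-function identity C(x) = 1 + x*C(x)^2."""
    c = [1]
    for k in range(n - 1):
        c.append(sum(c[i] * c[k - i] for i in range(k + 1)))
    return c

fib = fibonacci(8)   # [1, 1, 2, 3, 5, 8, 13, 21]
cat = catalan(6)     # [1, 1, 2, 5, 14, 42]
```

Changing the seed row and the cell formula yields other sequences, which is the spreadsheet analogue of swapping the generating function.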
Adaptive, associative, and self-organizing functions in neural computing.
Kohonen, T
1987-12-01
This paper contains an attempt to describe certain adaptive and cooperative functions encountered in neural networks. The approach is a compromise between biological accuracy and mathematical clarity. Two types of differential equation seem to describe the basic effects underlying the formation of these functions: the equation for the electrical activity of the neuron and the adaptation equation that describes changes in its input connectivities. Various phenomena and operations are derivable from them: clustering of activity in a laterally interconnected network; adaptive formation of feature detectors; the autoassociative memory function; and self-organized formation of ordered sensory maps. The discussion tends to reason which functions are readily amenable to analytical modeling and which phenomena seem to ensue from the more complex interactions that take place in the brain. PMID:20523469
Multiple multiresolution representation of functions and calculus for fast computation
Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A
2010-01-01
We describe the mathematical representations, data structures and the implementation of the numerical calculus of functions in MADNESS, a multiresolution analysis environment for scientific simulations. In MADNESS, each smooth function is represented by an adaptive pseudo-spectral expansion in the multiwavelet basis to an arbitrary but finite precision. This extends the capabilities of most existing net-, mesh- and spectral-based methods, where the discretization is based on a single adaptive mesh or on a single set of expansions.
Evaluation of computing systems using functionals of a Stochastic process
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Wu, L. T.
1980-01-01
An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, which are generally referred to as functionals of a Markov process, were considered. A closed form solution of performability for the case where performance is identified with the minimum value of a functional was developed.
Computational strategies for the design of new enzymatic functions.
Świderek, K; Tuñón, I; Moliner, V; Bertran, J
2015-09-15
In this contribution, recent developments in the design of biocatalysts are reviewed, with particular emphasis on the de novo strategy. Studies based on three different reactions, Kemp elimination, Diels-Alder and retro-aldolase, are used to illustrate the different successes achieved during the last years. Finally, a section is devoted to the particular case of designed metalloenzymes. As a general conclusion, the interplay between new and more sophisticated engineering protocols and computational methods, based on molecular dynamics simulations with quantum mechanics/molecular mechanics potentials and fully flexible models, seems to constitute the bedrock for present and future successful design strategies. PMID:25797438
Automated attendance accounting system
NASA Technical Reports Server (NTRS)
Chapman, C. P. (Inventor)
1973-01-01
An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressable buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as a temporary storage. Data from the terminals is applied to the data buffers on a digit by digit basis for transfer via a multiplexer to the computer.
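The patent describes only a digit-by-digit decimal-to-binary conversion with per-terminal buffering; one plausible reading is that each key press is encoded as a 4-bit binary-coded-decimal word and held in the buffer until the multiplexer transfers it. The following Python sketch is a hypothetical illustration of that scheme, not the patented circuit:

```python
def encode_digits(digits):
    """Convert decimal key presses to 4-bit binary-coded-decimal words,
    stored digit by digit as a terminal's data buffer might hold them."""
    buffer = []
    for d in digits:
        if not 0 <= d <= 9:
            raise ValueError("terminal keys produce decimal digits only")
        buffer.append(format(d, "04b"))   # one 4-bit word per digit
    return buffer

def buffer_to_int(buffer):
    """Reassemble the buffered BCD words into the integer the computer uses."""
    value = 0
    for word in buffer:
        value = value * 10 + int(word, 2)
    return value

words = encode_digits([4, 0, 7])   # ['0100', '0000', '0111']
number = buffer_to_int(words)      # 407
```

Digit-at-a-time encoding keeps each terminal's logic trivial and defers full-number assembly to the computer side of the multiplexer.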
Applications of a new wall function to turbulent flow computations
NASA Technical Reports Server (NTRS)
Chen, Y. S.
1986-01-01
A new wall function approach is developed based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients. This wall law was derived from a one-dimensional analysis of the turbulent kinetic energy equation, with a gradient diffusion concept employed in modeling the near-wall shear stress gradient. Numerical test cases for the present wall functions include turbulent separating flows around an airfoil and turbulent recirculating flows in several confined regions. Improvements in the predictions using the present wall functions are illustrated. For cases of internal recirculating flows, a modification factor for improving the performance of the k-epsilon turbulence model in the flow recirculation regions is also included.
Bread dough rheology: Computing with a damage function model
NASA Astrophysics Data System (ADS)
Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong
2015-01-01
We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations - stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching - is used to test the model. With the introduction of a revised strain measure that includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing and recoil, reasonable predictions have been found. The numerical methods used are discussed and we give some examples.
Grimme, Stefan; Steinmetz, Marc
2016-08-01
We present a revised form of a double hybrid density functional (DHDF) dubbed PWRB95. It contains semi-local Perdew-Wang exchange and Becke95 correlation with a fixed amount of 50% non-local Fock exchange. Its new features are the use of the robust random phase approximation (RPA) to calculate the non-local correlation part, instead of the second-order perturbative treatment in standard DHDFs, and the non-self-consistent evaluation of the Fock exchange with KS orbitals at the GGA level, which leads to a significant reduction of the computational effort. To account for London dispersion effects we include the non-local VV10 dispersion functional. Only three empirical scaling parameters were adjusted. The PWRB95 results for extensive standard thermochemical benchmarks (GMTKN30 database) are compared to those of well-known functionals from the classes of (meta-)GGAs, (meta-)hybrid functionals, and DHDFs, as well as to standard (direct) RPA. The new method is furthermore tested on prototype bond activations with (Ni/Pd)-based transition metal catalysts, and on two difficult cases for DHDFs, namely the isomerization reaction of the [Cu2(en)2O2](2+) complex and the singlet-triplet energy difference in highly unsaturated cyclacenes. The results show that PWRB95 is almost as accurate as standard DHDFs for main-group thermochemistry but has similar or better performance for non-covalent interactions, more difficult transition-metal-containing molecules, and other electronically problematic cases. Because of its relatively weak basis-set dependence, PWRB95 can be applied even in combination with AO basis sets of only triple-zeta quality, which yields huge overall computational savings, by a factor of about 40 compared to standard DHDF/'quadruple-zeta' calculations. Structure optimizations of small molecules with PWRB95 indicate an accurate description of bond distances superior to that provided by TPSS-D3, PBE0-D3, or other RPA-type methods. PMID:26695184
Anderson, Rachel M.; Cosme, Caitlin V.; Glanz, Ryan M.; Miller, Mary C.; Romig-Martin, Sara A.; LaLumiere, Ryan T.
2015-01-01
The prelimbic region (PL) of the medial prefrontal cortex (mPFC) is implicated in the relapse of drug-seeking behavior. Optimal mPFC functioning relies on synaptic connections involving dendritic spines in pyramidal neurons, whereas prefrontal dysfunction resulting from elevated glucocorticoids, stress, aging, and mental illness is linked to decreased apical dendritic branching and spine density in pyramidal neurons in these cortical fields. The fact that cocaine use induces activation of the stress-responsive hypothalamo-pituitary-adrenal axis raises the possibility that cocaine-related impairments in mPFC functioning may be manifested by similar changes in neuronal architecture in mPFC. Nevertheless, previous studies have generally identified increases, rather than decreases, in structural plasticity in mPFC after cocaine self-administration. Here, we use 3D imaging and analysis of dendritic spine morphometry to show that chronic cocaine self-administration leads to mild decreases of apical dendritic branching, prominent dendritic spine attrition in PL pyramidal neurons, and working memory deficits. Importantly, these impairments were largely confined to groups of rats that self-administered cocaine compared with yoked-cocaine- and saline-matched counterparts. Follow-up experiments failed to demonstrate any effects of either experimenter-administered cocaine or food self-administration on structural alterations in PL neurons. Finally, we verified that the cocaine self-administration group was distinguished by more protracted increases in adrenocortical activity compared with yoked-cocaine- and saline-matched controls. These studies suggest a mechanism whereby increased adrenocortical activity resulting from chronic cocaine self-administration may contribute to regressive prefrontal structural and functional plasticity. SIGNIFICANCE STATEMENT Stress, aging, and mental illness are each linked to decreased prefrontal plasticity. Here, we show that chronic
Fukui, Miho; Goda, Akiko; Komamura, Kazuo; Nakabo, Ayumi; Masaki, Mitsuru; Yoshida, Chikako; Hirotani, Shinichi; Lee-Kawabata, Masaaki; Tsujino, Takeshi; Mano, Toshiaki; Masuyama, Tohru
2016-02-01
While beta blockade improves left ventricular (LV) function in patients with chronic heart failure (CHF), the mechanisms are not well known. This study aimed to examine whether changes in myocardial collagen metabolism account for LV functional recovery following beta-blocker therapy in 62 CHF patients with reduced ejection fraction (EF). LV function was echocardiographically measured at baseline and 1, 6, and 12 months after bisoprolol therapy along with serum markers of collagen metabolism including C-terminal telopeptide of collagen type I (CITP) and matrix metalloproteinase (MMP)-2. Deceleration time of mitral early velocity (DcT) increased even in the early phase, but LVEF gradually improved throughout the study period. Heart rate (HR) was reduced from the early stage, and CITP gradually decreased. LVEF and DcT increased more so in patients with the larger decreases in CITP (r = -0.33, p < 0.05; r = -0.28, p < 0.05, respectively), and HR (r = -0.31, p < 0.05; r = -0.38, p < 0.05, respectively). In addition, there were greater decreases in CITP, MMP-2 and HR from baseline to 1, 6, or 12 months in patients with above-average improvement in LVEF than in those with below-average improvement in LVEF. Similar results were obtained in terms of DcT. There was no significant correlation between the changes in HR and CITP. In conclusion, improvement in LV systolic/diastolic function was greatest in patients with the larger inhibition of collagen degradation. Changes in myocardial collagen metabolism are closely related to LV functional recovery somewhat independently from HR reduction. PMID:25351137
Nieuwenhuizen, Niels J; Green, Sol A; Chen, Xiuyin; Bailleul, Estelle J D; Matich, Adam J; Wang, Mindy Y; Atkinson, Ross G
2013-02-01
Terpenes are specialized plant metabolites that act as attractants to pollinators and as defensive compounds against pathogens and herbivores, but they also play an important role in determining the quality of horticultural food products. We show that the genome of cultivated apple (Malus domestica) contains 55 putative terpene synthase (TPS) genes, of which only 10 are predicted to be functional. This low number of predicted functional TPS genes compared with other plant species was supported by the identification of only eight potentially functional TPS enzymes in apple 'Royal Gala' expressed sequence tag databases, including the previously characterized apple (E,E)-α-farnesene synthase. In planta functional characterization of these TPS enzymes showed that they could account for the majority of terpene volatiles produced in cv Royal Gala, including the sesquiterpenes germacrene-D and (E)-β-caryophyllene, the monoterpenes linalool and α-pinene, and the homoterpene (E)-4,8-dimethyl-1,3,7-nonatriene. Relative expression analysis of the TPS genes indicated that floral and vegetative tissues were the primary sites of terpene production in cv Royal Gala. However, production of cv Royal Gala floral-specific terpenes and TPS genes was observed in the fruit of some heritage apple cultivars. Our results suggest that the apple TPS gene family has been shaped by a combination of ancestral and more recent genome-wide duplication events. The relatively small number of functional enzymes suggests that the remaining terpenes produced in floral and vegetative and fruit tissues are maintained under a positive selective pressure, while the small number of terpenes found in the fruit of modern cultivars may be related to commercial breeding strategies. PMID:23256150
Code of Federal Regulations, 2014 CFR
2014-04-01
... and Exchange Commission (17 CFR 241.15c3-1(c)(2)(vi)), held for the same futures customer's account... accordance with Rule 240.15c3-1(c)(2)(vi) of the Securities and Exchange Commission (17 CFR 240.15c3-1(c)(2... Exchange Commission (17 CFR 240.15c3-1(c)(11)(i)). (c) Each futures commission merchant is required...
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps
2016-01-01
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874
Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott
2013-01-01
Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.
Memory intensive functional architecture for distributed computer control systems
Dimmler, D.G.
1983-10-01
A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures in one system, which performs control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.
Frequency domain transfer function identification using the computer program SYSFIT
Trudnowski, D.J.
1992-12-01
Because the primary application of SYSFIT for BPA involves studying power system dynamics, this investigation was geared toward simulating the effects that might be encountered in studying electromechanical oscillations in power systems. Although the intended focus of this work is power system oscillations, the studies are sufficiently generic that the results can be applied to many types of oscillatory systems with closely spaced modes. In general, there are two possible ways of solving the optimization problem. One is to use a least-squares optimization function and to write the system in such a form that the problem becomes one of linear least squares; the solution can then be obtained using a standard least-squares technique. The other method involves using a search method to obtain the optimal model. This method allows considerably more freedom in forming the optimization function and model, but it requires an initial guess of the system parameters. SYSFIT employs this second approach. Detailed investigations were conducted in three main areas: (1) fitting to exact frequency response data of a linear system; (2) fitting to the discrete Fourier transform of noisy data; and (3) fitting to multi-path systems. The first area consisted of investigating the effects of alternative optimization cost function options; different optimization search methods; incorrect model order; missing response data; closely spaced poles; and closely spaced pole-zero pairs. Within the second area, different noise colorations and levels were studied. In the third area, methods were investigated for improving fitting results by incorporating more than one system path. The following is a list of guidelines and properties developed from the study for fitting a transfer function to the frequency response of a system using optimization search methods.
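The first solution route described in the abstract (writing the system so the fit becomes a linear least-squares problem) can be sketched for a second-order transfer function. The Levy-style linearization below is an illustrative stand-in, not SYSFIT itself; the function name and the monic-denominator parameterization are assumptions.

```python
import numpy as np

def fit_second_order(omega, H):
    """Linear least-squares fit of H(s) = b0 / (s^2 + a1*s + a0)
    to frequency-response samples H at frequencies omega (rad/s).
    Multiplying through by the denominator makes the problem linear:
        b0 - a1*(H*s) - a0*H = H*s^2   at each sample s = j*omega."""
    s = 1j * omega
    A = np.column_stack([np.ones_like(s), -H * s, -H])
    rhs = H * s**2
    # stack real and imaginary parts so the unknowns stay real
    A_ri = np.vstack([A.real, A.imag])
    rhs_ri = np.concatenate([rhs.real, rhs.imag])
    b0, a1, a0 = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)[0]
    return b0, a1, a0
```

With noise-free data from a true second-order system the coefficients are recovered essentially exactly; with noise, the denominator weighting biases the fit, which is one motivation for the search-based approach SYSFIT actually uses.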
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; Siegel, Howard Jay; Maciejewski, Anthony A.; Koenig, Gregory A.; Groer, Christopher S.; Hilton, Marcia M.; Poole, Stephen W.; Okonski, G.; et al
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
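The dropping idea can be shown in a minimal sketch: tasks whose predicted utility at completion time is too low are discarded so machines stay free for high-utility work. The greedy policy, the task format, and the threshold below are illustrative assumptions, not the paper's actual heuristics.

```python
import heapq

def schedule(tasks, n_machines, drop_threshold=0.0):
    """Greedy utility-aware scheduler (illustrative sketch).
    Each task is (exec_time, utility_fn), where utility_fn maps the
    completion time to the utility earned (time-varying importance).
    Tasks predicted to earn <= drop_threshold are dropped."""
    free_at = [0.0] * n_machines        # next-free time per machine
    heapq.heapify(free_at)
    total_utility = 0.0
    completed, dropped = [], []
    for i, (exec_time, utility_fn) in enumerate(tasks):
        start = heapq.heappop(free_at)  # earliest-available machine
        finish = start + exec_time
        u = utility_fn(finish)
        if u <= drop_threshold:         # low utility-earning task: drop it
            heapq.heappush(free_at, start)
            dropped.append(i)
            continue
        total_utility += u
        completed.append(i)
        heapq.heappush(free_at, finish)
    return total_utility, completed, dropped
```

Under oversubscription, dropping the second task below frees the single machine for a task that still earns utility, which is the qualitative effect the paper reports.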
Computed versus measured ion velocity distribution functions in a Hall effect thruster
Garrigues, L.; Mazouffre, S.; Bourgeois, G.
2012-06-01
We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady state anomalous mobility profile allows the hybrid model to capture some properties like the time-averaged ion mean velocity. Yet, the model fails at reproducing the time evolution of the ion velocity. This fact reveals complex underlying physics that necessitates accounting for the electron dynamics over a short time-scale. This study also shows the necessity of electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found to be negligible.
Breckenridge, A.; Vahle, M.O.
1993-03-01
Supercomputing '92, a high-performance computing and communications conference, was held November 16-20, 1992, in Minneapolis, Minnesota. This paper documents the applications and technologies that were showcased in Sandia's research booth at that conference. In particular, the demonstrations in high-performance networking, audio-visual applications in engineering, virtual reality, and supercomputing applications are all described.
Computational properties of three-term recurrence relations for Kummer functions
NASA Astrophysics Data System (ADS)
Deaño, Alfredo; Segura, Javier; Temme, Nico M.
2010-01-01
Several three-term recurrence relations for confluent hypergeometric functions are analyzed from a numerical point of view. Minimal and dominant solutions for complex values of the variable z are given, derived from asymptotic estimates of the Whittaker functions with large parameters. The Laguerre polynomials and the regular Coulomb wave functions are studied as particular cases, with numerical examples of their computation.
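Since Laguerre polynomials are one of the particular cases studied, their three-term recurrence gives a concrete sketch. Forward recurrence, shown below, is adequate for moderate n; the paper's analysis concerns exactly when the desired solution is minimal, in which case forward recurrence loses accuracy and backward (Miller-type) recurrence is preferred.

```python
def laguerre(n, x):
    """Evaluate the Laguerre polynomial L_n(x) by the three-term
    recurrence  (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1},
    starting from L_0 = 1 and L_1 = 1 - x."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, 1.0 - x
    for k in range(1, n):
        prev, curr = curr, ((2 * k + 1 - x) * curr - k * prev) / (k + 1)
    return curr
```

For example, L_2(x) = 1 - 2x + x^2/2, so L_2(1) = -1/2, which the recurrence reproduces.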
Tawhai, M. H.; Clark, A. R.; Donovan, G. M.; Burrowes, K. S.
2011-01-01
Computational models of lung structure and function necessarily span multiple spatial and temporal scales, i.e., dynamic molecular interactions give rise to whole organ function, and the link between these scales cannot be fully understood if only molecular or organ-level function is considered. Here, we review progress in constructing multiscale finite element models of lung structure and function that are aimed at providing a computational framework for bridging the spatial scales from molecular to whole organ. These include structural models of the intact lung, embedded models of the pulmonary airways that couple to model lung tissue, and models of the pulmonary vasculature that account for distinct structural differences at the extra- and intra-acinar levels. Biophysically based functional models for tissue deformation, pulmonary blood flow, and airway bronchoconstriction are also described. The development of these advanced multiscale models has led to a better understanding of complex physiological mechanisms that govern regional lung perfusion and emergent heterogeneity during bronchoconstriction. PMID:22011236
A Functional Analytic Approach To Computer-Interactive Mathematics
2005-01-01
Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed. PMID:15898471
Toward high-resolution computational design of helical membrane protein structure and function
Barth, Patrick; Senes, Alessandro
2016-01-01
The computational design of α-helical membrane proteins is still in its infancy but has made important progress. De novo design has produced stable, specific and active minimalistic oligomeric systems. Computational re-engineering can improve stability and modulate the function of natural membrane proteins. Currently, the major hurdle for the field is not computational, but the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress. PMID:27273630
Computer programs for calculation of thermodynamic functions of mixing in crystalline solutions
NASA Technical Reports Server (NTRS)
Comella, P. A.; Saxena, S. K.
1972-01-01
The computer programs Beta, GEGIM, REGSOL1, REGSOL2, Matrix, and Quasi are presented. The programs are useful in various calculations for the thermodynamic functions of mixing and the activity-composition relations in rock forming minerals.
SCAIEF, C.C.
1999-12-16
This functions, requirements, and specifications document defines the baseline requirements and criteria for the design, purchase, fabrication, construction, installation, and operation of the system that will replace Computer Automated Surveillance System (CASS) alarm monitoring.
Computation of Schenberg response function by using finite element modelling
NASA Astrophysics Data System (ADS)
Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.
2016-05-01
Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonating sphere, in a half-dodecahedron distribution, are used to monitor its strain amplitude. The development of mechanical impedance matchers that increase the coupling of the transducers to the sphere is a major challenge because of the high frequency and small size involved. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of a simplified mass-spring model to verify whether the simplified model is suitable for determining the detector sensitivity; both models give the same results.
Computational complexity of time-dependent density functional theory
NASA Astrophysics Data System (ADS)
Whitfield, J. D.; Yung, M.-H.; Tempel, D. G.; Boixo, S.; Aspuru-Guzik, A.
2014-08-01
Time-dependent density functional theory (TDDFT) is rapidly emerging as a premier method for solving dynamical many-body problems in physics and chemistry. The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn-Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn-Sham system can be efficiently obtained given the time-dependent density. We introduce a V-representability parameter which diverges at the boundary of the existence domain and serves to quantify the numerical difficulty of constructing the Kohn-Sham potential. For bounded values of V-representability, we present a polynomial time quantum algorithm to generate the time-dependent Kohn-Sham potential with controllable error bounds.
Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions
ERIC Educational Resources Information Center
Moreira, M. V.; Basilio, J. C.
2012-01-01
All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitations: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
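The direct division method criticized in the abstract's first point is easy to state: long division of B(z)/A(z) in powers of z^{-1} yields the samples x(k) one at a time, but no closed-form expression. A minimal sketch (the function name is illustrative):

```python
def inverse_z_direct_division(b, a, n):
    """First n samples x(0)..x(n-1) of the inverse Z-transform of
    X(z) = B(z)/A(z), with B and A given as coefficient lists in
    powers of z^{-1} and a[0] != 0.  Equivalent to synthetic long
    division: x(k) = (b(k) - sum_{j>=1} a(j) x(k-j)) / a(0)."""
    x = []
    for k in range(n):
        bk = b[k] if k < len(b) else 0.0
        acc = sum(a[j] * x[k - j] for j in range(1, min(k, len(a) - 1) + 1))
        x.append((bk - acc) / a[0])
    return x
```

For X(z) = 1/(1 - 0.5 z^{-1}) this produces 1, 0.5, 0.25, ..., i.e., the samples of 0.5^k, but the method itself never reveals that closed form, which is the limitation the abstract notes.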
Effects of Computer versus Paper Administration of an Adult Functional Writing Assessment
ERIC Educational Resources Information Center
Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young
2011-01-01
This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…
Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors
Technology Transfer Automated Retrieval System (TEKTRAN)
Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials, however its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...
A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry
ERIC Educational Resources Information Center
Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan
2013-01-01
A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…
A Functional Specification for a Programming Language for Computer Aided Learning Applications.
ERIC Educational Resources Information Center
National Research Council of Canada, Ottawa (Ontario).
In 1972 there were at least six different course authoring languages in use in Canada with little exchange of course materials between Computer Assisted Learning (CAL) centers. In order to improve facilities for producing "transportable" computer based course materials, a working panel undertook the definition of functional requirements of a user…
Fineberg, Sarah K.; Steinfeld, Matthew; Brewer, Judson A.; Corlett, Philip R.
2014-01-01
Social dysfunction is a prominent and disabling aspect of borderline personality disorder. We reconsider traditional explanations for this problem, especially early disruption in the way an infant feels physical care from its mother, in terms of recent developments in computational psychiatry. In particular, social learning may depend on reinforcement learning through embodied simulations. Such modeling involves calculations based on structures outside the brain, such as the face and hands, calculations on one's own body that are used to make inferences about others. We discuss ways to test the role of embodied simulation in BPD and potential implications for treatment. PMID:25221523
Theoretical and computational studies in protein folding, design, and function
NASA Astrophysics Data System (ADS)
Morrissey, Michael Patrick
2000-10-01
In this work, simplified statistical models are used to understand an array of processes related to protein folding and design. In Part I, lattice models are utilized to test several theories about the statistical properties of protein-like systems. In Part II, sequence analysis and all-atom simulations are used to advance a novel theory for the behavior of a particular protein. Part I is divided into five chapters. In Chapter 2, a method of sequence design for model proteins, based on statistical mechanical first-principles, is developed. The cumulant design method uses a mean-field approximation to expand the free energy of a sequence in temperature. The method successfully designs sequences which fold to a target lattice structure at a specific temperature, a feat which was not possible using previous design methods. The next three chapters are computational studies of the double mutant cycle, which has been used experimentally to predict intra-protein interactions. Complete structure prediction is demonstrated for a model system using exhaustive, and also sub-exhaustive, double mutants. Nonadditivity of enthalpy, rather than of free energy, is proposed and demonstrated to be a superior marker for inter-residue contact. Next, a new double mutant protocol, called exchange mutation, is introduced. Although simple statistical arguments predict exchange mutation to be a more accurate contact predictor than standard mutant cycles, this hypothesis was not upheld in lattice simulations. Reasons for this inconsistency will be discussed. Finally, a multi-chain folding algorithm is introduced. Known as LINKS, this algorithm was developed to test a method of structure prediction which utilizes chain-break mutants. While structure prediction was not successful, LINKS should nevertheless be a useful tool for the study of protein-protein and protein-ligand interactions. The last chapter of Part I utilizes the lattice to explore the differences between standard folding, from
A Computer Program for the Computation of Running Gear Temperatures Using Green's Function
NASA Technical Reports Server (NTRS)
Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.
1996-01-01
A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time-dependent coefficients that can be solved by using Runge-Kutta methods. The time integration is extremely efficient. Therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.
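The core numerical step in the abstract above, advancing the modal coefficients of an eigenfunction expansion with a Runge-Kutta method, can be sketched as follows. This is a minimal illustration, not the paper's gear-tooth code: the eigenvalues, diffusivity, and projected source terms are invented values, and each coefficient is assumed to obey a decoupled ODE of the form dc/dt = -alpha*lambda*c + f.

```python
import numpy as np

def rk4_step(f, t, c, dt):
    """One classical fourth-order Runge-Kutta step for dc/dt = f(t, c)."""
    k1 = f(t, c)
    k2 = f(t + dt / 2, c + dt / 2 * k1)
    k3 = f(t + dt / 2, c + dt / 2 * k2)
    k4 = f(t + dt, c + dt * k3)
    return c + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical setup: each modal coefficient decays at a rate set by its
# Helmholtz eigenvalue and is driven by a (here constant) projected source.
alpha = 1.0e-5                    # thermal diffusivity (assumed value)
lam = np.array([1.0, 4.0, 9.0])   # illustrative eigenvalues
source = np.array([0.5, 0.1, 0.02])  # illustrative source projections

def rhs(t, c):
    return -alpha * lam * c + source

c = np.zeros(3)   # modal coefficients, initially at the reference temperature
dt, t = 0.1, 0.0
for _ in range(100):
    c = rk4_step(rhs, t, c, dt)
    t += dt
```

Because the eigenfunctions diagonalize the spatial operator, only these small ODE systems are integrated in time, which is why changing boundary sources is cheap: only `source` changes, not the spatial solve.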
A mesh-decoupled height function method for computing interface curvature
NASA Astrophysics Data System (ADS)
Owkes, Mark; Desjardins, Olivier
2015-01-01
In this paper, a mesh-decoupled height function method is proposed and tested. The method is based on computing height functions within columns that are not aligned with the underlying mesh and have variable dimensions. Because they are decoupled from the computational mesh, the columns can be aligned with the interface normal vector, which is found to improve the curvature calculation for under-resolved interfaces where the standard height function method often fails. A computational geometry toolbox is used to compute the heights in the complex geometry that is formed at the intersection of the computational mesh and the columns. The toolbox reduces the complexity of the problem to a series of straightforward geometric operations using simplices. The proposed scheme is shown to compute more accurate curvatures than the standard height function method on coarse meshes. A combined method that uses the standard height function where it is well defined and the proposed scheme in under-resolved regions is tested. This approach achieves accurate and robust curvatures for under-resolved interface features and second-order converging curvatures for well-resolved interfaces.
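The standard height function method that the mesh-decoupled scheme above improves upon can be sketched in a few lines. This is an illustrative 2-D stencil only, under the assumption that three interface heights in adjacent columns of width dx are already available (in a volume-of-fluid code they would be obtained by summing volume fractions down each column):

```python
import numpy as np

def height_function_curvature(h_m, h_c, h_p, dx):
    """2-D interface curvature from heights in three adjacent columns.

    Second-order central differences approximate h'(x) and h''(x), and the
    curvature follows from kappa = h'' / (1 + h'^2)^(3/2).
    """
    hx = (h_p - h_m) / (2.0 * dx)
    hxx = (h_p - 2.0 * h_c + h_m) / dx**2
    return hxx / (1.0 + hx**2) ** 1.5

# Illustrative check on a circle of radius R: heights measured from the
# x-axis are h(x) = sqrt(R^2 - x^2), whose curvature magnitude is 1/R.
R, dx = 1.0, 0.05
x = np.array([-dx, 0.0, dx])
h = np.sqrt(R**2 - x**2)
kappa = height_function_curvature(h[0], h[1], h[2], dx)  # close to -1/R
```

The failure mode motivating the paper is visible in this sketch: the stencil is tied to the mesh axes, so when the interface is steep or under-resolved the columns no longer contain a single-valued height, which is exactly what aligning decoupled columns with the interface normal avoids.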
ERIC Educational Resources Information Center
Booth, Josephine N.; Boyle, James M. E.; Kelly, Steve W.
2010-01-01
Research studies have implicated executive functions in reading difficulties (RD). But while some studies have found children with RD to be impaired on tasks of executive function other studies report unimpaired performance. A meta-analysis was carried out to determine whether these discrepant findings can be accounted for by differences in the…
Ghosh, Sreya; Preza, Chrysanthe
2015-07-01
A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6 μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction. PMID:26154937
The default-mode, ego-functions and free-energy: a neurobiological account of Freudian ideas
Friston, K. J.
2010-01-01
This article explores the notion that Freudian constructs may have neurobiological substrates. Specifically, we propose that Freud’s descriptions of the primary and secondary processes are consistent with self-organized activity in hierarchical cortical systems and that his descriptions of the ego are consistent with the functions of the default-mode and its reciprocal exchanges with subordinate brain systems. This neurobiological account rests on a view of the brain as a hierarchical inference or Helmholtz machine. In this view, large-scale intrinsic networks occupy supraordinate levels of hierarchical brain systems that try to optimize their representation of the sensorium. This optimization has been formulated as minimizing a free-energy; a process that is formally similar to the treatment of energy in Freudian formulations. We substantiate this synthesis by showing that Freud’s descriptions of the primary process are consistent with the phenomenology and neurophysiology of rapid eye movement sleep, the early and acute psychotic state, the aura of temporal lobe epilepsy and hallucinogenic drug states. PMID:20194141
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
Khang, G; Zajac, F E
1989-09-01
We have developed a planar computer model to investigate paraplegic standing induced by functional neuromuscular stimulation. The model consists of nonlinear musculotendon dynamics (pulse train activation dynamics and musculotendon actuator dynamics), nonlinear body-segmental dynamics, and a linear output-feedback control law. The model of activation dynamics is an analytic expression that characterizes the relation between the stimulus parameters (pulse width and interpulse interval) and the muscle activation. Hill's classic two-element muscle model was modified into a musculotendon actuator model in order to account for the effects of submaximal activation and tendon elasticity on development of force by the actuator. The three body-segmental, multijoint model accounts for the anterior-posterior movements of the head and trunk, the thigh, and the shank. We modeled arm movement as an external disturbance and imposed the disturbance to the body-segmental dynamics by means of a quasistatic analysis. Linearization, and at times linear approximation of the computer model, enabled us to compute a constant, linear feedback-gain matrix, whose output is the net activation needed by a dynamical joint-torque actuator. Motivated by an assumption that minimization of energy expenditure lessens muscle fatigue, we developed an algorithm that then computes how to distribute the net activation among all the muscles crossing the joint. In part II, the combined feedback control strategy is applied to the nonlinear model of musculotendon and body-segmental dynamics to study how well the body ought to maintain balance should the feedback control strategy be employed. PMID:2789177
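The closed-loop structure described above, linearized segmental dynamics stabilized by a constant output-feedback gain, can be illustrated with a much smaller stand-in model. The sketch below is not the paper's three-segment musculotendon model: it uses a single inverted pendulum about the ankle with assumed mass and height, a hand-tuned (not optimized) gain matrix, and explicit Euler integration.

```python
import numpy as np

# Assumed parameters for a single-link "standing body" pendulum
m, L, g = 70.0, 1.0, 9.81        # mass (kg), COM height (m), gravity (m/s^2)
I = m * L**2                     # point-mass moment of inertia about the ankle

# Linearized dynamics x_dot = A x + B u about upright, x = [theta, theta_dot]
A = np.array([[0.0, 1.0],
              [m * g * L / I, 0.0]])
B = np.array([[0.0],
              [1.0 / I]])

# Constant feedback-gain matrix (hand-tuned to stabilize this toy model)
K = np.array([[2.0 * m * g * L, 0.5 * m * g * L]])

def step(x, dt):
    """One explicit-Euler step of the closed loop x_dot = (A - B K) x."""
    u = -K @ x                   # net joint torque commanded by the feedback law
    return x + dt * (A @ x + B @ u)

x = np.array([0.05, 0.0])        # small initial forward lean (rad)
for _ in range(20000):           # 20 s at dt = 1 ms
    x = step(x, 1e-3)            # lean angle decays back toward upright
```

In the paper this torque command is then distributed among the stimulated muscles crossing each joint by the energy-minimizing algorithm; the sketch stops at the joint-torque level.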
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Ganjoo, D. K.
1990-01-01
Finite element procedures and computations based on the velocity-pressure and vorticity-stream function formulations of incompressible flows are presented. Two new multistep velocity-pressure formulations are proposed and compared with the vorticity-stream function and one-step formulations. The example problems chosen are the standing vortex problem and flow past a circular cylinder. Benchmark quality computations are performed for the cylinder problem. The numerical results indicate that the vorticity-stream function formulation and one of the two new multistep formulations involve much less numerical dissipation than the one-step formulation.
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease overall computational time requirements of spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
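The essence of a wall-function approach like the one above is that the wall shear stress is recovered analytically from a near-wall velocity sample instead of resolving the viscous sublayer. The sketch below uses the standard incompressible log law with textbook constants, not Barnwell's defect formulation, and the flow values are hypothetical:

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, B=5.0, iters=50):
    """Solve the log law u/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau
    by fixed-point iteration. Assumes the first grid point at height y lies
    in the logarithmic layer (y+ of roughly 30-300)."""
    u_tau = max(1e-6, 0.05 * u)  # initial guess: a few percent of u
    for _ in range(iters):
        u_tau = u / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
    return u_tau

# Illustrative values: air-like viscosity, hypothetical near-wall sample
rho, nu = 1.2, 1.5e-5            # density (kg/m^3), kinematic viscosity (m^2/s)
u, y = 10.0, 1.0e-3              # velocity (m/s) at first grid point (m)
u_tau = friction_velocity(u, y, nu)
tau_w = rho * u_tau**2           # wall shear stress from the wall-function model
```

Because `tau_w` comes from this algebraic model rather than from finite differences across the sublayer, the first grid point can sit far from the wall, which is the source of the step-size and run-time savings reported in the abstract.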
ERIC Educational Resources Information Center
Hughes, John; And Others
This report provides a description of a Computer Aided Training System Development and Management (CATSDM) environment based on state-of-the-art hardware and software technology, and including recommendations for off the shelf systems to be utilized as a starting point in addressing the particular systematic training and instruction design and…
Tawhai, Merryn H; Hoffman, Eric A; Lin, Ching-Long
2009-01-01
Global measurements of the lung provided by standard pulmonary function tests do not give insight into the regional basis of lung function and lung disease. Advances in imaging methodologies, computer technologies, and subject-specific simulations are creating new opportunities for studying structure-function relationships in the lung through multi-disciplinary research. The digital Human Lung Atlas is an imaging-based resource compiled from male and female subjects spanning several decades of age. The Atlas comprises both structural and functional measures, and includes computational models derived to match individual subjects for personalized prediction of function. The computational models in the Atlas form part of the Lung Physiome project, which is an international effort to develop integrative models of lung function at all levels of biological organization. The computational models provide mechanistic interpretation of imaging measures; the Atlas provides structural data upon which to base model geometry, and functional data against which to test hypotheses. The example of simulating air flow on a subject-specific basis is considered. Methods for deriving multi-scale models of the airway geometry for individual subjects in the Atlas are outlined, and methods for modeling turbulent flows in the airway are reviewed. PMID:20835982