Accounting & Computing Curriculum Guide.
ERIC Educational Resources Information Center
Avani, Nathan T.; And Others
This curriculum guide consists of materials for use in teaching a competency-based accounting and computing course that is designed to prepare students for employability in the following occupational areas: inventory control clerk, invoice clerk, payroll clerk, traffic clerk, general ledger bookkeeper, accounting clerk, account information clerk,…
Wang, Menghua
2016-05-30
To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude
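At its core, the SRF effect described above is a band-averaging integral: the sensor reports the spectrum weighted by its spectral response rather than the value at the nominal band centre. A minimal sketch of this, with a hypothetical 410 nm band, a Gaussian SRF, and a normalised λ⁻⁴ Rayleigh-like spectrum (all assumptions; this is not the paper's LUT correction algorithm):

```python
import numpy as np

def band_average(wavelengths_nm, radiance, srf):
    """SRF-weighted band average on a uniform grid: sum(L*S) / sum(S)."""
    radiance = np.asarray(radiance, dtype=float)
    srf = np.asarray(srf, dtype=float)
    return float(np.sum(radiance * srf) / np.sum(srf))

# Hypothetical 410 nm band with a broad Gaussian SRF. Rayleigh radiance
# falls roughly as lambda^-4, so a broad band's SRF-weighted average
# differs from the band-centre value -- the effect the LUT correction targets.
wl = np.linspace(395.0, 425.0, 61)
L = (410.0 / wl) ** 4                  # normalised lambda^-4 spectrum
srf = np.exp(-0.5 * ((wl - 410.0) / 8.0) ** 2)

L_band = band_average(wl, L, srf)      # band-averaged radiance
L_centre = 1.0                         # spectrum value at exactly 410 nm
```

Because λ⁻⁴ is convex, the symmetric SRF weighting pulls the band average above the band-centre value, and the discrepancy grows with bandwidth, which is consistent with the abstract's finding that broad bands show larger errors.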
Vocational Accounting and Computing Programs.
ERIC Educational Resources Information Center
Avani, Nathan T.
1986-01-01
Describes an "Accounting and Computing" program in Michigan that emphasizes computerized accounting procedures. This article describes the program curriculum and duty areas (such as handling accounts receivable), presents a list of sample tasks in each duty area, and specifies components of each task. Computer equipment necessary for this program…
Accounting: The Integration of Computers into the Accounting Class.
ERIC Educational Resources Information Center
Brown, Dorothy Lee
1980-01-01
Since computers are universally accepted in business today, the accounting classroom is the appropriate place to teach their use. A California high school accounting committee's recommendation led to the school's development of a computer processing program within the accounting department. The program's curriculum is described. (CT)
Integrating Computer Concepts into Principles of Accounting.
ERIC Educational Resources Information Center
Beck, Henry J.; Parrish, Roy James, Jr.
A package of instructional materials for an undergraduate principles of accounting course at Danville Community College was developed based upon the following assumptions: (1) the principles of accounting student does not need to be able to write computer programs; (2) computerized accounting concepts should be presented in this course; (3)…
Accounting and the Use of the Computer
ERIC Educational Resources Information Center
Irvin, Donald D.
1969-01-01
The nature, scope, and potential of electronic data processing are discussed as significant factors in the changing function and role of accountants. Decision making, not bookkeeping, is emerging as a realm of professional activity. (CH)
Space shuttle configuration accounting functional design specification
NASA Technical Reports Server (NTRS)
1974-01-01
An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.
Symbolic functions from neural computation.
Smolensky, Paul
2012-07-28
Is thought computation over ideas? Turing, and many cognitive scientists since, have assumed so, and formulated computational systems in which meaningful concepts are encoded by symbols which are the objects of computation. Cognition has been carved into parts, each a function defined over such symbols. This paper reports on a research program aimed at computing these symbolic functions without computing over the symbols. Symbols are encoded as patterns of numerical activation over multiple abstract neurons, each neuron simultaneously contributing to the encoding of multiple symbols. Computation is carried out over the numerical activation values of such neurons, which individually have no conceptual meaning. This is massively parallel numerical computation operating within a continuous computational medium. The paper presents an axiomatic framework for such a computational account of cognition, including a number of formal results. Within the framework, a class of recursive symbolic functions can be computed. Formal languages defined by symbolic rewrite rules can also be specified, the subsymbolic computations producing symbolic outputs that simultaneously display central properties of both facets of human language: universal symbolic grammatical competence and statistical, imperfect performance.
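Smolensky's program encodes symbols as distributed activation patterns, most famously via tensor-product representations: each filler (concept) vector is bound to a role (position) vector by an outer product, and a structure is the superposition of its bindings. A toy sketch with random filler vectors and orthonormal role vectors (illustrative only, not the paper's full axiomatic framework):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fillers (symbols) as patterns of activation over 16 abstract
# neurons; each neuron contributes to the encoding of every symbol.
fillers = {s: rng.normal(size=16) for s in ("A", "B", "C")}

# Orthonormal role vectors (string positions), so unbinding is exact.
roles = np.eye(3)

def encode(symbols):
    """Bind filler_i to role_i and superpose: sum_i f_i (outer product) r_i."""
    return sum(np.outer(fillers[s], roles[i]) for i, s in enumerate(symbols))

def decode(tensor, position):
    """Recover the filler at a position by contracting with its role vector."""
    return tensor @ roles[position]

T = encode(["B", "A", "C"])   # one numerical object encodes the whole string
```

All computation happens on the numerical tensor `T`, whose individual entries have no conceptual meaning, yet the symbolic content is exactly recoverable.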
A Computational Account of Bilingual Aphasia Rehabilitation
ERIC Educational Resources Information Center
Kiran, Swathi; Grasemann, Uli; Sandberg, Chaleece; Miikkulainen, Risto
2013-01-01
Current research on bilingual aphasia highlights the paucity in recommendations for optimal rehabilitation for bilingual aphasic patients (Edmonds & Kiran, 2006; Roberts & Kiran, 2007). In this paper, we have developed a computational model to simulate an English-Spanish bilingual language system in which language representations can vary by age…
A Computational Account of Bilingual Aphasia Rehabilitation
Kiran, Swathi; Grasemann, Uli; Sandberg, Chaleece; Miikkulainen, Risto
2012-01-01
Current research on bilingual aphasia highlights the paucity in recommendations for optimal rehabilitation for bilingual aphasic patients (Roberts & Kiran, 2007; Edmonds & Kiran, 2006). In this paper, we have developed a computational model to simulate an English-Spanish bilingual language system in which language representations can vary by age of acquisition (AoA) and relative proficiency in the two languages to model individual participants. This model is subsequently lesioned by varying connection strengths between the semantic and phonological networks and retrained based on individual patient demographic information to evaluate whether or not the model’s prediction of rehabilitation matched the actual treatment outcome. In most cases the model comes close to the target performance subsequent to language therapy in the language trained, indicating the validity of this model in simulating rehabilitation of naming impairment in bilingual aphasia. Additionally, the amount of cross-language transfer is limited both in the patient performance and in the model’s predictions and is dependent on that specific patient’s AoA, language exposure and language impairment. It also suggests how well alternative treatment scenarios would have fared, including some cases where the alternative would have done better. Overall, the study suggests how computational modeling could be used in the future to design customized treatment recipes that result in better recovery than is currently possible. PMID:24600315
Computational complexity of Boolean functions
NASA Astrophysics Data System (ADS)
Korshunov, Aleksei D.
2012-02-01
Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.
Software For Computing Selected Functions
NASA Technical Reports Server (NTRS)
Grant, David C.
1992-01-01
Technical memorandum presents collection of software packages in Ada implementing mathematical functions used in science and engineering. Provides programmer with function support in Pascal and FORTRAN, plus support for extended-precision arithmetic and complex arithmetic. Valuable for testing new computers, writing computer code, or developing new computer integrated circuits.
Computer-generated fiscal reports for food cost accounting.
Fromm, B; Moore, A N; Hoover, L W
1980-08-01
To optimize resource utilization for the provision of health-care services, well designed food cost accounting systems should facilitate effective decision-making. Fiscal reports reflecting the financial status of an organization at a given time must be current and representative so that managers have adequate data for planning and controlling. The computer-assisted food cost accounting discussed in this article can be integrated with other sub-systems and operations management techniques to provide the information needed to make decisions regarding revenues and expenses. Management information systems must be routinely evaluated and updated to meet the current needs of administrators. Further improvements in the food cost accounting system will be desirable whenever substantial changes occur within the foodservice operation at the University of Missouri-Columbia Medical Center or when advancements in computer technology provide more efficient methods for manipulating data and generating reports. Development of new systems and better applications of present systems could contribute significantly to the efficiency of operations in both health care and commercial foodservices. The computer-assisted food cost accounting system reported here might serve as a prototype for other management cost information systems.
Network Coding for Function Computation
ERIC Educational Resources Information Center
Appuswamy, Rathinakumar
2011-01-01
In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…
Program Computes Thermodynamic Functions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1994-01-01
PAC91 is latest in PAC (Properties and Coefficients) series. Two principal features are to provide means of (1) generating theoretical thermodynamic functions from molecular constants and (2) least-squares fitting of these functions to empirical equations. PAC91 written in FORTRAN 77 to be machine-independent.
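The second PAC feature, least-squares fitting of computed thermodynamic functions to empirical equations, can be sketched as follows. The heat-capacity data and the reduced-temperature trick are assumptions for illustration; the polynomial form cp/R = a1 + a2·T + a3·T² + a4·T³ + a5·T⁴ is one common empirical form for such fits, not necessarily PAC91's exact formulation:

```python
import numpy as np

# Synthetic "theoretical" heat capacities cp/R on a temperature grid, standing
# in for values generated from molecular constants (hypothetical numbers).
T = np.linspace(300.0, 1000.0, 50)
cp_over_R = 3.5 + 1.2e-3 * T + 4.0e-7 * T ** 2

# Least-squares fit to cp/R = a1 + a2*t + ... + a5*t^4, using a reduced
# temperature t = T/1000 so the Vandermonde basis stays well conditioned.
t = T / 1000.0
A = np.vander(t, 5, increasing=True)          # columns: 1, t, t^2, t^3, t^4
coeffs, *_ = np.linalg.lstsq(A, cp_over_R, rcond=None)

max_err = float(np.max(np.abs(A @ coeffs - cp_over_R)))
```

Since the synthetic data are themselves a low-degree polynomial, the fit recovers them essentially exactly; for real tabulated functions the residual measures how well the empirical form represents the theory.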
Precise accounting of bit errors in floating-point computations
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2009-08-01
Floating-point computation generates errors at the bit level through four processes, namely, overflow, underflow, truncation, and rounding. Overflow and underflow can be detected electronically, and represent systematic errors that are not of interest in this study. Truncation occurs during shifting toward the least-significant bit (herein called right-shifting), and rounding error occurs at the least significant bit. Such errors are not easy to track precisely using published means. Statistical error propagation theory typically yields conservative estimates that are grossly inadequate for deep computational cascades. Forward error analysis theory developed for image and signal processing or matrix operations can yield a more realistic typical case, but the error of the estimate tends to be high in relationship to the estimated error. In this paper, we discuss emerging technology for forward error analysis, which allows an algorithm designer to precisely estimate the output error of a given operation within a computational cascade, under a prespecified set of constraints on input error and computational precision. This technique, called bit accounting, precisely tracks the number of rounding and truncation errors in each bit position of interest to the algorithm designer. Because all errors associated with specific bit positions are tracked, and because integer addition only is involved in error estimation, the error of the estimate is zero. The technique of bit accounting is evaluated for its utility in image and signal processing. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm being analyzed, and its error estimation algorithm. Because of the significant overhead involved in error representation, it is shown that bit accounting is less useful for real-time error estimation, but is well suited to analysis in support of algorithm design.
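The key idea, counting rounding events exactly with integer counters rather than estimating them statistically, can be illustrated with a toy accumulator that sums values at a reduced precision and compares each partial sum against exact rational arithmetic. This is a simplified illustration of the bookkeeping idea only, not the paper's bit-accounting algorithm (which tracks errors per bit position):

```python
from fractions import Fraction

def add_with_accounting(values, precision_bits=12):
    """Sum a list, rounding each partial sum to `precision_bits` significant
    bits, and count exactly how many operations incurred a rounding error.
    Only integer counters are used for the accounting, so the count has
    zero estimation error."""
    def round_sig(x):
        if x == 0:
            return x
        e, y = 0, abs(x)
        # Scale the magnitude into [2^(p-1), 2^p), then round to an integer.
        while y >= 2 ** precision_bits:
            y /= 2
            e += 1
        while y < 2 ** (precision_bits - 1):
            y *= 2
            e -= 1
        q = Fraction(round(y)) * Fraction(2) ** e
        return q if x > 0 else -q

    total, rounding_events = Fraction(0), 0
    for v in values:
        exact = total + Fraction(v)      # exact rational partial sum
        rounded = round_sig(exact)
        if rounded != exact:             # a real rounding occurred here
            rounding_events += 1
        total = rounded
    return total, rounding_events
```

Adding 2⁻²⁰ to 1.0 at 12 significant bits is swallowed entirely, and the counter records exactly one rounding event; sums that stay representable record none.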
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Computer program for the automated attendance accounting system
NASA Technical Reports Server (NTRS)
Poulson, P.; Rasmusson, C.
1971-01-01
The automated attendance accounting system (AAAS) was developed under the auspices of the Space Technology Applications Program. The task is basically the adaptation of a small digital computer, coupled with specially developed pushbutton terminals located in school classrooms and offices for the purpose of taking daily attendance, maintaining complete attendance records, and producing partial and summary reports. Especially developed for high schools, the system is intended to relieve both teachers and office personnel from the time-consuming and dreary task of recording and analyzing the myriad classroom attendance data collected throughout the semester. In addition, since many school district budgets are related to student attendance, the increase in accounting accuracy is expected to augment district income. A major component of this system is the real-time AAAS software system, which is described.
An APEL Tool Based CPU Usage Accounting Infrastructure for Large Scale Computing Grids
NASA Astrophysics Data System (ADS)
Jiang, Ming; Novales, Cristina Del Cano; Mathieu, Gilles; Casson, John; Rogers, William; Gordon, John
The APEL (Accounting Processor for Event Logs) is the fundamental tool for the CPU usage accounting infrastructure deployed within the WLCG and EGEE Grids. In these Grids, jobs are submitted by users to computing resources via a Grid Resource Broker (e.g. gLite Workload Management System). As a log processing tool, APEL interprets Grid gatekeeper logs (e.g. Globus) and batch system logs (e.g. PBS, LSF, SGE and Condor) to produce CPU job accounting records identified with Grid identities. These records provide a complete description of the usage of computing resources by users' jobs. APEL publishes accounting records into an accounting record repository at a Grid Operations Centre (GOC) for access from a GUI web tool. The functions of log file parsing, record generation and publication are implemented by the APEL Parser, APEL Core, and APEL Publisher components, respectively. Within the distributed accounting infrastructure, accounting records are transported from APEL Publishers at Grid sites to either a regionalised accounting system or the central one, by choice, via a common ActiveMQ message broker network. This provides an open transport layer for other accounting systems to publish relevant accounting data to a central accounting repository via a unified interface provided by an APEL Publisher, and will also give regional/National Grid Initiative (NGI) Grids flexibility in their choice of accounting system. The robust and secure delivery of accounting record messages at an NGI level, and between NGI accounting instances and the central one, is achieved by using configurable APEL Publishers and an ActiveMQ message broker network.
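The parsing step, turning a batch-system log entry into a job accounting record, can be sketched as below. The log line, field names, and record layout here are hypothetical PBS-style inventions for illustration; real APEL has its own formats and also joins gatekeeper logs to attach Grid identities:

```python
import re

# Hypothetical PBS-style job-end accounting line (invented for illustration).
LOG_LINE = ("04/01/2011 12:00:01;E;1234.batch;user=alice "
            "resources_used.cput=01:02:03 resources_used.walltime=02:00:00")

def parse_record(line):
    """Extract a minimal CPU accounting record from one job-end log line."""
    m = re.search(r";E;(?P<jobid>\S+);user=(?P<user>\S+)\s+"
                  r"resources_used\.cput=(?P<cput>\S+)\s+"
                  r"resources_used\.walltime=(?P<wall>\S+)", line)
    if m is None:
        return None

    def to_seconds(hms):
        h, mnt, s = (int(x) for x in hms.split(":"))
        return 3600 * h + 60 * mnt + s

    return {"job_id": m.group("jobid"),
            "local_user": m.group("user"),
            "cpu_seconds": to_seconds(m.group("cput")),
            "wall_seconds": to_seconds(m.group("wall"))}

record = parse_record(LOG_LINE)
```

In the real infrastructure, records like this are then published over the message broker network to a regional or central repository.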
Automatic computation of transfer functions
Atcitty, Stanley; Watson, Luke Dale
2015-04-14
Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
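The netlist-to-transfer-function idea can be sketched for the smallest interesting case, an RC low-pass filter, where the nodal equation at the output node gives the transfer function directly. The netlist encoding and two-element solver below are assumptions for illustration, not the patented method (which handles general matrices over several physical domains):

```python
# Hypothetical two-element netlist for an RC low-pass filter; node 0 is ground.
# Each entry: (kind, node_from, node_to, value).
NETLIST = [("R", 1, 2, 1_000.0),   # 1 kOhm from the driven input node 1 to node 2
           ("C", 2, 0, 1.0e-6)]    # 1 uF from output node 2 to ground

def transfer(netlist, s):
    """V(node 2) / V(node 1), with node 1 driven by an ideal source.
    Nodal equation at node 2: (V2 - V1)*G + V2*s*C = 0  =>  H = G / (G + s*C)."""
    (_, _, _, R), (_, _, _, C) = netlist
    G = 1.0 / R
    return G / (G + s * C)

H_dc = transfer(NETLIST, 0j)                            # DC gain
H_corner = transfer(NETLIST, 1j / (1_000.0 * 1.0e-6))   # s = j/(R*C)
```

At DC the gain is 1, and at the corner frequency ω = 1/(RC) the magnitude drops to 1/√2, the familiar −3 dB point.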
Functional Programming in Computer Science
Anderson, Loren James; Davis, Marion Kei
2016-01-19
We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.
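The claim that every functional language is essentially an implementation of lambda calculus can be made concrete with Church numerals, where a number n is represented purely as the function that applies its argument n times. A small sketch (written here in Python's lambda subset rather than STG):

```python
# Church numerals: the number n is the function applying f to x, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: n(m(f))   # compose m applications, n times

def to_int(n):
    """Interpret a Church numeral by applying 'add one' to 0."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
```

Arithmetic falls out of function composition alone: no built-in numbers are used until the final interpretation step.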
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
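The difference between plain Monte Carlo and Latin hypercube sampling is easy to show in code: a Latin hypercube places exactly one point in each of n equal strata of every dimension, so no region of any axis is left unsampled. A minimal sketch of the basic (unoptimized) Latin hypercube construction, not the Maximin or orthogonal-array variants used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo(n, d):
    """Plain uniform sampling in [0, 1)^d."""
    return rng.random((n, d))

def latin_hypercube(n, d):
    """One point per 1/n-wide stratum in each dimension, with the strata
    independently permuted across dimensions."""
    samples = np.empty((n, d))
    for j in range(d):
        strata = (np.arange(n) + rng.random(n)) / n   # one point per stratum
        samples[:, j] = rng.permutation(strata)
    return samples

n, d = 100, 2
lhs = latin_hypercube(n, d)
mc = monte_carlo(n, d)

# Each 1/n-wide stratum of each dimension contains exactly one LHS point.
counts = [np.bincount((lhs[:, j] * n).astype(int), minlength=n) for j in range(d)]
```

Plain Monte Carlo gives no such guarantee: for small n it routinely leaves strata empty, which is why stratified designs tend to build better surrogates per function evaluation.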
Living through a computer voice: a personal account.
Martin, Alan; Newell, Christopher
2013-10-01
Alan Martin, the first author of this paper, has cerebral palsy and uses a voice output communication aid (VOCA) to speak, and this paper describes the personal experience of living 'through' a computer voice (or VOCA) in the form of an interview of Mr Martin conducted by Dr Newell. The interview focuses on the computerized voice output rather than other features of the VOCA. In presenting a first-hand account of the experience of actually using VOCA, the intention is that both everyday, practical issues of the technology and broader imaginative, philosophical, and sociological implications will be explored. Based upon the interview, the authors offer an informal set of design requirements and recommendations for the development of future VOCAs.
FUNCTION GENERATOR FOR ANALOGUE COMPUTERS
Skramstad, H.K.; Wright, J.H.; Taback, L.
1961-12-12
An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)
Function Learning: An Exemplar Account of Extrapolation Performance
2003-10-29
Kwantes, Peter J. (DRDC Toronto); Neal, Andrew (University of Queensland). Defence R&D Canada – Toronto Technical Report DRDC Toronto TR 2003-138, October 2003. Presents an exemplar account of extrapolation performance in function learning, including an architecture for representing knowledge of functional relationships in a virtual operator.
ERIC Educational Resources Information Center
Laing, Gregory Kenneth; Perrin, Ronald William
2012-01-01
This paper presents the findings of a field study conducted to ascertain the perceptions of first year accounting students concerning the integration of computer applications in the accounting curriculum. The results indicate that both student cohorts perceived the computer as a valuable educational tool. The use of computers to enhance the…
Genre Analysis of Tax Computation Letters: How and Why Tax Accountants Write the Way They Do
ERIC Educational Resources Information Center
Flowerdew, John; Wan, Alina
2006-01-01
This study is a genre analysis which explores the specific discourse community of tax accountants. Tax computation letters from one international accounting firm in Hong Kong were analyzed and compared. To probe deeper into the tax accounting discourse community, a group of tax accountants from the same firm was observed and questioned. The texts…
Metacognition: computation, biology and function
Fleming, Stephen M.; Dolan, Raymond J.; Frith, Christopher D.
2012-01-01
Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746
Metacognition: computation, biology and function.
Fleming, Stephen M; Dolan, Raymond J; Frith, Christopher D
2012-05-19
Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape.
Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems.
1980-12-01
Gill, John T.; Hellman, Martin E. (Stanford University; contract DAAG29-78-C-0036). …hard to solve but easy to check. We have also suggested how such random tapes can be simulated by deterministically generating "pseudorandom" numbers by a…
ERIC Educational Resources Information Center
Lashway, Larry
1999-01-01
This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…
Connecting Neural Coding to Number Cognition: A Computational Account
ERIC Educational Resources Information Center
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account XX-27-46). 1242.46 Section 1242.46 Transportation Other Regulations Relating to Transportation... RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account XX-27-46). 1242.46 Section 1242.46 Transportation Other Regulations Relating to Transportation... RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account XX-27-46). 1242.46 Section 1242.46 Transportation Other Regulations Relating to Transportation... RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account XX-27-46). 1242.46 Section 1242.46 Transportation Other Regulations Relating to Transportation... RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account XX-27-46). 1242.46 Section 1242.46 Transportation Other Regulations Relating to Transportation... RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account...
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
ERIC Educational Resources Information Center
Lenard, Mary Jane; Wessels, Susan; Khanlarian, Cindi
2010-01-01
Using a model developed by Young (2000), this paper explores the relationship between performance in the Accounting Information Systems course, self-assessed computer skills, and attitudes toward computers. Results show that after taking the AIS course, students experience a change in perception about their use of computers. Females'…
Sequential decisions: a computational comparison of observational and reinforcement accounts.
Mohammadi Sepahvand, Nazanin; Stöttinger, Elisabeth; Danckert, James; Anderson, Britt
2014-01-01
Right brain damaged patients show impairments in sequential decision making tasks for which healthy people do not show any difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement-based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.
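The contrast the abstract draws — reward-driven learning governed by an exploration policy — can be sketched with a toy ε-greedy value learner. This is not RELPH itself; the opponent's 80%-rock bias, the learning rate, and the exploration rate are invented for illustration.

```python
import random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 if move a beats move b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

class ReinforcementPlayer:
    """Epsilon-greedy value learner; epsilon is its 'exploration policy'."""
    def __init__(self, epsilon=0.1, alpha=0.2, seed=0):
        self.q = {m: 0.0 for m in BEATS}      # action-value estimate per move
        self.epsilon, self.alpha = epsilon, alpha
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:  # explore
            return self.rng.choice(list(BEATS))
        return max(self.q, key=self.q.get)    # exploit

    def update(self, move, reward):
        self.q[move] += self.alpha * (reward - self.q[move])

def train(n_rounds=2000, seed=1):
    """Play against an opponent with a learnable bias (80% rock)."""
    rng, player, wins = random.Random(seed), ReinforcementPlayer(), 0
    for _ in range(n_rounds):
        opp = "rock" if rng.random() < 0.8 else rng.choice(["paper", "scissors"])
        move = player.choose()
        r = payoff(move, opp)
        player.update(move, r)
        wins += r > 0
    return max(player.q, key=player.q.get), wins

print(train())  # the learner should come to favour 'paper'
```

Against a rock-heavy opponent, reward feedback alone is enough to push the value estimate for "paper" above the others, without any explicit statistical model of the opponent's sequence.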
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
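The flavour of "approximate the input, get the output exactly" can be conveyed with a loose toy of my own (not Goldberg's construction, which uses only algebra and trigonometry): instead of approximating arctan(t), we replace t by a nearby input tan(lo) whose arctangent lo we hold exactly.

```python
import math

def arctan_by_input_approx(t, bits=50):
    """Bisect on the INPUT side: find an angle lo whose tangent is close to t.
    The returned lo is the exact arctangent of the perturbed input tan(lo)."""
    lo, hi = 0.0, math.pi / 2
    for _ in range(bits):
        mid = 0.5 * (lo + hi)
        if math.tan(mid) < t:   # tangent is monotone on (0, pi/2)
            lo = mid
        else:
            hi = mid
    return lo

print(arctan_by_input_approx(1.0))  # ~ pi/4
```

After 50 halvings the input has been perturbed by less than (π/2)/2⁵⁰, so the "exact" output is an arctangent of a point indistinguishable from t at double precision.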
ERIC Educational Resources Information Center
Owhoso, Vincent; Malgwi, Charles A.; Akpomi, Margaret
2014-01-01
The authors examine whether students who completed a computer-based intervention program, designed to help them develop abilities and skills in introductory accounting, later declared accounting as a major. A sample of 1,341 students participated in the study, of which 74 completed the intervention program (computer-based assisted learning [CBAL])…
On computation of Hough functions
NASA Astrophysics Data System (ADS)
Wang, Houjun; Boyd, John P.; Akmaev, Rashid A.
2016-04-01
Hough functions are the eigenfunctions of the Laplace tidal equation governing fluid motion on a rotating sphere with a resting basic state. Several numerical methods have been used in the past. In this paper, we compare two of those methods: normalized associated Legendre polynomial expansion and Chebyshev collocation. Neither method is widely used, but both have some advantages over the commonly used unnormalized associated Legendre polynomial expansion method. Comparable results are obtained using both methods. For the first method, we note some details of the numerical implementation. The Chebyshev collocation method was first used for the Laplace tidal problem by Boyd (1976) and is relatively easy to use. A compact MATLAB code is provided for this method. We also illustrate the importance and effect of including a parity factor in Chebyshev polynomial expansions for modes with odd zonal wave numbers.
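The Chebyshev collocation machinery can be illustrated on a toy Sturm-Liouville eigenproblem with known eigenvalues (this is not the Laplace tidal equation, and the paper's MATLAB code is not reproduced; the differentiation matrix below follows the standard Trefethen-style recipe).

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x on [-1, 1] (Trefethen's 'cheb')."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D = D - np.diag(D.sum(axis=1))        # diagonal via negative row sums
    return D, x

# Model eigenproblem: -u'' = lam * u on [-1, 1], u(-1) = u(1) = 0.
# Exact eigenvalues: (k * pi / 2)^2, k = 1, 2, 3, ...
N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                  # Dirichlet BCs: drop boundary rows/cols
lam = np.sort(np.linalg.eigvals(-D2).real)
print(lam[:3])                            # ~ [2.467, 9.870, 22.207]
```

Spectral accuracy shows up immediately: with only 32 points the leading eigenvalues match (kπ/2)² to many digits, which is the property that makes collocation attractive for the tidal problem.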
Computer-Aided Drug-Discovery Techniques that Account for Receptor Flexibility
Durrant, Jacob D.; McCammon, J. Andrew
2010-01-01
Protein flexibility plays a critical role in ligand binding to both orthosteric and allosteric sites. We here review some of the computer-aided drug-design techniques currently used to account for protein flexibility, ranging from methods that probe local receptor flexibility in the region of the protein immediately adjacent to the binding site, to those that account for general flexibility in all protein regions. PMID:20888294
Computer-aided drug-discovery techniques that account for receptor flexibility.
Durrant, Jacob D; McCammon, J Andrew
2010-12-01
Protein flexibility plays a critical role in ligand binding to both orthosteric and allosteric sites. We here review some of the computer-aided drug-design techniques currently used to account for protein flexibility, ranging from methods that probe local receptor flexibility in the region of the protein immediately adjacent to the binding site, to those that account for general flexibility in all protein regions.
ERIC Educational Resources Information Center
Lai, Ming-Ling
2008-01-01
Purpose: This study aims to assess the state of technology readiness of professional accounting students in Malaysia, to examine their level of internet self-efficacy, to assess their prior computing experience, and to explore if they are satisfied with the professional course that they are pursuing in improving their technology skills.…
ERIC Educational Resources Information Center
Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.
2011-01-01
Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…
Written and Computer-Mediated Accounting Communication Skills: An Employer Perspective
ERIC Educational Resources Information Center
Jones, Christopher G.
2011-01-01
Communication skills are a fundamental personal competency for a successful career in accounting. What is not so obvious is the specific written communication skill set employers look for and the extent those skills are computer mediated. Using survey research, this article explores the particular skills employers desire and their satisfaction…
Approximate Bayesian computation with functional statistics.
Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K
2013-03-26
Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
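A minimal rejection-ABC sketch with a weighted distance between functional statistics: here the "functional statistic" is an empirical CDF on a grid, and the weights are a simple inverse-variance heuristic from pilot simulations, standing in for the paper's optimized weights. The model, prior, and all tuning constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def functional_stat(sample, grid):
    """Empirical CDF evaluated on a grid: a simple 'functional' summary."""
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

grid = np.linspace(-3, 5, 20)
theta_true = 1.5
observed = functional_stat(rng.normal(theta_true, 1.0, 200), grid)

# Pilot simulations estimate per-component variability -> heuristic weights
pilot = np.array([functional_stat(rng.normal(rng.uniform(-2, 4), 1.0, 200), grid)
                  for _ in range(200)])
w = 1.0 / (pilot.var(axis=0) + 1e-9)

# Rejection ABC: simulate from the prior, keep parameters whose weighted
# distance to the observed functional statistic is smallest
thetas = rng.uniform(-2, 4, 5000)
dist = np.empty(5000)
for i, th in enumerate(thetas):
    d = functional_stat(rng.normal(th, 1.0, 200), grid) - observed
    dist[i] = np.sqrt((w * d * d).sum())
accepted = thetas[dist <= np.quantile(dist, 0.01)]   # keep the best 1%
print(accepted.mean())                                # ~ theta_true
```

The high dimension and dependence among the CDF components are exactly the difficulty the paper addresses; the weighting here merely makes the components comparable in scale.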
Computer Games Functioning as Motivation Stimulants
ERIC Educational Resources Information Center
Lin, Grace Hui Chin; Tsai, Tony Kung Wan; Chien, Paul Shih Chieh
2011-01-01
Numerous scholars have recommended that computer games can function as influential motivational stimulants for English learning, showing benefits as learning tools (Clarke and Dede, 2007; Dede, 2009; Klopfer and Squire, 2009; Liu and Chu, 2010; Mitchell, Dede & Dunleavy, 2009). This study aimed to further test and verify the above suggestion,…
Computer simulation as a teaching aid in pharmacy management--Part 1: Principles of accounting.
Morrison, D J
1987-06-01
The need for pharmacists to develop management expertise through participation in formal courses is now widely acknowledged. Many schools of pharmacy lay the foundations for future management training by providing introductory courses as an integral or elective part of the undergraduate syllabus. The benefit of such courses may, however, be limited by the lack of opportunity for the student to apply the concepts and procedures in a practical working environment. Computer simulations provide a means to overcome this problem, particularly in the field of resource management. In this, the first of two articles, the use of a computer model to demonstrate basic accounting principles is described.
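One accounting principle such a teaching simulation can demonstrate is double-entry balance: every transaction's debits must equal its credits, so the trial balance always nets to zero. The sketch below is my own minimal illustration (the account names and amounts are invented), not the article's model.

```python
from collections import defaultdict

class Ledger:
    """Minimal double-entry ledger: each posting's debits must equal its credits."""
    def __init__(self):
        self.balances = defaultdict(float)   # positive = net debit balance

    def post(self, entries):
        """entries: list of (account, debit, credit) tuples for ONE transaction."""
        total_debit = sum(d for _, d, _ in entries)
        total_credit = sum(c for _, _, c in entries)
        if abs(total_debit - total_credit) > 1e-9:
            raise ValueError("unbalanced transaction")
        for account, debit, credit in entries:
            self.balances[account] += debit - credit

    def trial_balance(self):
        """Sums to zero whenever every posted transaction balanced."""
        return sum(self.balances.values())

ledger = Ledger()
# Buy stock worth 500 on credit: debit Inventory, credit Accounts Payable.
ledger.post([("Inventory", 500, 0), ("Accounts Payable", 0, 500)])
# Cash sale of 300: debit Cash, credit Sales.
ledger.post([("Cash", 300, 0), ("Sales", 0, 300)])
print(ledger.balances["Inventory"], ledger.trial_balance())   # 500.0 0.0
```

Rejecting unbalanced postings at entry time is what keeps the invariant intact, which is the pedagogical point a simulation can make interactively.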
An Embodied Account of Early Executive-Function Development
Gottwald, Janna M.; Achermann, Sheila; Marciszko, Carin; Lindskog, Marcus; Gredebäck, Gustaf
2016-01-01
The importance of executive functioning for later life outcomes, along with its potential to be positively affected by intervention programs, motivates the need to find early markers of executive functioning. In this study, 18-month-olds performed three executive-function tasks—involving simple inhibition, working memory, and more complex inhibition—and a motion-capture task assessing prospective motor control during reaching. We demonstrated that prospective motor control, as measured by the peak velocity of the first movement unit, is related to infants’ performance on simple-inhibition and working memory tasks. The current study provides evidence that motor control and executive functioning are intertwined early in life, which suggests an embodied perspective on executive-functioning development. We argue that executive functions and prospective motor control develop from a common source and a single motive: to control action. This is the first demonstration that low-level movement planning is related to higher-order executive control early in life. PMID:27765900
The intrinsic quasar luminosity function: Accounting for accretion disk anisotropy
DiPompeo, M. A.; Myers, A. D.; Brotherton, M. S.; Runnoe, J. C.; Green, R. F.
2014-05-20
Quasar luminosity functions are a fundamental probe of the growth and evolution of supermassive black holes. Measuring the intrinsic luminosity function is difficult in practice, due to a multitude of observational and systematic effects. As sample sizes increase and measurement errors drop, characterizing the systematic effects is becoming more important. It is well known that the continuum emission from the accretion disk of quasars is anisotropic—in part due to its disk-like structure—but current luminosity function calculations effectively assume isotropy over the range of unobscured lines of sight. Here, we provide the first steps in characterizing the effect of random quasar orientations and simple models of anisotropy on observed luminosity functions. We find that the effect of orientation is not insignificant and exceeds other potential corrections such as those from gravitational lensing of foreground structures. We argue that current observational constraints may overestimate the intrinsic luminosity function by as much as a factor of ∼2 on the bright end. This has implications for models of quasars and their role in the universe, such as quasars' contribution to cosmological backgrounds.
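The qualitative effect of orientation on an observed luminosity function can be seen in a toy Monte Carlo (my own sketch, not the paper's model): draw a steep power-law intrinsic luminosity function, apply a simple foreshortened-disk anisotropy over unobscured sightlines, and compare bright-end counts. The slope, sightline range, and 2·cosθ factor are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy intrinsic luminosity function: power law N(L) ~ L^a on [1, 100], a = -2.5,
# sampled by inverse-CDF transform.
n = 200_000
a = -2.5
L1, L2 = 1.0, 100.0
u = rng.random(n)
L = ((L2 ** (a + 1) - L1 ** (a + 1)) * u + L1 ** (a + 1)) ** (1.0 / (a + 1))

# Random unobscured sightlines: cos(theta) uniform on [cos 60 deg, 1],
# with a simple disk-like anisotropy factor 2*cos(theta).
mu = rng.uniform(0.5, 1.0, n)
L_obs = L * 2.0 * mu

bright = 50.0
print((L_obs > bright).sum(), (L > bright).sum())
```

Because the steep power law pushes many moderate sources above any fixed bright threshold once an orientation-dependent boost is applied, the observed bright-end count exceeds the intrinsic one, echoing the paper's factor-of-~2 caution.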
Accounting for a Functional Category: German "Drohen" "to Threaten"
ERIC Educational Resources Information Center
Heine, Bernd; Miyashita, Hiroyuki
2008-01-01
In many languages there are words that behave like lexical verbs on the one hand and like functional categories expressing distinctions of tense, aspect, modality, etc. on the other. The grammatical status of such words is frequently controversial; while some authors treat them as belonging to one and the same grammatical category, others…
The emerging discipline of Computational Functional Anatomy
Miller, Michael I.; Qiu, Anqi
2010-01-01
Computational Functional Anatomy (CFA) is the study of functional and physiological response variables in anatomical coordinates. For this we focus on two things: (i) the construction of bijections (via diffeomorphisms) between the coordinatized manifolds of human anatomy, and (ii) the transfer (group action and parallel transport) of functional information into anatomical atlases via these bijections. We review advances in the unification of the bijective comparison of anatomical submanifolds via point-sets including points, curves and surface triangulations as well as dense imagery. We examine the transfer via these bijections of functional response variables into anatomical coordinates via group action on scalars and matrices in DTI as well as parallel transport of metric information across multiple templates which preserves the inner product. PMID:19103297
Analysis of Ventricular Function by Computed Tomography
Rizvi, Asim; Deaño, Roderick C.; Bachman, Daniel P.; Xiong, Guanglei; Min, James K.; Truong, Quynh A.
2014-01-01
The assessment of ventricular function, cardiac chamber dimensions and ventricular mass is fundamental for clinical diagnosis, risk assessment, therapeutic decisions, and prognosis in patients with cardiac disease. Although cardiac computed tomography (CT) is a noninvasive imaging technique often used for the assessment of coronary artery disease, it can also be utilized to obtain important data about left and right ventricular function and morphology. In this review, we will discuss the clinical indications for the use of cardiac CT for ventricular analysis, review the evidence on the assessment of ventricular function compared to existing imaging modalities such cardiac MRI and echocardiography, provide a typical cardiac CT protocol for image acquisition and post-processing for ventricular analysis, and provide step-by-step instructions to acquire multiplanar cardiac views for ventricular assessment from the standard axial, coronal, and sagittal planes. Furthermore, both qualitative and quantitative assessments of ventricular function as well as sample reporting are detailed. PMID:25576407
New Computer Simulations of Macular Neural Functioning
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.
1994-01-01
We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.
Material reconstruction for spectral computed tomography with detector response function
NASA Astrophysics Data System (ADS)
Liu, Jiulong; Gao, Hao
2016-11-01
Different from conventional computed tomography (CT), spectral CT using energy-resolved photon-counting detectors is able to provide unprecedented material-composition information. However, accurate spectral CT needs to account for the detector response function (DRF), which is often distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with DRF. The simulation results suggest that the proposed methods reconstructed more accurate material compositions than the conventional method without DRF. Moreover, the proposed linearized method, with a linear data fidelity obtained from spectral resampling, improved reconstruction quality relative to the nonlinear method based directly on the nonlinear data fidelity.
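Why ignoring the DRF biases a material decomposition can be shown with a deliberately linear toy (my own sketch, not the paper's forward model): measurements pass through a response matrix R that blurs counts across energy bins, and only the fit that includes R recovers the material coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: y = R @ A @ x
# A: spectral sensitivity of 6 energy bins to 2 basis materials (invented values)
# R: detector response that blurs counts into neighbouring bins
A = rng.uniform(0.5, 2.0, (6, 2))
R = np.eye(6) * 0.8 + np.diag(np.full(5, 0.1), 1) + np.diag(np.full(5, 0.1), -1)
x_true = np.array([1.0, 2.0])
y = R @ A @ x_true

x_no_drf = np.linalg.lstsq(A, y, rcond=None)[0]        # ignores the DRF
x_with_drf = np.linalg.lstsq(R @ A, y, rcond=None)[0]  # accounts for the DRF

print(np.abs(x_no_drf - x_true).max(), np.abs(x_with_drf - x_true).max())
```

In the noiseless toy the DRF-aware fit is exact while the DRF-blind fit is biased; the paper's nonlinear and linearized (spectrally resampled) methods address the same mismatch in the realistic nonlinear setting.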
Computer network defense through radial wave functions
NASA Astrophysics Data System (ADS)
Malloy, Ian J.
The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on the use of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as a non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.
Computational functions in biochemical reaction networks.
Arkin, A; Ross, J
1994-01-01
In prior work we demonstrated the implementation of logic gates, sequential computers (universal Turing machines), and parallel computers by means of the kinetics of chemical reaction mechanisms. In the present article we develop this subject further by first investigating the computational properties of several enzymatic (single and multiple) reaction mechanisms: we show their steady states are analogous to either Boolean or fuzzy logic gates. Nearly perfect digital function is obtained only in the regime in which the enzymes are saturated with their substrates. With these enzymatic gates, we construct combinational chemical networks that execute a given truth-table. The dynamic range of a network's output is strongly affected by "input/output matching" conditions among the internal gate elements. We find a simple mechanism, similar to the interconversion of fructose-6-phosphate between its two bisphosphate forms (fructose-1,6-bisphosphate and fructose-2,6-bisphosphate), that functions analogously to an AND gate. When the simple model is supplanted with one in which the enzyme rate laws are derived from experimental data, the steady state of the mechanism functions as an asymmetric fuzzy aggregation operator with properties akin to a fuzzy AND gate. The qualitative behavior of the mechanism does not change when situated within a large model of glycolysis/gluconeogenesis and the TCA cycle. The mechanism, in this case, switches the pathway's mode from glycolysis to gluconeogenesis in response to chemical signals of low blood glucose (cAMP) and abundant fuel for the TCA cycle (acetyl coenzyme A). PMID:7948674
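The saturation argument — near-digital AND behavior when enzymes are saturated with their substrates — can be sketched with a generic two-substrate Michaelis-Menten rate law (this is my own illustration with invented parameters, not the fructose-6-phosphate mechanism of the paper).

```python
def mm_and_gate(s1, s2, vmax=1.0, k=0.05):
    """Two-substrate Michaelis-Menten rate: behaves as a near-digital AND gate
    when inputs are either 0 or far above the half-saturation constant k."""
    return vmax * (s1 / (k + s1)) * (s2 / (k + s2))

# Logical inputs encoded as concentrations: 0 -> 0.0, 1 -> 1.0 (saturating, 1 >> k)
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), round(mm_and_gate(a, b), 3))
```

Only when both substrate concentrations are high does the rate approach vmax; away from saturation (inputs comparable to k) the same expression grades smoothly, which is the fuzzy-AND regime the abstract describes.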
Algorithms for Computing the Lag Function.
1981-03-27
and S. J. Giner. Subject: Algorithms for Computing the Lag Function. References: see p. 27. Abstract: This memorandum provides a scheme for the numerical...highly oscillatory, and with singularities at the end points. 27 March 1981.
Fedorov, Sergey V; Rusakov, Yury Yu; Krivdin, Leonid B
2014-11-01
The main factors affecting the accuracy and computational cost of the calculation of (31)P NMR chemical shifts in a representative series of organophosphorus compounds are examined at the density functional theory (DFT) and second-order Møller-Plesset perturbation theory (MP2) levels. At the DFT level, the best functionals for the calculation of (31)P NMR chemical shifts are those of Keal and Tozer, KT2 and KT3. Both at the DFT and MP2 levels, the most reliable basis sets are those of Jensen, pcS-2 or larger, and those of Pople, 6-311G(d,p) or larger. The reliable basis sets of Dunning's family are those of at least penta-zeta quality, which precludes their practical use. An encouraging finding is that the locally dense basis set approach, which yields a dramatic decrease in computational cost, is essentially justified in the calculation of (31)P NMR chemical shifts to within a 1-2 ppm error. Relativistic corrections to (31)P NMR absolute shielding constants are of major importance, reaching about 20-30 ppm (ca 7%), and improve (not worsen!) the agreement of calculation with experiment. Agreement with experiment can be improved by a further 1-2 ppm by taking solvent effects into account within the integral equation formalism polarizable continuum model solvation scheme. We recommend the GIAO-DFT-KT2/pcS-3//pcS-2 scheme, with relativistic corrections and solvent effects taken into account, as the most versatile computational scheme for the calculation of (31)P NMR chemical shifts, characterized by a mean absolute error of ca 9 ppm over a range of 550 ppm.
Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng
2015-01-01
Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty four third- to fifth-grade children (220 boys and 204 girls, 8.0–11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More important, hierarchical multiple regression showed that figure matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance. PMID:26441740
Computing dispersion interactions in density functional theory
NASA Astrophysics Data System (ADS)
Cooper, V. R.; Kong, L.; Langreth, D. C.
2010-02-01
In this article, techniques for including dispersion interactions within density functional theory are examined. In particular, comparisons are made between four popular methods: dispersion-corrected DFT, pseudopotential correction schemes, symmetry-adapted perturbation theory, and a non-local density functional, the so-called Rutgers-Chalmers van der Waals density functional (vdW-DF). The S22 benchmark data set is used to evaluate the relative accuracy of these methods, and factors such as scalability and transferability are also discussed. We demonstrate that vdW-DF presents an excellent compromise between computational speed and accuracy and lends itself most easily to full-scale application in solid materials. This claim is supported through a brief discussion of a recent large-scale application to H2 in a prototype metal organic framework material (MOF), Zn2BDC2TED. The vdW-DF shows overwhelming promise for first-principles studies of physisorbed molecules in porous extended systems, thereby having broad applicability for studies as diverse as molecular adsorption and storage, battery technology, catalysis and gas separations.
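The simplest of the four families, dispersion-corrected DFT, adds a damped pairwise -C6/r^6 term to the DFT energy. The sketch below is a DFT-D2-style correction with hypothetical parameters (not tabulated values, and not vdW-DF, which is a non-local functional rather than a pairwise add-on).

```python
import numpy as np

def d2_dispersion(coords, c6, r0, s6=1.0, d=20.0):
    """Pairwise -C6/r^6 dispersion energy with a Fermi-type damping function
    (DFT-D2-style sketch; c6/r0 values here are hypothetical, not tabulated)."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])            # geometric-mean combining rule
            r0ij = r0[i] + r0[j]                     # sum of vdW radii
            fdamp = 1.0 / (1.0 + np.exp(-d * (r / r0ij - 1.0)))
            e -= s6 * c6ij / r ** 6 * fdamp          # attractive correction
    return e

# A hypothetical pair of atoms 3.5 length-units apart
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.5]])
print(d2_dispersion(coords, c6=[1.75, 1.75], r0=[1.5, 1.5]))
```

The damping function switches the correction off at short range, where the underlying functional already describes the interaction, which is the design choice shared by the whole dispersion-corrected DFT family.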
DeRobertis, Christopher V.; Lu, Yantian T.
2010-02-23
A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
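The check-then-assign flow described in the abstract can be sketched in a few lines (registry names, data layout, and the starting id are invented for illustration; the patent's operating-system mechanics are not reproduced).

```python
def allocate_id(registries, start=1000):
    """Find the first candidate id not already assigned in ANY configured registry."""
    used = set()
    for registry in registries.values():
        used.update(registry.values())          # ids assigned anywhere in the cluster
    candidate = start
    while candidate in used:
        candidate += 1
    return candidate

def create_user(registries, target, name, start=1000):
    """Create `name` in the target registry with a cluster-wide unique id."""
    uid = allocate_id(registries, start)
    registries[target][name] = uid
    return uid

registries = {
    "files": {"alice": 1000, "bob": 1001},   # e.g. a local passwd-style registry
    "ldap": {"carol": 1002},                 # e.g. a directory-service registry
}
print(create_user(registries, "ldap", "dave"))   # 1003: unique across both registries
```

The essential point is that the uniqueness check spans every configured registry, not just the one receiving the new account, so the same number can never be handed out twice in the cluster.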
Computational based functional analysis of Bacillus phytases.
Verma, Anukriti; Singh, Vinay Kumar; Gaur, Smriti
2016-02-01
Phytase is an enzyme which catalyzes the total hydrolysis of phytate to less phosphorylated myo-inositol derivatives and inorganic phosphate; it digests the otherwise indigestible phytate fraction present in seeds and grains and therefore provides digestible phosphorus, calcium and other mineral nutrients. Phytases are frequently added to the feed of monogastric animals so that the bioavailability of phytic acid-bound phosphate increases, ultimately enhancing the nutritional value of diets. Bacillus phytase is very suitable for use in animal feed because of its optimum pH and excellent thermal stability. The present study aims to perform an in silico comparative characterization and functional analysis of phytases from Bacillus amyloliquefaciens to explore their physico-chemical properties using various bio-computational tools. All the proteins are acidic and thermostable and can be used as suitable candidates in the feed industry.
ERIC Educational Resources Information Center
Peng, Jacob C.
2009-01-01
The author investigated whether students' effort in working on homework problems was affected by their need for cognition, their perception of the system, and their computer efficacy when instructors used an online system to collect accounting homework. Results showed that individual intrinsic motivation and computer efficacy are important factors…
ERIC Educational Resources Information Center
Basile, Anthony; D'Aquila, Jill M.
2002-01-01
Accounting students received either traditional instruction (n=46) or used computer-mediated communication and WebCT course management software. There were no significant differences in attitudes about the course. However, computer users were more positive about course delivery and course management tools. (Contains 17 references.) (SK)
Computation of the lattice Green function for a dislocation
NASA Astrophysics Data System (ADS)
Tan, Anne Marie Z.; Trinkle, Dallas R.
2016-08-01
Modeling isolated dislocations is challenging due to their long-ranged strain fields. Flexible boundary condition methods capture the correct long-range strain field of a defect by coupling the defect core to an infinite harmonic bulk through the lattice Green function (LGF). To improve the accuracy and efficiency of flexible boundary condition methods, we develop a numerical method to compute the LGF specifically for a dislocation geometry; in contrast to previous methods, where the LGF was computed for the perfect bulk as an approximation for the dislocation. Our approach directly accounts for the topology of a dislocation, and the errors in the LGF computation converge rapidly for edge dislocations in a simple cubic model system as well as in BCC Fe with an empirical potential. When used within the flexible boundary condition approach, the dislocation LGF relaxes dislocation core geometries in fewer iterations than when the perfect bulk LGF is used as an approximation for the dislocation, making a flexible boundary condition approach more efficient.
Integrating Computers into the Accounting Curriculum Using an IBM PC Network. Final Report.
ERIC Educational Resources Information Center
Shaoul, Jean
Noting the increased use of microcomputers in commerce and the accounting profession, the Department of Accounting and Finance at the University of Manchester recognized the importance of integrating microcomputers into the accounting curriculum and requested and received a grant to develop an integrated study environment in which students would…
A Tutorial on Analog Computation: Computing Functions over the Reals
NASA Astrophysics Data System (ADS)
Campagnolo, Manuel Lameiras
The best known programmable analog computing device is the differential analyser. The concept for the device dates back to Lord Kelvin and his brother James Thomson in 1876, and it was constructed in 1932 at MIT under the supervision of Vannevar Bush. The MIT differential analyser used wheel-and-disk mechanical integrators and was able to solve sixth-order differential equations. During the 1930s, more powerful differential analysers were built. In 1941, Claude Shannon showed that, given a sufficient number of integrators, the machines could, in theory, precisely generate the solutions of all differentially algebraic equations. Shannon's mathematical model of the differential analyser is known as the GPAC.
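A differential analyser computes by wiring integrators together so that each integrates the output of another. Two integrators wired as y' = z, z' = -y generate sin(t); in the digital sketch below, symplectic Euler stepping stands in for the mechanical wheel-and-disk integrators (the step size is an arbitrary choice for the illustration).

```python
import numpy as np

# Two coupled 'integrators' wired as in a differential analyser:
#   y' = z,  z' = -y,  with y(0) = 0, z(0) = 1  ->  y(t) = sin(t).
dt, T = 1e-4, np.pi / 2
y, z = 0.0, 1.0
for _ in range(int(T / dt)):
    y += z * dt        # first integrator: feeds on z
    z += -y * dt       # second integrator: feeds on (negated) updated y
print(y)               # ~ sin(pi/2) = 1
```

Feeding each integrator's output back into the other is exactly the closed-loop wiring that lets the machine generate solutions of differential equations rather than merely evaluate formulas.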
Moran, Rani; Teodorescu, Andrei R; Usher, Marius
2015-05-01
Confidence judgments are pivotal in the performance of daily tasks and in many domains of scientific research, including the behavioral sciences, psychology and neuroscience. Positive resolution, i.e., the positive correlation between choice correctness and choice confidence, is a critical property of confidence judgments, which justifies their ubiquity. In the current paper, we study the mechanism underlying confidence judgments and their resolution by investigating the source of the inputs for the confidence calculation. We focus on the intriguing debate between two families of confidence theories. According to single-stage theories, confidence is based on the same information that underlies the decision (or on some other aspect of the decision process), whereas according to dual-stage theories, confidence is affected by novel information that is collected after the decision was made. In three experiments, we support the case for dual-stage theories by showing that post-choice perceptual availability manipulations exert a causal effect on confidence resolution in the decision-followed-by-confidence paradigm. These findings establish the role of RT2, the duration of the post-choice information-integration stage, as a prime dependent variable that theories of confidence should account for. We then present a novel list of robust empirical patterns ('hurdles') involving RT2 to guide further theorizing about confidence judgments. Finally, we present a unified computational dual-stage model for choice, confidence, and their latencies, namely the collapsing confidence boundary model (CCB). According to CCB, a diffusion-process choice is followed by a second evidence-integration stage towards a stochastic collapsing confidence boundary. Despite its simplicity, CCB clears the entire list of hurdles.
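A minimal CCB-style sketch: a drift-diffusion stage produces the choice, then a second integration stage runs toward a boundary that collapses over time, yielding a variable RT2 and post-choice-information-based confidence. All parameters are invented, and using a fresh stage-2 accumulator is a simplification of the model.

```python
import random

def ccb_trial(drift=0.4, threshold=1.0, conf_bound=2.0, collapse=0.02,
              dt=0.01, noise=1.0, rng=None):
    """One trial: diffusion to a choice boundary, then post-choice evidence
    integration toward a collapsing confidence boundary (toy CCB-style sketch)."""
    rng = rng or random.Random(0)
    # Stage 1: choice by drift-diffusion
    x, steps = 0.0, 0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        steps += 1
    choice = 1 if x > 0 else -1
    rt1 = steps * dt
    # Stage 2: keep integrating evidence; the boundary collapses, so RT2 varies
    y, steps2, bound = 0.0, 0, conf_bound
    while abs(y) < bound:
        y += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        steps2 += 1
        bound *= 1.0 - collapse * dt
    confident = (y > 0) == (choice > 0)   # post-choice evidence agrees with choice?
    return choice, confident, rt1, steps2 * dt

rng = random.Random(7)
agree_correct, agree_error, n_err = 0, 0, 0
for _ in range(300):
    choice, confident, rt1, rt2 = ccb_trial(rng=rng)
    if choice == 1:                        # drift > 0, so choice 1 is 'correct'
        agree_correct += confident
    else:
        n_err += 1
        agree_error += confident
print(agree_correct, n_err, agree_error)
```

Because stage 2 draws fresh, drift-informed evidence, it tends to agree with correct choices and disagree with errors, which is the dual-stage route to positive resolution.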
Saint-Georges, Catherine; Mahdhaoui, Ammar; Chetouani, Mohamed; Cassel, Raquel S.; Laznik, Marie-Christine; Apicella, Fabio; Muratori, Pietro; Maestro, Sandra; Muratori, Filippo; Cohen, David
2011-01-01
Background To assess whether taking interaction synchrony into account would help to better differentiate autism (AD) from intellectual disability (ID) and typical development (TD) in family home movies of infants aged less than 18 months, we used computational methods. Methodology and Principal Findings First, we analyzed interactive sequences extracted from home movies of children with AD (N = 15), ID (N = 12), or TD (N = 15) through the Infant and Caregiver Behavior Scale (ICBS). Second, discrete behaviors of the baby (BB) and caregiver (CG) co-occurring within 3 seconds were selected as single interactive patterns (or dyadic events) for analysis of the two directions of interaction (CG→BB and BB→CG) by group and semester. To do so, we used a Markov assumption, a generalized linear mixed model, and non-negative matrix factorization. Compared to TD children, babies with AD exhibit a growing deviant development of interactive patterns, whereas those with ID rather show an initial developmental delay. Parents of children with AD or ID do not differ much from parents of TD children when responding to their child. However, when initiating interaction, they use more touching and regulation-up behaviors as early as the first semester. Conclusion When studying interactive patterns, deviant autistic behaviors appear before 18 months. Parents seem to sense the lack of interactive initiative and responsiveness of their babies and try to increasingly supply soliciting behaviors. Thus we stress that credence should be given to parents' intuition, as they recognize, long before diagnosis, the pathological process through the interactive pattern with their child. PMID:21818320
Computational Interpretations of Analysis via Products of Selection Functions
NASA Astrophysics Data System (ADS)
Escardó, Martín; Oliva, Paulo
We show that the computational interpretation of full comprehension via two well-known functional interpretations (dialectica and modified realizability) corresponds to two closely related infinite products of selection functions.
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative computer-aided procedure was developed that identifies boiler transfer functions from frequency-response data. The method uses the frequency-response data to obtain a satisfactory transfer function for both high and low vapor exit quality data.
45 CFR 302.20 - Separation of cash handling and accounting functions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 2 2011-10-01 2011-10-01 false Separation of cash handling and accounting functions. 302.20 Section 302.20 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD..., DEPARTMENT OF HEALTH AND HUMAN SERVICES STATE PLAN REQUIREMENTS § 302.20 Separation of cash handling...
45 CFR 302.20 - Separation of cash handling and accounting functions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 2 2014-10-01 2012-10-01 true Separation of cash handling and accounting functions. 302.20 Section 302.20 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD..., DEPARTMENT OF HEALTH AND HUMAN SERVICES STATE PLAN REQUIREMENTS § 302.20 Separation of cash handling...
Computer program for Bessel and Hankel functions
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Saule, Arthur V.; Rice, Edward J.; Clark, Bruce J.
1991-01-01
A set of FORTRAN subroutines for calculating Bessel and Hankel functions is presented. The routines calculate Bessel and Hankel functions of the first and second kinds, as well as their derivatives, for wide ranges of integer order and real or complex argument in single or double precision. Depending on the order and argument, one of three evaluation methods is used: the power series definition, an Airy function expansion, or an asymptotic expansion. Routines to calculate Airy functions and their derivatives are also included.
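The FORTRAN routines themselves are not reproduced here; as an illustrative Python sketch, the power-series definition — one of the three evaluation methods the abstract names — computes J_n for integer order and small argument:

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind via its power-series definition,
    suitable for integer order n >= 0 and small |x|."""
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

print(bessel_j(0, 1.0))  # ~0.765198
```

A quick consistency check is the recurrence J_{n-1}(x) + J_{n+1}(x) = (2n/x) J_n(x); for large arguments a production routine would switch to the Airy-function or asymptotic expansions, exactly as the abstract describes.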
A computational account of the development of the generalization of shape information.
Doumas, Leonidas A A; Hummel, John E
2010-05-01
Abecassis, Sera, Yonas, and Schwade (2001) showed that young children represent shapes more metrically, and perhaps more holistically, than do older children and adults. How does a child transition from representing objects and events as undifferentiated wholes to representing them explicitly in terms of their attributes? According to RBC (Recognition-by-Components theory; Biederman, 1987), objects are represented as collections of categorical geometric parts ("geons") in particular categorical spatial relations. We propose that the transition from holistic to more categorical visual shape processing is a function of the development of geon-like representations via a process of progressive intersection discovery. We present an account of this transition in terms of DORA (Doumas, Hummel, & Sandhofer, 2008), a model of the discovery of relational concepts. We demonstrate that DORA can learn representations of single geons by comparing objects composed of multiple geons. In addition, as DORA is learning it follows the same performance trajectory as children, originally generalizing shape more metrically/holistically and eventually generalizing categorically.
Computing black hole partition functions from quasinormal modes
Arnold, Peter; Szepietowski, Phillip; Vaman, Diana
2016-07-07
We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. Furthermore, we then discuss the application of such techniques to more complicated spacetimes.
Computing black hole partition functions from quasinormal modes
NASA Astrophysics Data System (ADS)
Arnold, Peter; Szepietowski, Phillip; Vaman, Diana
2016-07-01
We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. We then discuss the application of such techniques to more complicated spacetimes.
Computer Use and the Relation between Age and Cognitive Functioning
ERIC Educational Resources Information Center
Soubelet, Andrea
2012-01-01
This article investigates whether computer use for leisure could mediate or moderate the relations between age and cognitive functioning. Findings supported smaller age differences in measures of cognitive functioning for people who reported spending more hours using a computer. Because of the cross-sectional design of the study, two alternative…
ERIC Educational Resources Information Center
Fox, Janna; Cheng, Liying
2015-01-01
In keeping with the trend to elicit multiple stakeholder responses to operational tests as part of test validation, this exploratory mixed methods study examines test-taker accounts of an Internet-based (i.e., computer-administered) test in the high-stakes context of proficiency testing for university admission. In 2013, as language testing…
Singular Function Integration in Computational Physics
NASA Astrophysics Data System (ADS)
Hasbun, Javier
2009-03-01
In teaching computational methods in the undergraduate physics curriculum, the standard integration approaches taught include the rectangular, trapezoidal, Simpson, and Romberg rules, among others. Over time, these techniques have proven invaluable, and students are encouraged to employ the most efficient method that is expected to perform best when applied to a given problem. However, some physics research applications require techniques that can handle singularities. While decreasing the step size in traditional approaches is an alternative, this may not always work, and repetitive refinement makes this route even more inefficient. Here, I present two existing integration rules designed to handle singular integrals. I compare them to traditional rules as well as to the exact analytic results. I suggest that it is perhaps time to include such approaches in the undergraduate computational physics course.
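The abstract does not name its two rules, so as a hedged illustration of the general point, the sketch below compares a standard midpoint rule applied directly to an integrand with an integrable endpoint singularity against the same rule after a singularity-removing substitution (x = u², a common textbook device; the integrand and step counts are arbitrary choices):

```python
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule on [a, b] with n panels."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Integrand with an integrable singularity at x = 0.
f = lambda x: math.exp(x) / math.sqrt(x)
exact = 2.9253034   # integral of e^x / sqrt(x) on [0,1] = sqrt(pi) * erfi(1)

naive = midpoint(f, 0.0, 1.0, 1000)
# Substitution x = u^2 turns the integrand into the smooth 2*e^(u^2).
transformed = midpoint(lambda u: 2.0 * math.exp(u * u), 0.0, 1.0, 1000)
print(abs(naive - exact), abs(transformed - exact))
```

With the same number of function evaluations, the transformed integral is accurate to roughly machine-level midpoint error, while the direct rule retains an O(√h) error from the singular cell — the inefficiency of blind step-size reduction that the abstract alludes to.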
Pair correlation function integrals: Computation and use
NASA Astrophysics Data System (ADS)
Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens
2011-08-01
We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010), 10.1080/08927020903536366; Fluid Phase Equilib. 302, 32 (2011), 10.1016/j.fluid.2010.10.004], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
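The central object here — a total correlation function integral — can be sketched in a few lines. The model g(r) below is a hypothetical dilute hard-sphere form (not from the paper), chosen because the truncated integral then has a simple analytic value to check against:

```python
import math

sigma = 1.0
def g(r):
    """Hypothetical dilute hard-sphere radial distribution function:
    zero inside the core, one outside."""
    return 0.0 if r < sigma else 1.0

# Truncated total correlation function integral G = ∫ 4πr²(g(r) − 1) dr.
R, n = 5.0, 200000
h = R / n
G = h * sum(4.0 * math.pi * r * r * (g(r) - 1.0)
            for r in ((i + 0.5) * h for i in range(n)))
print(G)  # analytic value for this model: -4*pi*sigma**3/3 ≈ -4.18879
```

For a simulated g(r) that still oscillates at the largest sampled r, simple truncation of this sum is exactly the unreliable step the paper's long-distance extension is designed to replace.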
2006-11-03
previous functionals. A related example, in particular a case of DFT failing to account for stereoelectronic effects, was provided by Schreiner et al. …, 8, 3631. (4) Schreiner, P. R.; Fokin, A. A.; Pascal, R. A., Jr.; de Meijere, A. Org. Lett. 2006, 8, 3635. (5) Perdew, J. P.; Burke, K.; Ernzerhof, M. Phys. Rev. Lett. 1996, 77, 3865. (6) Staroverov, V. N.; Scuseria, G. E.; Tao, J.; Perdew, J. P. J. Chem. Phys. 2003, 119, 12129. (7) Becke, A.
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
Computing Partial Transposes and Related Entanglement Functions
NASA Astrophysics Data System (ADS)
Maziero, Jonas
2016-12-01
The partial transpose (PT) is an important function for entanglement testing and quantification and also for the study of geometrical aspects of the quantum state space. In this article, considering general bipartite and multipartite discrete systems, explicit formulas ready for the numerical implementation of the PT and of related entanglement functions are presented and the Fortran code produced for that purpose is described. What is more, we obtain an analytical expression for the Hilbert-Schmidt entanglement of two-qudit systems and for the associated closest separable state. In contrast to previous works on this matter, we only use the properties of the PT, not applying Lagrange multipliers.
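The paper's code is Fortran; as a hedged, pure-Python two-qubit sketch (indices and dimensions as defined in standard PT usage, not taken from the article), the PT permutes the column index of subsystem B with the row index of subsystem B, and a negative eigenvalue of the result witnesses entanglement (the PPT criterion):

```python
def partial_transpose(rho, dA=2, dB=2):
    """Transpose subsystem B of a (dA*dB) x (dA*dB) density matrix:
    out[(a,b'),(a',b)] = rho[(a,b),(a',b')]."""
    d = dA * dB
    out = [[0.0] * d for _ in range(d)]
    for a in range(dA):
        for b in range(dB):
            for a2 in range(dA):
                for b2 in range(dB):
                    out[a * dB + b2][a2 * dB + b] = rho[a * dB + b][a2 * dB + b2]
    return out

# Bell state (|00> + |11>)/sqrt(2): its PT has eigenvalue -1/2.
rho = [[0.0] * 4 for _ in range(4)]
for i, j in [(0, 0), (0, 3), (3, 0), (3, 3)]:
    rho[i][j] = 0.5
pt = partial_transpose(rho)
v = [0.0, 1.0, -1.0, 0.0]        # eigenvector of the PT with eigenvalue -1/2
mv = [sum(pt[i][j] * v[j] for j in range(4)) for i in range(4)]
print(mv)
```

Note that applying the PT twice recovers the original matrix, which is a convenient sanity check for any implementation.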
Enumeration of Bent Boolean Functions by Reconfigurable Computer
2010-05-01
Shafer, J. L.; Schneider, S. W.; Butler, J. T.; Stănică, P.
…it yields a new realization of the transeunt triangle that has less complexity and delay. Finally, we show computational results from a…
…Publishing Company, 1986. [10] D. E. Knuth, The Art of Computer Programming, 2nd Ed., Addison-Wesley Publishing Co., Reading, Menlo Park, London…
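For small n, bentness can be checked by brute force on a general-purpose computer (in contrast to the paper's reconfigurable-hardware enumeration): a Boolean function f on n variables is bent iff every Walsh-Hadamard coefficient W_f(a) = Σ_x (-1)^(f(x) ⊕ a·x) has absolute value 2^(n/2). A minimal sketch:

```python
def is_bent(truth_table):
    """Walsh-Hadamard bentness test: |W_f(a)| = 2^(n/2) for all a."""
    size = len(truth_table)
    n = size.bit_length() - 1
    target = 2 ** (n // 2)
    return n % 2 == 0 and all(
        abs(sum((-1) ** (truth_table[x] ^ (bin(a & x).count("1") % 2))
                for x in range(size))) == target
        for a in range(size))

print(is_bent([0, 0, 0, 1]))  # AND of two variables is bent
print(is_bent([0, 1, 1, 0]))  # XOR is linear, hence not bent
```

Enumerating all 2^(2^n) truth tables this way becomes infeasible already at n = 6, which is precisely the motivation for the reconfigurable-computer approach in the title.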
NASA Astrophysics Data System (ADS)
Kulikova, N. V.; Chepurova, V. M.
2009-10-01
So far we have investigated the unperturbed dynamics of meteoroid complexes. Numerical integration of the differential equations of motion in the N-body problem (N = 2-6) with the Everhart algorithm, together with the introduction of intermediate hyperbolic orbits built on the basis of the generalized problem of two fixed centers, permits taking some gravitational perturbations into account.
A computational account of the production effect: Still playing twenty questions with nature.
Jamieson, Randall K; Mewhort, D J K; Hockley, William E
2016-06-01
People remember words that they read aloud better than words that they read silently, a result known as the production effect. The standing explanation for the production effect is that producing a word renders it distinctive in memory and, thus, memorable at test. By one key account, distinctiveness is defined in terms of sensory feedback. We formalize the sensory-feedback account using MINERVA 2, a standard model of memory. The model accommodates the basic result in recognition, as well as the facts that the mixed-list production effect is larger than its pure-list counterpart, that the production effect is robust to forgetting, and that the production and generation effects have additive influences on performance. A final simulation addresses the strength-based account and suggests that distinguishing a strength-based from a distinctiveness-based explanation will be more difficult than is typically thought. We conclude that the production effect is consistent with existing theory, and we discuss our analysis in relation to Alan Newell's (1973) classic criticism of psychology and his call for an analysis of psychological principles instead of laboratory phenomena.
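The core MINERVA 2 retrieval rule is compact enough to sketch: a probe is compared against every stored trace, similarity is the dot product normalized by the number of relevant features, activation is similarity cubed (preserving sign while amplifying strong matches), and echo intensity is the summed activation. This is a sketch of the model's machinery only, not the authors' full production-effect simulation; the vector sizes and seed are arbitrary:

```python
import random

def echo_intensity(probe, traces):
    """MINERVA 2 retrieval: normalized dot-product similarity,
    cubed, summed over all traces in memory."""
    total = 0.0
    for trace in traces:
        relevant = [j for j in range(len(probe)) if probe[j] or trace[j]]
        s = sum(probe[j] * trace[j] for j in relevant) / len(relevant)
        total += s ** 3          # cubing amplifies close matches
    return total

random.seed(1)
items = [[random.choice([-1, 1]) for _ in range(100)] for _ in range(20)]
memory = items[:10]              # the first ten items were "studied"
print(echo_intensity(items[0], memory))    # studied probe: high intensity
print(echo_intensity(items[15], memory))   # unstudied probe: near zero
```

In the sensory-feedback formalization, aloud items would additionally encode feedback features in their traces, raising their distinctiveness at test; that extension is omitted here.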
Computer-Intensive Algebra and Students' Conceptual Knowledge of Functions.
ERIC Educational Resources Information Center
O'Callaghan, Brian R.
1998-01-01
Describes a research project that examined the effects of the Computer-Intensive Algebra (CIA) and traditional algebra curricula on students' (N=802) understanding of the function concept. Results indicate that CIA students achieved a better understanding of functions and were better at the components of modeling, interpreting, and translating.…
Evaluation of Computer Games for Learning about Mathematical Functions
ERIC Educational Resources Information Center
Tüzün, Hakan; Arkun, Selay; Bayirtepe-Yagiz, Ezgi; Kurt, Funda; Yermeydan-Ugur, Benlihan
2008-01-01
In this study, researchers evaluated the usability of game environments for teaching and learning about mathematical functions. A 3-Dimensional multi-user computer game called as "Quest Atlantis" has been used, and an educational game about mathematical functions has been developed in parallel to the Quest Atlantis' technical and…
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true for continuous-variable as well as discrete-variable systems in odd prime dimensions, two cases that are treated on entirely the same footing. Noting that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
Computer-based accountability system (Phase I) for special nuclear materials at Argonne-West
Ingermanson, R.S.; Proctor, A.E.
1982-05-01
An automated accountability system for special nuclear materials (SNM) is under development at Argonne National Laboratory-West. Phase I of the development effort established the following basic features of the system: a unique file organization allows rapid updating or retrieval of the status of various SNM based on batch number, storage location, serial number, or other attributes; access to the program is controlled by an interactive user interface that can be easily understood by operators with no prior background in electronic data processing; and extensive use of structured programming techniques makes the software package easy to understand and to modify for specific applications. All routines are written in FORTRAN.
PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS
Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.
2013-01-01
Background Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. Design The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full-scale multicenter clinical trial (N=1635). We describe relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits, and describe performance measures (completeness, intra-class correlations). Results Computer-based assessments of cognitive function had consistent relationships across the pilot and full-scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures; however, rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390
Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter
2015-09-01
Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model is then required to compute the joint probability of a set of persons' genetic data given some hypothesis about the pedigree structure. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand; however, using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association, or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X-chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, which provides a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software can perform simulations to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.
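Likelihood computations of this kind rest on the forward recursion for Markov-chain models. The paper's model (separate chains for inheritance patterns and founder alleles, with mutation) is considerably richer; the sketch below shows only the generic recursion such computations build on, with all state spaces and probabilities hypothetical, validated against brute-force path enumeration:

```python
from itertools import product

def forward_likelihood(init, trans, emit, obs):
    """P(obs) under a hidden Markov model: hidden states form a Markov
    chain (cf. inheritance patterns along a chromosome), each state
    emits an observed genotype with probability emit[state][obs]."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

init = [0.6, 0.4]                   # hypothetical hidden-state prior
trans = [[0.9, 0.1], [0.2, 0.8]]    # chain along the chromosome (linkage)
emit = [[0.7, 0.3], [0.1, 0.9]]     # P(observed marker | state)
obs = [0, 1, 1, 0]

lik = forward_likelihood(init, trans, emit, obs)
# Brute-force check: sum over every hidden-state path.
brute = 0.0
for p in product(range(2), repeat=len(obs)):
    prob = init[p[0]] * emit[p[0]][obs[0]]
    for t in range(1, len(obs)):
        prob *= trans[p[t - 1]][p[t]] * emit[p[t]][obs[t]]
    brute += prob
print(lik, brute)
```

The recursion is linear in the number of loci, whereas path enumeration is exponential — the practical point that makes many-marker likelihoods feasible at all.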
Ma, Yuanxiao; Ma, Haijing; Chen, Xu; Ran, Guangming; Zhang, Xing
2017-02-07
People tend to respond to rejection and attack with aggression. The present research examined the modulating role of attachment patterns on provoked aggression following punishment and proposed an executive-functioning account of this modulating influence based on the General Aggression Model. Attachment style was measured using the Experiences in Close Relationships inventory. Experiments 1a, 1b, and 2 adopted a social rejection task and assessed subsequent unprovoked and provoked aggression across attachment patterns. Moreover, Experiments 1b and 2 used a Stroop task to examine whether differences in provoked aggression across attachment patterns are due to the amount of executive functioning available following social rejection, after unprovoked punishment, or even before social rejection. Anxiously attached participants displayed significantly more provoked aggression following unprovoked punishment than securely and avoidantly attached participants in Experiments 1 and 2. Meanwhile, subsequent Stroop tests indicated that anxiously attached participants experienced greater depletion of executive functioning after social rejection and unprovoked aggression. The present findings support the General Aggression Model and suggest that provoked aggression is predicted by attachment patterns in the context of social rejection; different levels of provoked aggression may depend on the degree of executive functioning that individuals preserve in aggressive situations. The current study contributes to our understanding of the importance of attachment patterns in modulating aggressive behavior accompanying unfair social encounters.
Local-basis-function approach to computed tomography
NASA Astrophysics Data System (ADS)
Hanson, K. M.; Wecksung, G. W.
1985-12-01
In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
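A hedged one-dimensional sketch of the kind of basis function meant here: the cubic B-spline kernel is positive, overlaps its neighbors, and has local support |t| < 2, and its integer shifts sum to one (partition of unity), so an expansion f(x) = Σ_k c_k B(x − k) represents constants exactly. (The kernel formula is the standard cubic B-spline, not code from the article.)

```python
def cubic_bspline(t):
    """Cubic B-spline kernel: positive, local support |t| < 2."""
    t = abs(t)
    if t < 1.0:
        return (4.0 - 6.0 * t * t + 3.0 * t ** 3) / 6.0
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

# Partition of unity: shifted copies on a unit grid sum to one.
print(sum(cubic_bspline(0.3 - k) for k in range(-3, 4)))  # ~1.0
```

Because every basis function is a shifted copy of this one kernel, the line or strip integrals the abstract mentions need only be tabulated once and reused across the grid.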
Context-Specific Proportion Congruency Effects: An Episodic Learning Account and Computational Model
Schmidt, James R.
2016-01-01
In the Stroop task, participants identify the print color of color words. The congruency effect is the observation that response times and errors are increased when the word and color are incongruent (e.g., the word “red” in green ink) relative to when they are congruent (e.g., “red” in red). The proportion congruent (PC) effect is the finding that congruency effects are reduced when trials are mostly incongruent rather than mostly congruent. This PC effect can be context-specific. For instance, if trials are mostly incongruent when presented in one location and mostly congruent when presented in another location, the congruency effect is smaller for the former location. Typically, PC effects are interpreted in terms of strategic control of attention in response to conflict, termed conflict adaptation or conflict monitoring. In the present manuscript, however, an episodic learning account is presented for context-specific proportion congruent (CSPC) effects. In particular, it is argued that context-specific contingency learning can explain part of the effect, and context-specific rhythmic responding can explain the rest. Both contingency-based and temporal-based learning can parsimoniously be conceptualized within an episodic learning framework. An adaptation of the Parallel Episodic Processing model is presented. This model successfully simulates CSPC effects, both for contingency-biased and contingency-unbiased (transfer) items. The same fixed-parameter model can explain a range of other findings from the learning, timing, binding, practice, and attentional control domains. PMID:27899907
Kavanagh, Liam C.; Winkielman, Piotr
2016-01-01
There is a broad theoretical and empirical interest in spontaneous mimicry, or the automatic reproduction of a model’s behavior. Evidence shows that people mimic models they like, and that mimicry enhances liking for the mimic. Yet, there is no satisfactory account of this phenomenon, especially in terms of its functional significance. While affiliation is often cited as the driver of mimicry, we argue that mimicry is primarily driven by a learning process that helps to produce the appropriate bodily and emotional responses to relevant social situations. Because the learning process and the resulting knowledge is implicit, it cannot easily be rejected, criticized, revised, and employed by the learner in a deliberative or deceptive manner. We argue that these characteristics will lead individuals to preferentially mimic ingroup members, whose implicit information is worth incorporating. Conversely, mimicry of the wrong person is costly because individuals will internalize “bad habits,” including emotional reactions and mannerisms indicating wrong group membership. This pattern of mimicry, in turn, means that observed mimicry is an honest signal of group affiliation. We propose that the preferences of models for the mimic stem from this true signal value. Further, just like facial expressions, mimicry communicates a genuine disposition when it is truly spontaneous. Consequently, perceivers are attuned to relevant cues such as appropriate timing, fidelity, and selectivity. Our account, while assuming no previously unknown biological endowments, also explains greater mimicry of powerful people, and why affiliation can be signaled by mimicry of seemingly inconsequential behaviors. PMID:27064398
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1971-01-01
An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts are given.
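The fitting loop described above can be sketched with a standard nonlinear least-squares routine. A minimal sketch, assuming a first-order-lag candidate model and synthetic frequency-response data; the model form, parameter names, and values are illustrative, not the report's actual boiler model:

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, w):
    # Candidate first-order-lag transfer function G(jw) = K / (1 + jw*tau)
    K, tau = params
    return K / (1.0 + 1j * w * tau)

def residuals(params, w, measured):
    # Stack real and imaginary parts so that minimizing the residual
    # drives the model locus toward the measured frequency-response locus
    err = model(params, w) - measured
    return np.concatenate([err.real, err.imag])

# Synthetic "measurements" from a known system (K = 2, tau = 0.5)
w = np.logspace(-1, 2, 40)
measured = 2.0 / (1.0 + 1j * w * 0.5)

fit = least_squares(residuals, x0=[1.0, 1.0], args=(w, measured))
K_hat, tau_hat = fit.x
```

Different candidate transfer functions can be swapped in for `model` until the fit is judged satisfactory, mirroring the trial-and-error loop the abstract describes.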
Tempel, David G; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint, this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e., as density functionals). We also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize, two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.
Computational approaches for rational design of proteins with novel functionalities
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643
A large-scale evaluation of computational protein function prediction.
Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-03-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools.
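CAFA-style evaluations score predictors by sweeping a confidence threshold over the predicted terms and reporting the best harmonic mean of precision and recall (Fmax). A minimal single-protein sketch; the real metric averages over all target proteins, and the GO terms and scores below are arbitrary examples:

```python
def f_max(predictions, true_terms, thresholds):
    """Compute an Fmax-style score for one protein.

    predictions: dict mapping term -> confidence score in [0, 1]
    true_terms:  set of experimentally annotated terms
    """
    best = 0.0
    for t in thresholds:
        predicted = {term for term, score in predictions.items() if score >= t}
        if not predicted:
            continue
        tp = len(predicted & true_terms)
        precision = tp / len(predicted)
        recall = tp / len(true_terms)
        if precision + recall > 0:
            best = max(best, 2 * precision * recall / (precision + recall))
    return best

# Toy example: three predicted GO terms, two of which are correct
preds = {"GO:0003677": 0.9, "GO:0005634": 0.7, "GO:0016301": 0.2}
truth = {"GO:0003677", "GO:0005634"}
score = f_max(preds, truth, [i / 10 for i in range(1, 10)])
```

At thresholds between 0.3 and 0.7 the two correct terms are kept and the wrong one dropped, so precision and recall both reach 1 for this toy protein.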
Efficient and accurate computation of the incomplete Airy functions
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1993-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
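The series-plus-asymptotics strategy can be illustrated on the complete Airy function Ai(x), whose small-argument Maclaurin series and large-argument exponential expansion are classical; the incomplete Airy integrals of the paper are handled with the same switch-over idea, though their series coefficients differ. A sketch, not the paper's algorithm:

```python
import math

def airy_series(x, terms=25):
    # Convergent Maclaurin series: Ai(x) = c1*f(x) - c2*g(x)
    c1 = 3 ** (-2 / 3) / math.gamma(2 / 3)
    c2 = 3 ** (-1 / 3) / math.gamma(1 / 3)
    f, g, tf, tg = 1.0, x, 1.0, x
    for k in range(terms):
        tf *= x ** 3 / ((3 * k + 2) * (3 * k + 3))
        tg *= x ** 3 / ((3 * k + 3) * (3 * k + 4))
        f += tf
        g += tg
    return c1 * f - c2 * g

def airy_asymptotic(x):
    # Large-argument expansion with one correction term
    zeta = (2 / 3) * x ** 1.5
    lead = math.exp(-zeta) / (2 * math.sqrt(math.pi) * x ** 0.25)
    return lead * (1 - 5 / (72 * zeta))
```

The two formulas agree in an overlap region (moderate x), which is what makes a combined series/asymptotic evaluation both efficient and accurate.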
A large-scale evaluation of computational protein function prediction
Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-01-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
Robust Computation of Morse-Smale Complexes of Bilinear Functions
Norgard, G; Bremer, P T
2010-11-30
The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.
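The key property the algorithm exploits can be seen directly: a bilinear function f(x, y) = a + b·x + c·y + d·x·y has Hessian [[0, d], [d, 0]] with determinant −d², so any interior critical point is necessarily a saddle. A small sketch of that observation, not the paper's full combinatorial algorithm:

```python
def bilinear_critical_point(a, b, c, d):
    """Interior critical point of f(x, y) = a + b*x + c*y + d*x*y.

    grad f = (b + d*y, c + d*x) vanishes only at (-c/d, -b/d) when
    d != 0; the Hessian [[0, d], [d, 0]] has determinant -d**2 < 0,
    so that point is always a saddle."""
    if d == 0:
        return None  # gradient (b, c) is constant and never vanishes
    return (-c / d, -b / d)

def is_saddle_inside(cell, point):
    # cell = (x0, x1, y0, y1): check the saddle lies in the quad cell
    if point is None:
        return False
    x0, x1, y0, y1 = cell
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

p = bilinear_critical_point(0.0, 1.0, -1.0, 2.0)   # saddle of x - y + 2xy
inside = is_saddle_inside((0.0, 1.0, -1.0, 0.0), p)
```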
Computational design of proteins with novel structure and functions
NASA Astrophysics Data System (ADS)
Wei, Yang; Lu-Hua, Lai
2016-01-01
Computational design of proteins is a relatively new field, where scientists search the enormous sequence space for sequences that can fold into desired structure and perform desired functions. With the computational approach, proteins can be designed, for example, as regulators of biological processes, novel enzymes, or as biotherapeutics. These approaches not only provide valuable information for understanding of sequence-structure-function relations in proteins, but also hold promise for applications to protein engineering and biomedical research. In this review, we briefly introduce the rationale for computational protein design, then summarize the recent progress in this field, including de novo protein design, enzyme design, and design of protein-protein interactions. Challenges and future prospects of this field are also discussed. Project supported by the National Basic Research Program of China (Grant No. 2015CB910300), the National High Technology Research and Development Program of China (Grant No. 2012AA020308), and the National Natural Science Foundation of China (Grant No. 11021463).
Functional Characteristics of Intelligent Computer-Assisted Instruction: Intelligent Features.
ERIC Educational Resources Information Center
Park, Ok-choon
1988-01-01
Examines the functional characteristics of intelligent computer assisted instruction (ICAI) and discusses the requirements of a multidisciplinary cooperative effort of its development. A typical ICAI model is presented and intelligent features of ICAI systems are described, including modeling the student's learning process, qualitative decision…
SNAP: A computer program for generating symbolic network functions
NASA Technical Reports Server (NTRS)
Lin, P. M.; Alderson, G. E.
1970-01-01
The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.
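A modern stand-in for what SNAP did in 1970 is a computer-algebra system. This sketch derives the symbolic network function of a series-R, shunt-C divider by nodal analysis; sympy is an assumption here, obviously not the original program:

```python
import sympy as sp

s, R, C, Vin, Vout = sp.symbols('s R C V_in V_out')

# Kirchhoff's current law at the output node of an R-C low-pass:
# the current through R equals the current charging C
kcl = sp.Eq((Vin - Vout) / R, Vout * s * C)

# Solve for the symbolic network function H(s) = V_out / V_in
H = sp.simplify(sp.solve(kcl, Vout)[0] / Vin)
```

H comes out as 1/(R·C·s + 1), the familiar first-order low-pass network function, with R and C kept symbolic throughout.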
Supporting Executive Functions during Children's Preliteracy Learning with the Computer
ERIC Educational Resources Information Center
Van de Sande, E.; Segers, E.; Verhoeven, L.
2016-01-01
The present study examined how embedded activities to support executive functions helped children to benefit from a computer intervention that targeted preliteracy skills. Three intervention groups were compared on their preliteracy gains in a randomized controlled trial design: an experimental group that worked with software to stimulate early…
Computational Account of Spontaneous Activity as a Signature of Predictive Coding
Koren, Veronika
2017-01-01
Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggests that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network function. PMID:28114353
Computer program for calculating and fitting thermodynamic functions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1992-01-01
A computer program is described which (1) calculates thermodynamic functions (heat capacity, enthalpy, entropy, and free energy) for several optional forms of the partition function, (2) fits these functions to empirical equations by means of a least-squares fit, and (3) calculates, as a function of temperature, heats of formation and equilibrium constants. The program provides several methods for calculating ideal gas properties. For monatomic gases, three methods are given which differ in the technique used for truncating the partition function. For diatomic and polyatomic molecules, five methods are given which differ in the corrections to the rigid-rotator harmonic-oscillator approximation. A method for estimating thermodynamic functions for some species is also given.
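The kind of partition-function-based property such a program computes can be sketched for the simplest case. This is the textbook rigid-rotator harmonic-oscillator heat capacity of an ideal diatomic gas; the formula and N2's vibrational temperature (about 3395 K) are standard reference values, not taken from the report:

```python
import math

R_GAS = 8.314462618  # universal gas constant, J/(mol*K)

def cp_diatomic(T, theta_vib):
    """Cp of an ideal diatomic gas in the rigid-rotator
    harmonic-oscillator approximation.

    Cp/R = 7/2 (translation + rotation + PV work) plus an Einstein
    vibrational term derived from the harmonic partition function."""
    x = theta_vib / T
    vib = x ** 2 * math.exp(x) / (math.exp(x) - 1.0) ** 2
    return R_GAS * (3.5 + vib)
```

At room temperature the vibrational term is nearly frozen out, giving Cp close to 3.5·R for N2; at very high temperature it saturates at the classical limit of 4.5·R.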
A survey of computational intelligence techniques in protein function prediction.
Tiwari, Arvind Kumar; Srivastava, Rajeev
2014-01-01
With the advancement of high-throughput microarray technologies, the number of proteins of unknown function has grown massively. Protein function prediction is one of the most challenging problems in bioinformatics. In the past, homology-based approaches were used to predict protein function, but they fail when a new protein is dissimilar to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, in wide areas of application such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction.
Environment parameters and basic functions for floating-point computation
NASA Technical Reports Server (NTRS)
Brown, W. S.; Feldman, S. I.
1978-01-01
A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
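Python's standard library exposes exactly the analyze/synthesize/scale primitives the proposal describes, so the ideas can be demonstrated directly; the helper names are this sketch's own:

```python
import math
import sys

def analyze(x):
    # Decompose x = m * 2**e with 0.5 <= |m| < 1 (binary radix)
    return math.frexp(x)

def synthesize(m, e):
    # Reassemble a float from fraction and exponent -- exact
    return math.ldexp(m, e)

def scale(x, k):
    # Scaling by a power of the radix is exact: no rounding occurs,
    # subject only to underflow and overflow constraints
    return math.ldexp(x, k)

radix = sys.float_info.radix                        # the floating-point base
m, e = analyze(0.75)                                # (0.75, 0)
roundtrip = synthesize(*analyze(3.141592653589793)) # lossless round trip
scaled = scale(0.1, 8)                              # 0.1 * 2**8, exact
```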
Computing the hadronic vacuum polarization function by analytic continuation
Feng, Xu; Hashimoto, Shoji; Hotzel, Grit; ...
2013-08-29
We propose a method to compute the hadronic vacuum polarization function on the lattice at continuous values of photon momenta bridging between the spacelike and timelike regions. We provide two independent demonstrations to show that this method leads to the desired hadronic vacuum polarization function in Minkowski spacetime. Using the example of the leading-order QCD correction to the muon anomalous magnetic moment, we show that this approach can provide a valuable alternative method for calculations of physical quantities where the hadronic vacuum polarization function enters.
Community-Wide Evaluation of Computational Function Prediction.
Friedberg, Iddo; Radivojac, Predrag
2017-01-01
A biological experiment is the most reliable way of assigning function to a protein. However, in the era of high-throughput sequencing, scientists are unable to carry out experiments to determine the function of every single gene product. Therefore, to gain insights into the activity of these molecules and guide experiments, we must rely on computational means to functionally annotate the majority of sequence data. To understand how well these algorithms perform, we have established a challenge involving a broad scientific community in which we evaluate different annotation methods according to their ability to predict the associations between previously unannotated protein sequences and Gene Ontology terms. Here we discuss the rationale, benefits, and issues associated with evaluating computational methods in an ongoing community-wide challenge.
Gillespie, Dirk; Khair, Aditya S; Bardhan, Jaydeep P; Pennathur, Sumita
2011-07-15
The electrokinetic behavior of nanofluidic devices is dominated by the electrical double layers at the device walls. Therefore, accurate, predictive models of double layers are essential for device design and optimization. In this paper, we demonstrate that density functional theory (DFT) of electrolytes is an accurate and computationally efficient method for computing finite ion size effects and the resulting ion-ion correlations that are neglected in classical double layer theories such as Poisson-Boltzmann. Because DFT is derived from liquid-theory thermodynamic principles, it is ideal for nanofluidic systems with small spatial dimensions, high surface charge densities, high ion concentrations, and/or large ions. Ion-ion correlations are expected to be important in these regimes, leading to nonlinear phenomena such as charge inversion, wherein more counterions adsorb at the wall than is necessary to neutralize its surface charge, leading to a second layer of co-ions. We show that DFT, unlike other theories that do not include ion-ion correlations, can predict charge inversion and other nonlinear phenomena that lead to qualitatively different current densities and ion velocities for both pressure-driven and electro-osmotic flows. We therefore propose that DFT can be a valuable modeling and design tool for nanofluidic devices as they become smaller and more highly charged.
Computation of three-dimensional flows using two stream functions
NASA Technical Reports Server (NTRS)
Greywall, Mahesh S.
1991-01-01
An approach to computing 3-D flows using two stream functions is presented. The method generates a boundary-fitted grid as part of its solution. Two steps commonly used for computing flow fields are combined into a single step in the present approach: (1) boundary-fitted grid generation; and (2) solution of the Navier-Stokes equations on the generated grid. The presented method can be used to directly compute 3-D viscous flows, or the potential flow approximation of this method can be used to generate grids for other algorithms to compute 3-D viscous flows. The independent variables used are chi, a spatial coordinate, and xi and eta, values of stream functions along two sets of suitably chosen intersecting stream surfaces. The dependent variables used are the streamwise velocity and two functions that describe the stream surfaces. Since for a 3-D flow there is no unique way to define two sets of intersecting stream surfaces to cover the given flow, different types of two sets of intersecting stream surfaces are considered. First, the metric of the (chi, xi, eta) curvilinear coordinate system associated with each type is presented. Next, equations for the steady state transport of mass, momentum, and energy are presented in terms of the metric of the (chi, xi, eta) coordinate system. Also included are the inviscid and the parabolized approximations to the general transport equations.
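The identity underlying two-stream-function formulations is that the velocity is the cross product of the two stream-function gradients, u = ∇ξ × ∇η, which is divergence-free by construction. A minimal numerical illustration, assuming constant gradients (a uniform flow); this is not the paper's solver:

```python
import numpy as np

def velocity_from_stream_functions(grad_xi, grad_eta):
    """Velocity of an incompressible 3-D flow written with two stream
    functions: u = grad(xi) x grad(eta)."""
    return np.cross(grad_xi, grad_eta)

# Uniform flow in x: choose xi = y and eta = z, so the stream surfaces
# are coordinate planes whose intersections are streamlines along x
u = velocity_from_stream_functions([0.0, 1.0, 0.0], [0.0, 0.0, 1.0])
```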
Optimization of removal function in computer controlled optical surfacing
NASA Astrophysics Data System (ADS)
Chen, Xi; Guo, Peiji; Ren, Jianfeng
2010-10-01
The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of the removal function, is proposed to solve two problems encountered in the planet-motion and translation-rotation modes: the removal function being far from Gaussian type, and slow convergence of the removal function error. Detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in six cycles to the time duration of the six cycles, which depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and distribution of time-sharing synthesis removal functions are obtained. The evaluation function for optimization is determined by an approaching factor, defined as the ratio of the material removal within the area of half of the polishing tool coverage from the polishing center to the total material removal within the full polishing tool coverage area. After optimization, it is found that the optimized removal function obtained by time-sharing synthesis is closer to the ideal Gaussian type removal function than those obtained by the traditional methods. The time-sharing synthesis method of the removal function provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high
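The synthesis rule itself is simple: the overall removal function is total material removed over all cycles divided by total dwell time. A toy 1-D sketch with six identical Gaussian-like footprints at shifted centers and equal dwell times; the footprint shape, widths, and positions are assumptions for illustration only:

```python
import numpy as np

def overall_removal(removal_maps, cycle_times):
    """Time-sharing synthesis: the effective removal function is the
    total material removed over all cycles divided by total dwell time."""
    total = sum(r * t for r, t in zip(removal_maps, cycle_times))
    return total / sum(cycle_times)

# Six identical Gaussian-like tool footprints, centers shifted to the
# six time-sharing positions across the region of interest
x = np.linspace(-3.0, 3.0, 301)
centers = np.linspace(-1.0, 1.0, 6)
footprints = [np.exp(-((x - c) ** 2) / 0.5) for c in centers]
synthesized = overall_removal(footprints, [1.0] * 6)
```

With a symmetric arrangement the synthesized function is symmetric and smoother than any single footprint, which is the mechanism by which the combined function can be pushed toward an ideal Gaussian shape.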
Zendehrouh, Sareh
2015-11-01
Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. Goal-directed behaviors, however, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive-control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented.
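The model-free ("habitual") half of the dual-system picture can be sketched with tabular delta-rule learning on a two-armed bandit. This illustrates only the habitual controller; the goal-directed (model-based) controller and the conflict-monitoring system of the paper are not sketched, and all parameters are arbitrary:

```python
import random

def habitual_controller(reward_probs, trials=2000, alpha=0.1,
                        epsilon=0.1, seed=1):
    """Model-free learning on a two-armed bandit: action values are
    updated from experienced rewards alone, with no model of the task --
    the RL analogue of habit."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(trials):
        if rng.random() < epsilon:
            a = rng.randrange(2)          # occasional exploration
        else:
            a = q.index(max(q))           # greedy (habitual) choice
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        q[a] += alpha * (reward - q[a])   # delta-rule update
    return q

q_values = habitual_controller([0.8, 0.2])  # arm 0 pays off more often
```

A model-based controller would instead plan over a learned task model; comparing the two on probabilistic tasks is what lets such models speak to ERN/FRN findings.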
Accounting for Finite Size of Ions in Nanofluidic Channels Using Density Functional Theory
NASA Astrophysics Data System (ADS)
McCallum, Christopher; Gillespie, Dirk; Pennathur, Sumita
2016-11-01
The physics of nanofluidic devices is dominated by ion-wall interactions within the electric double layer (EDL). A full understanding of the EDL allows for better exploitation of micro- and nanofluidic devices for applications such as biological separations, desalination, and energy conversion. Although continuum theory is generally used to study the fluidics within these channels, in very confined geometries, high-surface-charge channels, or systems with significant solute concentrations, continuum theories such as Poisson-Boltzmann cease to be valid because the finite size of ions is not considered. Density functional theory (DFT) provides an accurate and efficient method for predicting the concentration of ions and the electrostatic potential near a charged wall because it accounts for more complex electrostatic and hard-sphere correlations. This subsequently allows for a better model of ion flux, fluid flow, and current in electrokinetic systems at high surface charge, in confined geometries, and in concentrated systems. In this work, we present a theoretical approach utilizing DFT to predict unique flow phenomena in nanofluidic, electrokinetic systems. CBET-1402736 from the National Science Foundation.
Computational design of receptor and sensor proteins with novel functions
NASA Astrophysics Data System (ADS)
Looger, Loren L.; Dwyer, Mary A.; Smith, James J.; Hellinga, Homme W.
2003-05-01
The formation of complexes between proteins and ligands is fundamental to biological processes at the molecular level. Manipulation of molecular recognition between ligands and proteins is therefore important for basic biological studies and has many biotechnological applications, including the construction of enzymes, biosensors, genetic circuits, signal transduction pathways and chiral separations. The systematic manipulation of binding sites remains a major challenge. Computational design offers enormous generality for engineering protein structure and function. Here we present a structure-based computational method that can drastically redesign protein ligand-binding specificities. This method was used to construct soluble receptors that bind trinitrotoluene, L-lactate or serotonin with high selectivity and affinity. These engineered receptors can function as biosensors for their new ligands; we also incorporated them into synthetic bacterial signal transduction pathways, regulating gene expression in response to extracellular trinitrotoluene or L-lactate. The use of various ligands and proteins shows that a high degree of control over biomolecular recognition has been established computationally. The biological and biosensing activities of the designed receptors illustrate potential applications of computational design.
Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity
García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto
2017-01-01
Functional Connectivity has been demonstrated to be a key concept for unraveling how the brain balances functional segregation and integration properties while processing information. This work presents a set of open-source tools that significantly increase the computational efficiency of some well-known connectivity indices and Graph-Theory measures. PLV, PLI, ImC, and wPLI as Phase Synchronization measures, Mutual Information as an information-theory-based measure, and Generalized Synchronization indices are computed much more efficiently than in previously available open-source implementations. Furthermore, network-theory measures like Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented, showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function. PMID:28220071
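The simplest of the phase-synchronization indices mentioned, the Phase-Locking Value (PLV), is just the modulus of the mean phase-difference phasor. A minimal sketch on synthetic phase series; real pipelines would first extract instantaneous phases from the recorded signals (e.g. via the Hilbert transform):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-Locking Value: modulus of the mean phase-difference phasor.
    Values near 1 indicate consistently locked phases; near 0, none."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
phase = 2 * np.pi * 10 * t                     # phase of a 10 Hz oscillation
locked = plv(phase, phase + 0.3)               # constant lag -> PLV near 1
rng = np.random.default_rng(0)
unlocked = plv(phase, rng.uniform(0, 2 * np.pi, t.size))  # no relation
```

Vectorizing this computation across all channel pairs at once is the kind of optimization that makes real-time whole-head analysis feasible.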
Computations involving differential operators and their actions on functions
NASA Technical Reports Server (NTRS)
Crouch, Peter E.; Grossman, Robert; Larson, Richard
1991-01-01
The algorithms derived by Grossman and Larson (1989) for rewriting expressions involving differential operators are further developed. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two directions: they are generalized so that they apply to differential operators on groups, and data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both generalizations are needed for applications.
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
Computational prediction of functional abortive RNA in E. coli.
Marcus, Jeremy I; Hassoun, Soha; Nair, Nikhil U
2017-03-24
Failure by RNA polymerase to break contacts with promoter DNA results in release of the bound RNA and re-initiation of transcription. These abortive RNAs were assumed to be non-functional but have recently been shown to affect termination in bacteriophage T7. Little is known about the functional role of these RNAs in other genetic models. Using a computational approach, we investigated whether abortive RNAs could exert function in E. coli. Fragments generated from 3780 transcription units were used as query sequences within their respective transcription units to search for possible binding sites. Sites that fell within known regulatory features were then ranked based on the free energy of hybridization to the abortive RNA. We further hypothesize about mechanisms of regulatory action for a select number of likely matches. Future experimental validation of these putative abortive-mRNA pairs may confirm our findings and promote exploration of functional abortive RNAs (faRNAs) in natural and synthetic systems.
Helie, Sebastien; Chakravarthy, Srinivasa; Moustafa, Ahmed A
2013-12-06
Many computational models of the basal ganglia (BG) have been proposed over the past twenty-five years. While computational neuroscience models have focused on closely matching the neurobiology of the BG, computational cognitive neuroscience (CCN) models have focused on how the BG can be used to implement cognitive and motor functions. This review article focuses on CCN models of the BG and how they use the neuroanatomy of the BG to account for cognitive and motor functions such as categorization, instrumental conditioning, probabilistic learning, working memory, sequence learning, automaticity, reaching, handwriting, and eye saccades. A total of 19 BG models accounting for one or more of these functions are reviewed and compared. The review concludes with a discussion of the limitations of existing CCN models of the BG and prescriptions for future modeling, including the need for computational models of the BG that can simultaneously account for cognitive and motor functions, and the need for a more complete specification of the role of the BG in behavioral functions.
Pechey, Rachel; Couturier, Dominique-Laurent; Deary, Ian J.; Marteau, Theresa M.
2016-01-01
Objective: Executive function, impulsivity, and intelligence are correlated markers of cognitive resource that predict health-related behaviours. It is unknown whether executive function and impulsivity are unique predictors of these behaviours after accounting for intelligence. Methods: Data from 6069 participants from the Avon Longitudinal Study of Parents and Children were analysed to investigate whether components of executive function (selective attention, attentional control, working memory, and response inhibition) and impulsivity (parent-rated) measured between ages 8 and 10 predicted having ever drunk alcohol, having ever smoked, fruit and vegetable consumption, physical activity, and overweight at age 13, after accounting for intelligence at age 8 and childhood socioeconomic characteristics. Results: Higher intelligence predicted having drunk alcohol, not smoking, greater fruit and vegetable consumption, and not being overweight. After accounting for intelligence, impulsivity predicted alcohol use (odds ratio = 1.10; 99% confidence interval = 1.02, 1.19) and smoking (1.22; 1.11, 1.34). Working memory predicted not being overweight (0.90; 0.81, 0.99). Conclusions: After accounting for intelligence, executive function predicts overweight status but not health-related behaviours in early adolescence, whilst impulsivity predicts the onset of alcohol and cigarette use, all with small effects. This suggests overlap between executive function and intelligence as predictors of health behaviour in this cohort, with trait impulsivity accounting for additional variance. PMID:27479488
Cuny, Jérôme; Sykina, Kateryna; Fontaine, Bruno; Le Pollès, Laurent; Pickard, Chris J; Gautier, Régis
2011-11-21
Solid-state (95)Mo nuclear magnetic resonance (NMR) properties of molybdenum hexacarbonyl have been computed using density functional theory (DFT) based methods. Both quadrupolar coupling and chemical shift parameters were evaluated and compared with parameters determined with high precision using single-crystal (95)Mo NMR experiments. Within a molecular approach, the effects of the major computational parameters, i.e. basis set, exchange-correlation functional, and treatment of relativity, have been evaluated. Except for the isotropic parameter of both chemical shift and chemical shielding, the computed NMR parameters are more sensitive to geometrical variations than to computational details. Relativistic effects do not play a crucial part in the calculations of such parameters for this 4d transition metal, in particular the isotropic chemical shift. Periodic DFT calculations were undertaken to measure the influence of neighbouring molecules in the crystal structure. These effects have to be taken into account to compute accurate solid-state (95)Mo NMR parameters, even for such an inorganic molecular compound.
Computational approaches for inferring the functions of intrinsically disordered proteins
Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter
2015-01-01
Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite their not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level, the dependence of disorder on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large-scale analyses, while on the structure level, the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226
Computing Green's function of elasticity in a half-plane with impedance boundary condition
NASA Astrophysics Data System (ADS)
Durán, Mario; Godoy, Eduardo; Nédélec, Jean-Claude
2006-12-01
This Note presents an effective and accurate method for the numerical calculation of the Green's function G associated with the time-harmonic elasticity system in a half-plane, where an impedance boundary condition is considered. The need to compute this function arises when studying wave propagation in underground mining and seismological engineering. To obtain this Green's function theoretically, we have drawn our inspiration from the paper by Durán et al. (2005), where the Green's function for the Helmholtz equation was computed. The method consists in applying a partial Fourier transform, which allows an explicit calculation of the so-called spectral Green's function. In order to compute its inverse Fourier transform, we separate Ĝ into a sum of two terms. The first is associated with the whole plane, whereas the second takes into account the half-plane and the boundary conditions. The first term corresponds to the Green's function of the well-known time-harmonic elasticity system in R² (cf. J. Dompierre, Thesis). The second term is separated into a sum of three terms, two of which contain singularities in the spectral variable (pseudo-poles and poles), while the third is regular and decreasing at infinity. The inverse Fourier transforms of the singular terms are computed analytically, whereas that of the regular term is obtained numerically via an FFT algorithm. We present a numerical result. Moreover, we show that, under some conditions, a fourth additional slowness appears, which could produce a new surface wave. To cite this article: M. Durán et al., C. R. Mecanique 334 (2006).
On the Hydrodynamic Function of Sharkskin: A Computational Investigation
NASA Astrophysics Data System (ADS)
Boomsma, Aaron; Sotiropoulos, Fotis
2014-11-01
Denticles (placoid scales) are small structures that cover the epidermis of some sharks. The hydrodynamic function of denticles is unclear. Because they resemble riblets, they have been thought to passively reduce skin friction, for which there is some experimental evidence. Others have experimentally shown that denticles increase skin friction and have hypothesized that denticles act as vortex generators to delay separation. To help clarify their function, we use high-resolution large-eddy and direct numerical simulations, with an immersed boundary method, to simulate flow patterns past, and calculate the drag force on, Shortfin Mako denticles. Simulations are carried out for denticles placed in a canonical turbulent boundary layer as well as in the vicinity of a separation bubble. The computed results elucidate the three-dimensional structure of the flow around denticles and provide insights into the hydrodynamic function of sharkskin.
Structure-based Methods for Computational Protein Functional Site Prediction
Dukka, B KC
2013-01-01
Due to the advent of high-throughput sequencing techniques and structural genomics projects, the number of gene and protein sequences has been ever increasing. Computational methods to annotate these genes and proteins are ever more indispensable. Proteins are important macromolecules, and the study of protein function is an important problem in structural bioinformatics. This paper discusses a number of methods to predict protein functional sites, focusing especially on protein-ligand binding site prediction. Initially, a short overview is presented of recent advances in methods for the selection of homologous sequences. Furthermore, a few recent structure-based approaches and sequence-and-structure-based approaches for predicting protein functional sites are discussed in detail. PMID:24688745
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
How to Compute the Fukui Matrix and Function for Systems with (Quasi-)Degenerate States.
Bultinck, Patrick; Cardenas, Carlos; Fuentealba, Patricio; Johnson, Paul A; Ayers, Paul W
2014-01-14
A system in a spatially (quasi-)degenerate ground state responds in a qualitatively different way to a change in the external potential than a system with a nondegenerate ground state. Consequently, the usual method for computing the Fukui function, namely, taking the difference between the electron densities of the N- and N ± 1-electron systems, cannot be applied directly. It is shown how the Fukui matrix, and thus also the Fukui function, depends on the nature of the perturbation. One thus needs to use degenerate perturbation theory for the given perturbing potential to generate the density matrix whose change with respect to a change in the number of electrons equals the Fukui matrix. Accounting for the degeneracy in the case of nitrous oxide reveals that an average over the degenerate states differs significantly from using the proper density matrix. We further show the differences in Fukui functions depending on whether a Dirac delta perturbation is used or an interaction with a true point charge (leading to the Fukui potential).
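For context, the finite-difference recipe the abstract calls the usual method amounts to the following (standard notation for a nondegenerate N-electron ground state, not taken from this paper):

```latex
f^{+}(\mathbf{r}) = \rho_{N+1}(\mathbf{r}) - \rho_{N}(\mathbf{r}), \qquad
f^{-}(\mathbf{r}) = \rho_{N}(\mathbf{r}) - \rho_{N-1}(\mathbf{r})
```

When the ground state is (quasi-)degenerate, \rho_N itself depends on which degenerate state or ensemble is selected, which is why these differences cannot be applied directly.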
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy.
Schroll, Henning; Hamker, Fred H
2013-12-30
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other.
Complete RNA inverse folding: computational design of functional hammerhead ribozymes
Dotu, Ivan; Garcia-Martin, Juan Antonio; Slinger, Betty L.; Mechery, Vinodh; Meyer, Michelle M.; Clote, Peter
2014-01-01
Nanotechnology and synthetic biology currently constitute one of the most innovative, interdisciplinary fields of research, poised to radically transform society in the 21st century. This paper concerns the synthetic design of ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can determine all RNA sequences whose minimum free energy secondary structure is a user-specified target structure. Using RNAiFold, we design ten cis-cleaving hammerhead ribozymes, all of which are shown to be functional by a cleavage assay. We additionally use RNAiFold to design a functional cis-cleaving hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on this small set of hammerheads suggests that cleavage rate of computationally designed ribozymes may be correlated with positional entropy, ensemble defect, structural flexibility/rigidity and related measures. Artificial ribozymes have been designed in the past either manually or by SELEX (Systematic Evolution of Ligands by Exponential Enrichment); however, this appears to be the first purely computational design and experimental validation of novel functional ribozymes. RNAiFold is available at http://bioinformatics.bc.edu/clotelab/RNAiFold/. PMID:25209235
Computer Modeling of the Earliest Cellular Structures and Functions
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl
2000-01-01
In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures and at developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g., channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.
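The iterative solution of Newton's equations mentioned above is typically carried out with a symplectic integrator such as velocity Verlet; a minimal single-particle sketch follows (the harmonic force is an assumption for illustration, not a protocell force field):

```python
import numpy as np

def velocity_verlet(x, v, force, dt, n_steps, mass=1.0):
    """Advance a particle by n_steps of the velocity Verlet scheme."""
    f = force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt ** 2  # position update
        f_new = force(x)                             # force at new position
        v = v + 0.5 * (f + f_new) / mass * dt        # velocity update
        f = f_new
    return x, v

# Harmonic oscillator with k = m = 1: the period is 2*pi, so after
# ~6283 steps of dt = 0.001 the particle returns near its start.
x_final, v_final = velocity_verlet(1.0, 0.0, lambda x: -x, dt=1e-3, n_steps=6283)
```

Production MD codes apply the same two half-updates per step, just to many thousands of interacting atoms with forces derived from an empirical potential.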
Non-functioning adrenal adenomas discovered incidentally on computed tomography
Mitnick, J.S.; Bosniak, M.A.; Megibow, A.J.; Naidich, D.P.
1983-08-01
Eighteen patients with unilateral non-metastatic non-functioning adrenal masses were studied with computed tomography (CT). Pathological examination, in the cases where it was performed, revealed benign adrenal adenomas. The others were followed up with serial CT scans and showed no change in tumor size over a period of six months to three years. On the basis of these findings, the authors suggest certain criteria for a benign adrenal mass, including (a) diameter less than 5 cm, (b) smooth contour, (c) well-defined margin, and (d) no change in size on follow-up. Serial CT scanning can be used as an alternative to surgery in the management of many of these patients.
Chubar, O.; Couprie, M.-E.
2007-01-19
A CPU-efficient method for calculating the frequency-domain electric field of coherent synchrotron radiation (CSR), taking into account the 6D phase-space distribution of electrons in a bunch, is proposed. As an application example, calculation results are presented for the CSR emitted by an electron bunch with a small longitudinal and large transverse size. Such a situation can be realized in storage rings or ERLs by transverse deflection of the electron bunches in special crab-type RF cavities, i.e., using the technique proposed for the generation of femtosecond X-ray pulses (A. Zholents et al., 1999). The computation, performed for the parameters of the SOLEIL storage ring, shows that if the transverse size of the electron bunch is larger than the diffraction limit for single-electron SR at a given wavelength, the angular distribution of the CSR at that wavelength is affected and the coherent flux is reduced. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than a hundred micrometers, the resulting CSR flux in the far-infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR, and can therefore be considered for practical use.
CMB anisotropy in compact hyperbolic universes. I. Computing correlation functions
NASA Astrophysics Data System (ADS)
Bond, J. Richard; Pogosyan, Dmitry; Souradeep, Tarun
2000-08-01
Cosmic microwave background (CMB) anisotropy measurements have brought the issue of global topology of the universe from the realm of theoretical possibility to within the grasp of observations. The global topology of the universe modifies the correlation properties of cosmic fields. In particular, strong correlations are predicted in CMB anisotropy patterns on the largest observable scales if the size of the universe is comparable to the distance to the CMB last scattering surface. We describe in detail our completely general scheme using a regularized method of images for calculating such correlation functions in models with nontrivial topology, and apply it to the computationally challenging compact hyperbolic spaces. Our procedure directly sums over images within a specified radius, ideally many times the diameter of the space, effectively treats more distant images in a continuous approximation, and uses Cesaro resummation to further sharpen the results. At all levels of approximation the symmetries of the space are preserved in the correlation function. This new technique eliminates the need for the difficult task of spatial eigenmode decomposition on these spaces. Although the eigenspectrum can be obtained by this method if desired, at a given level of approximation the correlation functions are more accurately determined. We use the 3-torus example to demonstrate that the method works very well. We apply it to power spectrum as well as correlation function evaluations in a number of compact hyperbolic (CH) spaces. Application to the computation of CMB anisotropy correlations on CH spaces, and the observational constraints following from them, are given in a companion paper.
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Center on Education and Training for Employment.
This publication identifies 20 subjects appropriate for use in a competency list for the occupation of accounting specialist, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 20 units are as follows:…
Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L.
2013-01-01
The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. A future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties. PMID:24385957
Enzymatic Halogenases and Haloperoxidases: Computational Studies on Mechanism and Function.
Timmins, Amy; de Visser, Sam P
2015-01-01
Despite the fact that halogenated compounds are rare in biology, a number of organisms have developed processes to utilize halogens, and in recent years a series of enzymes have been identified that selectively insert halogen atoms into, for instance, an aliphatic C-H bond. Thus, a number of natural products, including antibiotics, contain halogenated functional groups. This unusual process has great relevance to the chemical industry for the stereoselective and regiospecific synthesis of haloalkanes. Currently, however, industry makes few applications of biological haloperoxidases and halogenases, but efforts are underway to understand their catalytic mechanism, so that their catalytic function can be scaled up. In this review, we summarize experimental and computational studies on the catalytic mechanism of a range of haloperoxidases and halogenases with structurally very different catalytic features and cofactors. This chapter gives an overview of heme-dependent haloperoxidases, nonheme vanadium-dependent haloperoxidases, and flavin adenine dinucleotide-dependent haloperoxidases. In addition, we discuss the S-adenosyl-l-methionine fluorinase and nonheme iron/α-ketoglutarate-dependent halogenases. In particular, computational efforts have been applied extensively to several of these haloperoxidases and halogenases and have given insight into the essential structural features that enable these enzymes to perform the unusual halogen atom transfer to substrates.
Functional Connectivity’s Degenerate View of Brain Computation
Giron, Alain; Rudrauf, David
2016-01-01
Brain computation relies on effective interactions between ensembles of neurons. In neuroimaging, measures of functional connectivity (FC) aim at statistically quantifying such interactions, often to study normal or pathological cognition. Their capacity to reflect a meaningful variety of patterns as expected from neural computation in relation to cognitive processes remains debated. The relative weights of time-varying local neurophysiological dynamics versus static structural connectivity (SC) in the generation of FC as measured remains unsettled. Empirical evidence features mixed results: from little to significant FC variability and correlation with cognitive functions, within and between participants. We used a unified approach combining multivariate analysis, bootstrap and computational modeling to characterize the potential variety of patterns of FC and SC both qualitatively and quantitatively. Empirical data and simulations from generative models with different dynamical behaviors demonstrated, largely irrespective of FC metrics, that a linear subspace with dimension one or two could explain much of the variability across patterns of FC. On the contrary, the variability across BOLD time-courses could not be reduced to such a small subspace. FC appeared to strongly reflect SC and to be partly governed by a Gaussian process. The main differences between simulated and empirical data related to limitations of DWI-based SC estimation (and SC itself could then be estimated from FC). Above and beyond the limited dynamical range of the BOLD signal itself, measures of FC may offer a degenerate representation of brain interactions, with limited access to the underlying complexity. They feature an invariant common core, reflecting the channel capacity of the network as conditioned by SC, with a limited, though perhaps meaningful residual variability. PMID:27736900
"When Stones Falls": A Conceptual-Functional Account of Subject-Verb Agreement in Persian
ERIC Educational Resources Information Center
Sharifian, Farzad; Lotfi, Ahmad R.
2007-01-01
Most linguistic studies of subject-verb agreement have thus far attempted to account for this phenomenon in terms of either syntax or semantics. Kim (2004) [Kim, J., 2004. Hybrid agreement in English. Linguistics 42 (6), 1105-1128] proposes a "hybrid analysis", which allows for a morphosyntactic agreement and a semantic agreement within the same…
Memory and Generativity in Very High Functioning Autism: A Firsthand Account, and an Interpretation
ERIC Educational Resources Information Center
Boucher, Jill
2007-01-01
JS is a highly able person with Asperger syndrome whose language and intellectual abilities are, and always have been, superior. The first part of this short article consists of JS's analytical account of his atypical memory abilities, and the strategies he uses for memorizing and learning. JS has also described specific difficulties with creative…
Filter design for molecular factor computing using wavelet functions.
Li, Xiaoyong; Xu, Zhihong; Cai, Wensheng; Shao, Xueguang
2015-06-23
Molecular factor computing (MFC) is a new strategy that employs chemometric methods in an optical instrument to obtain analytical results directly using an appropriate filter without data processing. In the present contribution, a method for designing an MFC filter using wavelet functions was proposed for spectroscopic analysis. In this method, the MFC filter is designed as a linear combination of a set of wavelet functions. A multiple linear regression model relating the concentration to the wavelet coefficients is constructed, so that the wavelet coefficients are obtained by projecting the spectra onto the selected wavelet functions. These wavelet functions are selected by optimizing the model using a genetic algorithm (GA). Once the MFC filter is obtained, the concentration of a sample can be calculated directly by projecting the spectrum onto the filter. With three NIR datasets of corn, wheat and blood, it was shown that the performance of the designed filter is better than that of the optimized partial least squares models, and commonly used signal processing methods, such as background correction and variable selection, were not needed. More importantly, the designed filter can be used as an MFC filter in designing MFC-based instruments.
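The projection step described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it uses synthetic spectra, a small Haar-style wavelet basis, and an ordinary least-squares fit in place of the GA-optimized wavelet selection. Once the regression coefficients are folded back through the basis, prediction reduces to a single dot product of the spectrum with the filter.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 64  # spectral channels (hypothetical)

# Haar-style wavelet on the wavelength axis, normalized to unit length.
def haar(n, start, width):
    w = np.zeros(n)
    half = width // 2
    w[start:start + half] = 1.0
    w[start + half:start + width] = -1.0
    return w / np.linalg.norm(w)

basis = np.stack([haar(n_points, 4, 16),
                  haar(n_points, 20, 32),
                  haar(n_points, 40, 16)])

# Simulated training spectra whose shape varies linearly with concentration.
conc = rng.uniform(0.0, 1.0, size=40)
signal_dir = basis.T @ np.array([0.8, -0.3, 0.5])
spectra = np.outer(conc, signal_dir) + 0.01 * rng.normal(size=(40, n_points))

# Wavelet coefficients = projections of each spectrum onto the basis.
coeffs = spectra @ basis.T

# Multiple linear regression: concentration ~ wavelet coefficients.
A = np.column_stack([coeffs, np.ones(len(conc))])
beta, *_ = np.linalg.lstsq(A, conc, rcond=None)

# Collapse regression + basis into a single MFC filter vector, so that
# c_hat = spectrum . mfc_filter + intercept, with no further processing.
mfc_filter = basis.T @ beta[:-1]
intercept = beta[-1]

c_hat = spectra[0] @ mfc_filter + intercept
print(c_hat, conc[0])
```

The point of the collapsed filter is that an instrument can implement the dot product optically, which is why no background correction or variable selection step is needed at prediction time.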
A dual-route account for access to grammatical gender: evidence from functional MRI.
Heim, Stefan; Alter, Kai; Friederici, Angela D
2005-12-01
Research investigating the neural correlates of grammatical gender processing has provided contradictory evidence with respect to activation in the left inferior frontal gyrus (IFG). A possible account for these discrepancies is a dual-route model proposing explicit vs implicit access to the gender information. In this event-related fMRI experiment, we investigated this issue by taking into account different processing strategies reported by the subjects. The participants performed two tasks, a gender judgement of German nouns and a non-lexical baseline task (spacing of consonant letter strings). Depending on the reported strategy (silent production of the definite determiner or direct access to the gender information), different patterns of activation in the left IFG were observed. Direct access to gender information yielded activation in the inferior tip of BA 44, whereas the verbalisation strategy elicited activation in the superior portion of BA 44, BA 45/47, and the fronto-median wall. These results speak in favour of a dual-route account for modelling the access to grammatical gender information during language comprehension.
Assessing executive function using a computer game: computational modeling of cognitive processes.
Hagler, Stuart; Jimison, Holly Brugge; Pavel, Misha
2014-07-01
Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project, we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper trail making test (TMT), which is known to measure executive function as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and to use those parameter values to estimate TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home.
Computer Modeling of Protocellular Functions: Peptide Insertion in Membranes
NASA Technical Reports Server (NTRS)
Rodriquez-Gomez, D.; Darve, E.; Pohorille, A.
2006-01-01
Lipid vesicles became the precursors to protocells by acquiring the capabilities needed to survive and reproduce. These include transport of ions, nutrients and waste products across cell walls, and capture of energy and its conversion into a chemically usable form. In modern organisms these functions are carried out by membrane-bound proteins (about 30% of the genome codes for this kind of protein). A number of properties of alpha-helical peptides suggest that their associations are excellent candidates for protobiological precursors of proteins. In particular, some simple alpha-helical peptides can aggregate spontaneously and form functional channels. This process can be described conceptually by a three-step thermodynamic cycle: (1) folding of helices at the water-membrane interface, (2) helix insertion into the lipid bilayer, and (3) specific interactions of these helices that result in functional tertiary structures. Although a crucial step, helix insertion has not been adequately studied because of the insolubility and aggregation of hydrophobic peptides. In this work, we use computer simulation methods (molecular dynamics) to characterize the energetics of helix insertion, and we discuss its importance in an evolutionary context. Specifically, helices could self-assemble only if their interactions were sufficiently strong to compensate for the unfavorable free energy of insertion of individual helices into membranes, providing a selection mechanism for protobiological evolution.
An Evolutionary Computation Approach to Examine Functional Brain Plasticity
Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.
2016-01-01
One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels, and therefore functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
Computational Effective Fault Detection by Means of Signature Functions
Baranski, Przemyslaw; Pietrzak, Piotr
2016-01-01
The paper presents a computationally effective method for fault detection. A system's responses are measured under healthy and faulty conditions. These signals are used to calculate so-called signature functions that create a signal space. The current system response is projected into this space, and the location of the signal in the space makes it straightforward to determine the fault. No classifier such as a neural network or hidden Markov model is required. The advantage of the proposed method is its efficiency, as computing the projections amounts to calculating dot products. The method is therefore suitable for real-time embedded systems: its simplicity and undemanding processing requirements permit the use of low-cost hardware and allow rapid implementation. The approach performs well for systems that can be considered linear and stationary. The communication presents an application in which an industrial moulding process is supervised. The machine is composed of forms (dies) whose alignment must be precisely set and maintained during operation. Typically, the process is stopped periodically to check the alignment manually. The applied algorithm allows on-line monitoring of the device by analysing the acceleration signal from a sensor mounted on a die. This enables failures to be detected at an early stage, thus prolonging the machine's life. PMID:26949942
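The projection idea is simple enough to sketch. The example below is a toy illustration with synthetic sinusoidal "responses" and invented condition names, not the paper's algorithm or data: each reference condition contributes one normalized signature vector, and classifying a new measurement reduces to a handful of dot products.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256  # samples per acceleration record (hypothetical)

# Synthetic system response: a sinusoid whose frequency depends on the
# machine condition, plus measurement noise.
def response(freq, noise=0.02):
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2 * np.pi * freq * t) + noise * rng.normal(size=n)

# Invented condition -> characteristic frequency mapping, for illustration.
conditions = {"healthy": 5.0, "misaligned": 9.0, "worn_die": 13.0}

# Signature functions: normalized reference responses spanning the signal space.
signatures = {}
for name, f in conditions.items():
    r = response(f)
    signatures[name] = r / np.linalg.norm(r)

def classify(signal):
    # One dot product per signature; cheap enough for a low-cost
    # embedded monitor. The largest projection magnitude wins.
    scores = {name: abs(signal @ sig) for name, sig in signatures.items()}
    return max(scores, key=scores.get)

current = response(9.0)  # new measurement from the die-mounted sensor
print(classify(current))  # → misaligned
```

Because the signatures here are (nearly) orthogonal sinusoids, the projections separate the conditions cleanly; for a real machine the signatures would come from measured healthy and faulty records.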
Using computational biophysics to understand protein evolution and function
NASA Astrophysics Data System (ADS)
Ytreberg, F. Marty
2010-10-01
Understanding how proteins evolve and function is vital for human health (e.g., developing better drugs, predicting the outbreak of disease, etc.). In spite of its importance, little is known about the underlying molecular mechanisms behind these biological processes. Computational biophysics has emerged as a useful tool in this area due to its unique ability to obtain a detailed, atomistic view of proteins and how they interact. I will give two examples from our studies where computational biophysics has provided valuable insight: (i) Protein evolution in viruses. Our results suggest that the amino acid changes that occur during high temperature evolution of a virus decrease the binding free energy of the capsid, i.e., these changes increase capsid stability. (ii) Determining realistic structural ensembles for intrinsically disordered proteins. Most methods for determining protein structure rely on the protein folding into a single conformation, and thus are not suitable for disordered proteins. I will describe a new approach that combines experiment and simulation to generate structures for disordered proteins.
Optimizing high performance computing workflow for protein functional annotation.
Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene
2014-09-10
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins to existing clusters of orthologous groups of proteins. Based on the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
Imaging local brain function with emission computed tomography
Kuhl, D.E.
1984-03-01
Positron emission tomography (PET) using 18F-fluorodeoxyglucose (FDG) was used to map local cerebral glucose utilization in the study of local cerebral function. This information differs fundamentally from structural assessment by means of computed tomography (CT). In normal human volunteers, the FDG scan was used to determine the cerebral metabolic response to controlled sensory stimulation and the effects of aging. Cerebral metabolic patterns are distinctive among depressed and demented elderly patients. The FDG scan appears normal in depressed patients and is studded with multiple metabolic defects in patients with multiple-infarct dementia; in patients with Alzheimer disease, metabolism is particularly reduced in the parietal cortex but only slightly reduced in the caudate and thalamus. The interictal FDG scan effectively detects hypometabolic brain zones that are sites of onset for seizures in patients with partial epilepsy, even though these zones usually appear normal on CT scans. The future prospects of PET are discussed.
ERIC Educational Resources Information Center
East Texas State Univ., Commerce. Occupational Curriculum Lab.
Sixteen units on office occupations are presented in this teacher's guide. The unit topics include the following: related information (e.g., preparing for job interview); accounting and computing (e.g., preparing a payroll and a balance sheet); information communications (e.g., handling appointments, composing correspondence); and stenographic,…
A survey. Financial accounting and internal control functions pursued by hospital boards.
Gavin, T A
1984-09-01
Justification for a board committee's existence is its ability to devote time to issues judged to be important by the full board. This seems to have happened. Multiple committees pursue more functions than the other committee structures. Boards lacking an FA/IC committee pursue significantly fewer functions than their counterparts with committees. Substantial respondent agreement exists on those functions most and least frequently pursued, those perceived to be most and least important, and those perceived to be most and least effectively undertaken. Distinctions between committee structures and the full board, noted in the previous paragraph, hold true with respect to the importance of functions. All board structures identified reviewing the budget and comparing it to actual results as important. Committee structures are generally more inclined to address functions related to the work of the independent auditor and the effectiveness of the hospital's system and controls than are full board structures. Functions related to the internal auditor are pursued least frequently by all FA/IC board structures. The following suggestions are made to help boards pay adequate attention to and obtain objective information about the financial affairs of their hospitals. Those boards that do not have some form of an FA/IC committee should consider starting one. Evidence shows chief financial officers have been a moving force in establishing and strengthening such committees. Boards having a joint or single committee structure should consider upgrading their structure to either a single committee or multiple committees respectively. The complexity of the healthcare environment requires that more FA/IC functions be addressed by the board. The board or its FA/IC committee(s) should meet with their independent CPA's, fiscal intermediary auditors, and internal auditors. Where the hospital lacks an internal audit function a study should be undertaken to determine the feasibility of
Garashchuk, Sophya
2007-04-21
The de Broglie-Bohm formulation of the Schrödinger equation implies conservation of the wave function probability density associated with each quantum trajectory in closed systems. This conservation property greatly simplifies numerical implementations of quantum trajectory dynamics and increases their accuracy. The reconstruction of a wave function, however, becomes expensive or inaccurate as it requires fitting or interpolation procedures. In this paper we present a method of computing wave packet correlation functions and wave function projections, which typically contain all the desired information about dynamics, without full knowledge of the wave function, by making quadratic expansions of the wave function phase and amplitude near each trajectory, similar to expansions used in semiclassical methods. Computation of the quantities of interest in this procedure is linear with respect to the number of trajectories. The introduced approximations are consistent with the approximate quantum potential dynamics method. The projection technique is applied to model chemical systems and to the H + H2 exchange reaction in three dimensions.
Chemical Visualization of Boolean Functions: A Simple Chemical Computer
NASA Astrophysics Data System (ADS)
Blittersdorf, R.; Müller, J.; Schneider, F. W.
1995-08-01
We present a chemical realization of the Boolean functions AND, OR, NAND, and NOR with a neutralization reaction carried out in three coupled continuous-flow stirred tank reactors (CSTRs). Two of these CSTRs are used as input reactors; the third reactor marks the output. The chemical reaction is the neutralization of hydrochloric acid (HCl) with sodium hydroxide (NaOH) in the presence of phenolphthalein as an indicator, which is red in alkaline solutions and colorless in acidic solutions, representing the two binary states 1 and 0, respectively. The time required for a "chemical computation" is determined by the flow rate of reactant solutions into the reactors, since the neutralization reaction itself is very fast. While the acid flow to all reactors is equal and constant, the flow rate of NaOH solution controls the states of the input reactors. The connectivities between the input and output reactors determine the flow rate of NaOH solution into the output reactor, according to the chosen Boolean function. Thus the state of the output reactor depends on the states of the input reactors.
A computer vision based candidate for functional balance test.
Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath
2015-08-01
Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, disease, injury and environmental factors. The Centers for Disease Control and Prevention (CDC) estimated the costs of falls among older adults at $34 billion in 2013, a figure expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments, followed by the subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for a functional balance test. The test takes less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long-term care and assisted living facilities. Our long-term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.
Werneck, Araken S; Filho, Tarcísio M Rocha; Dardenne, Laurent E
2008-01-17
We developed a methodology to optimize exponential damping functions to account for charge penetration effects when computing molecular electrostatic properties using the multicentered multipolar expansion method (MME). This methodology is based on the optimization of a damping parameter set using a two-step fast local fitting procedure, with the ab initio (Hartree-Fock/6-31G** and 6-31G**+) electrostatic potential calculated on a set of concentric grids of points as reference. The principal feature of the methodology is a first local fitting step that generates a focused initial guess, improving the performance of a simplex method and avoiding the use of multiple runs and the choice of initial guesses. Three different strategies for the determination of optimized damping parameters were tested in the following studies: (1) investigation of the error in the calculation of the electrostatic interaction energy for five hydrogen-bonded dimers at standard and nonstandard hydrogen-bonded geometries and at nonequilibrium geometries; (2) calculation of the electrostatic molecular properties (potential and electric field) for eight small molecular systems (methanol, ammonia, water, formamide, dichloromethane, acetone, dimethyl sulfoxide, and acetonitrile) and for the 20 amino acids. Our results show that the methodology performs well not only for small molecules but also for relatively larger molecular systems. The analysis of the distinct parameter sets associated with different optimization strategies shows that (i) a specific parameter set is more suitable and more general for electrostatic interaction energy calculations, with an average absolute error of 0.46 kcal/mol at hydrogen-bond geometries; (ii) a second parameter set is more suitable for electrostatic potential and electric field calculations at and outside the van der Waals (vdW) envelope, with an average error decrease >72% at the vdW surface. A more general amino acid damping parameter set was constructed from the
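The general idea of exponential damping can be illustrated on the monopole term of a multipolar expansion. The functional form and the damping parameter value below are assumptions chosen for illustration, not the forms or parameter sets optimized in the paper: the bare point-charge potential is attenuated at short range, mimicking charge penetration, while the two potentials converge at long range where the multipolar expansion is already accurate.

```python
import math

def bare_coulomb(q, r):
    # Undamped point-charge potential (atomic units).
    return q / r

def damped_coulomb(q, r, alpha):
    # One common exponential damping form, assumed here for illustration:
    # the monopole term is attenuated at short range, where the true
    # charge distribution overlaps the probe point.
    return (q / r) * (1.0 - math.exp(-alpha * r))

q, alpha = 1.0, 2.0  # hypothetical charge and damping parameter
for r in (0.5, 1.0, 2.0, 4.0):
    print(r, round(bare_coulomb(q, r), 4), round(damped_coulomb(q, r, alpha), 4))
```

A fitting procedure like the paper's would tune `alpha` (one parameter per site type) so that the damped expansion reproduces the ab initio electrostatic potential on reference grids.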
45 CFR 302.20 - Separation of cash handling and accounting functions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... functions have been delegated. (2) Employees of a court or law enforcement official performing under a cooperative agreement with the IV-D agency. (3) Employees of any private or governmental entity from which...
Paternal effects on functional gender account for cryptic dioecy in a perennial plant.
Verdú, Miguel; Montilla, Ana I.; Pannell, John R.
2004-01-01
Natural selection operates on the mating strategies of hermaphrodites through their functional gender, i.e. their relative success as male versus female parents. Because functional gender will tend to be strongly influenced by sex allocation, it is often estimated in plants by counting seeds and pollen grains. However, a plant's functional gender must also depend on the fate of the seeds and pollen grains it produces. We provide clear evidence of a paternal effect on the functional gender of a plant that is independent of the resources invested in pollen. In the Mediterranean tree Fraxinus ornus, males coexist with hermaphrodites that disperse viable pollen and that sire seeds; the population would thus appear to be functionally androdioecious. However, we found that seedlings sired by hermaphrodites grew significantly less well than those sired by males, suggesting that hermaphrodites may be functionally less male than they seem. The observed 1 : 1 sex ratios in F. ornus, which have hitherto been difficult to explain in the light of the seed-siring ability of hermaphrodites, support our interpretation that this species is cryptically dioecious. Our results underscore the importance of considering progeny quality when estimating gender, and caution against inferring androdioecy on the basis of a siring ability of hermaphrodites alone. PMID:15451691
Astrocytes, Synapses and Brain Function: A Computational Approach
NASA Astrophysics Data System (ADS)
Nadkarni, Suhita
2006-03-01
Modulation of synaptic reliability is one of the leading mechanisms involved in long-term potentiation (LTP) and long-term depression (LTD) and therefore has implications for information processing in the brain. A recently discovered mechanism for modulating synaptic reliability critically involves the recruitment of astrocytes, star-shaped cells that outnumber the neurons in most parts of the central nervous system. Astrocytes were until recently thought to be subordinate cells merely supporting neuronal functions. New evidence, however, made available by advances in imaging technology, has changed the way we envision the role of these cells in synaptic transmission and as modulators of neuronal excitability. We put forward a novel mathematical framework based on the biophysics of bidirectional neuron-astrocyte interactions that quantitatively accounts for two distinct experimental manifestations of the recruitment of astrocytes in synaptic transmission: (a) the transformation of a low-fidelity synapse into a high-fidelity synapse and (b) enhanced postsynaptic spontaneous currents when astrocytes are activated. Such a framework is not only useful for modeling neuronal dynamics in a realistic environment but also provides a conceptual basis for interpreting experiments. Based on this modeling framework, we explore the role of astrocytes in neuronal network behavior such as synchrony and correlations and compare with experimental data from cultured networks.
A functional account of verb use in the early stages of English multiword development.
Cameron-Faulkner, Thea
2012-09-01
The present study investigates the flexibility of verb use in the early stages of English multiword development and its relationship with patterns attested in the input. The data are taken from a case study of a monolingual English-speaking boy aged 2;5 to 2;9 and his mother while engaged in daily activities in the home. Data were coded according to Halliday's (1975) functional system. The findings suggest that early multiword verb use is functionally restricted and closely tied to verb use in the input.
Point spread function computation in normal incidence for rough optical surfaces
NASA Astrophysics Data System (ADS)
Tayabaly, Kashmira; Spiga, Daniele; Sironi, Giorgia; Canestrari, Rodolfo; Lavagna, Michele; Pareschi, Giovanni
2016-08-01
The Point Spread Function (PSF) specifies the angular resolution of optical systems, a key parameter used to define the performance of most optics. A prediction of the system's PSF is therefore a powerful tool to assess the design and manufacture requirements of complex optical systems. Currently, well-established ray-tracing routines based on geometrical optics are used for this purpose. However, those routines either lack real surface defect considerations (figure errors or micro-roughness) in their computation, or they include a scattering effect modeled separately that requires assumptions difficult to verify. As demand for tighter angular resolution increases, surface finishing problems can drastically degrade the optical performance of a system, including optical telescope systems. A purely physical-optics approach is more effective, as it remains valid regardless of the shape and size of the defects appearing on the optical surface. However, such a computation, when performed in two-dimensional space, is time consuming, since it requires processing a surface map at a resolution of a few microns and sometimes extends the propagation to multiple reflections. The computation is significantly simplified in the far-field configuration, as it then involves only a sequence of Fourier transforms. We show how to account for measured surface defects and roughness in order to predict the performance of the optics in a single reflection, which can be applied and validated for real case studies.
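The far-field simplification mentioned above can be sketched as follows: the PSF is the squared modulus of a single Fourier transform of the complex pupil function, whose phase encodes the surface height errors (in normal incidence, a height error h delays the wavefront by 2h, hence the factor 4π/λ). The aperture, wavelength, and error maps below are hypothetical stand-ins for measured data.

```python
import numpy as np

n = 256
wavelength = 500e-9  # 500 nm, assumed
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)  # circular aperture

# Hypothetical surface error map: low-order figure error + micro-roughness,
# both in meters.
rng = np.random.default_rng(2)
figure_err = 20e-9 * np.sin(3 * np.pi * X)
roughness = 2e-9 * rng.normal(size=(n, n))
surface = figure_err + roughness

# Normal incidence: height error h -> wavefront delay 2h -> phase 4*pi*h/lambda.
phase = 4 * np.pi * surface / wavelength
field = pupil * np.exp(1j * phase)

# Far field: one Fourier transform of the complex pupil function.
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()  # normalize to unit energy
print(psf.shape, psf.max())
```

Because figure errors scatter at small angles and roughness at large angles, both end up in the same PSF without a separately modeled scattering term, which is the practical advantage of the physical-optics route.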
Gibbons, Laura E; Crane, Paul K; Mehta, Kala M; Pedraza, Otto; Tang, Yuxiao; Manly, Jennifer J; Narasimhalu, Kaavya; Teresi, Jeanne; Jones, Richard N; Mungas, Dan
2011-04-28
Differential item functioning (DIF) occurs when a test item has different statistical properties in subgroups, controlling for the underlying ability measured by the test. DIF assessment is necessary when evaluating measurement bias in tests used across different language groups. However, other factors such as educational attainment can differ across language groups, and DIF due to these other factors may also exist. How to conduct DIF analyses in the presence of multiple, correlated factors remains largely unexplored. This study assessed DIF related to Spanish versus English language in a 44-item object naming test. Data come from a community-based sample of 1,755 Spanish- and English-speaking older adults. We compared simultaneous accounting, a new strategy for handling differences in educational attainment across language groups, with existing methods. Compared to other methods, simultaneously accounting for language- and education-related DIF yielded salient differences in some object naming scores, particularly for Spanish speakers with at least 9 years of education. Accounting for factors that vary across language groups can be important when assessing language DIF. The use of simultaneous accounting will be relevant to other cross-cultural studies in cognition and in other fields, including health-related quality of life.
pH-Regulated Mechanisms Account for Pigment-Type Differences in Epidermal Barrier Function
Gunathilake, Roshan; Schurer, Nanna Y.; Shoo, Brenda A.; Celli, Anna; Hachem, Jean-Pierre; Crumrine, Debra; Sirimanna, Ganga; Feingold, Kenneth R.; Mauro, Theodora M.; Elias, Peter M.
2009-01-01
To determine whether pigment type determines differences in epidermal function, we studied stratum corneum (SC) pH, permeability barrier homeostasis, and SC integrity in three geographically disparate populations with pigment type I–II versus IV–V skin (Fitzpatrick I–VI scale). Type IV–V subjects showed: (i) lower surface pH (≈0.5 U); (ii) enhanced SC integrity (transepidermal water loss change with sequential tape strippings); and (iii) more rapid barrier recovery than type I–II subjects. Enhanced barrier function could be ascribed to increased epidermal lipid content, increased lamellar body production, and reduced acidity, leading to enhanced lipid processing. Compromised SC integrity in type I–II subjects could be ascribed to increased serine protease activity, resulting in accelerated desmoglein-1 (DSG-1)/corneodesmosome degradation. In contrast, DSG-1-positive CDs persisted in type IV–V subjects, but due to enhanced cathepsin-D activity, SC thickness did not increase. Adjustment of pH of type I–II SC to type IV–V levels improved epidermal function. Finally, dendrites from type IV–V melanocytes were more acidic than those from type I–II subjects, and they transfer more melanosomes to the SC, suggesting that melanosome secretion could contribute to the more acidic pH of type IV–V skin. These studies show marked pigment-type differences in epidermal structure and function that are pH driven. PMID:19177137
Calibration function for the Orbitrap FTMS accounting for the space charge effect.
Gorshkov, Mikhail V; Good, David M; Lyutvinskiy, Yaroslav; Yang, Hongqian; Zubarev, Roman A
2010-11-01
Ion storage in an electrostatic trap has been implemented with the introduction of the Orbitrap Fourier transform mass spectrometer (FTMS), which demonstrates performance similar to that of high-field ion cyclotron resonance MS. Its high mass-spectral performance led to rapid acceptance of the Orbitrap FTMS for Life Sciences applications. The basics of Orbitrap operation are well documented; however, as in any ion-trap MS technology, its performance is limited by interactions between the ion clouds. These interactions result in ion cloud couplings, systematic errors in measured masses, interference between ion clouds of different size yet with close m/z ratios, etc. In this work, we have characterized the space-charge effect on the measured frequency for the Orbitrap FTMS, exploring the possibility of achieving sub-ppm levels of mass measurement accuracy (MMA) for peptides over a wide range of total ion populations. As a result of this characterization, we propose an m/z calibration law for the Orbitrap FTMS that accounts for the total ion population present in the trap during a data acquisition event. Using this law, we were able to achieve a zero-space-charge MMA limit of 80 ppb for the commercial Orbitrap FTMS system and sub-ppm MMA over a wide range of total ion populations, with automatic gain control values varying from 10 to 10^7.
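The idea of a population-aware calibration law can be sketched as a least-squares fit. The functional form m/z = A/f^2 + B·N/f^2 (f the measured frequency, N the total ion population) and all constants here are illustrative assumptions, not the published calibration function:

```python
import numpy as np

# Hedged sketch: fit an assumed space-charge-aware calibration law
#   m/z = A / f^2  +  B * N / f^2
# where A, B are calibration constants, f the axial frequency and N the
# total ion population (AGC reading).  Form and values are illustrative.

def fit_calibration(freqs, populations, true_mz):
    # Design matrix: columns 1/f^2 and N/f^2
    X = np.column_stack([1.0 / freqs**2, populations / freqs**2])
    coeffs, *_ = np.linalg.lstsq(X, true_mz, rcond=None)
    return coeffs  # (A, B)

def apply_calibration(coeffs, freq, population):
    A, B = coeffs
    return A / freq**2 + B * population / freq**2

# Synthetic calibrant data generated from the assumed law itself
A_true, B_true = 3.0e13, 2.0e5
f = np.array([4.0e5, 3.0e5, 2.5e5, 2.0e5])
N = np.array([1.0e5, 5.0e5, 1.0e6, 5.0e6])
mz = A_true / f**2 + B_true * N / f**2

A_fit, B_fit = fit_calibration(f, N, mz)
```

Because the synthetic data obey the assumed law exactly, the fit recovers A and B; on real spectra the N-dependent term is what absorbs the space-charge frequency shift.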
Losh, Molly; Capps, Lisa
2006-09-01
In this study, the authors investigate emotional understanding in autism through a discourse analytic framework to provide a window into children's strategies for interpreting emotional versus nonemotional encounters and consider the implications for the mechanisms underlying emotional understanding in typical development. Accounts were analyzed for thematic content and discourse structure. Whereas high-functioning children with autism were able to discuss contextually appropriate accounts of simple emotions, their strategies for interpreting all types of emotional (but not nonemotional) experiences differed from those used by typically developing children. High-functioning children with autism were less inclined to organize their emotional accounts in personalized causal-explanatory frameworks and displayed a tendency to describe visually salient elements of experiences seldom observed among comparison children. Findings suggest that children with autism possess less coherent representations of emotional experiences and use alternative strategies for interpreting emotionally evocative encounters. Discussion focuses on the significance of these findings for informing the nature of emotional dysfunction in autism as well as implications for theories of emotional understanding in typical development.
Advances in understanding ventromedial prefrontal function: the accountant joins the executive.
Fellows, Lesley K
2007-03-27
Studies of the brain basis of decision-making and economic behavior are providing a new perspective on the organization and functions of human prefrontal cortex. This line of inquiry has focused particularly on the ventral and medial portions of prefrontal cortex, arguably the most enigmatic regions of the "enigmatic frontal lobes." This review highlights recent advances in the cognitive neuroscience of decision making and neuroeconomics and discusses how these findings can inform clinical thinking about frontal lobe dysfunction.
NASA Astrophysics Data System (ADS)
Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.
2017-02-01
Based on an accurate representation of the He+-He angular differential scattering cross sections, comprising both elastic and charge-exchange collisions, we performed detailed numerical simulations of the ion velocity distribution function (IVDF) using the Monte Carlo collision (MCC) method. The results of the simulations are validated by comparison with experimental data on ion mobility and transverse diffusion. The IVDF simulation study shows that, owing to the significant effect of scattering in elastic collisions, the IVDF cannot be separated into a product of two independent distributions in the directions transverse and parallel to the electric field.
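A minimal MCC sketch in the spirit of the study, not the authors' code: an ion drifts in a uniform field along z, free-flight times are sampled from an exponential distribution, and each collision is either charge exchange (the ion restarts at rest, cold-gas approximation) or isotropic elastic scattering. All parameters are dimensionless and illustrative:

```python
import math
import random

def simulate_drift(n_collisions=20000, accel=1.0, nu=1.0, p_cx=0.5, seed=1):
    random.seed(seed)
    vx = vy = vz = 0.0
    t_total = z_total = 0.0
    for _ in range(n_collisions):
        dt = -math.log(1.0 - random.random()) / nu    # exponential free time
        z_total += vz * dt + 0.5 * accel * dt * dt    # free flight in the field
        vz += accel * dt
        t_total += dt
        if random.random() < p_cx:
            vx = vy = vz = 0.0                        # charge-exchange collision
        else:
            speed = math.sqrt(vx * vx + vy * vy + vz * vz)
            ct = 2.0 * random.random() - 1.0          # isotropic elastic scatter
            st = math.sqrt(1.0 - ct * ct)
            phi = 2.0 * math.pi * random.random()
            vx = speed * st * math.cos(phi)
            vy = speed * st * math.sin(phi)
            vz = speed * ct
    return z_total / t_total                          # mean drift velocity

v_drift = simulate_drift()
```

With p_cx = 1 this reduces to the classic charge-exchange-only model with a one-sided parallel IVDF; the elastic branch is what couples the parallel and transverse motion, as the abstract emphasizes.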
'A Leg to Stand On' by Oliver Sacks: a unique autobiographical account of functional paralysis.
Stone, Jon; Perthen, Jo; Carson, Alan J
2012-09-01
Oliver Sacks, the well-known neurologist and writer, published his fourth book, 'A Leg to Stand On', in 1984, following an earlier essay, 'The Leg', in 1982. The book described his recovery after a fall in a remote region of Norway in which he injured his leg. Following surgery to reattach his quadriceps muscle, he experienced an emotional period in which his leg no longer felt a part of his body, and he struggled to regain his ability to walk. Sacks attributed the experience to a neurologically determined disorder of body-image and body-ego induced by peripheral injury. In the first edition of his book Sacks explicitly rejected the diagnosis of 'hysterical paralysis' as it was then understood, although he approached this diagnosis more closely in subsequent revisions. In this article we propose that, in the light of better understanding of functional neurological symptoms, Sacks' experiences deserve to be reappraised as a unique insight into a genuinely experienced functional/psychogenic leg paralysis following injury.
NASA Astrophysics Data System (ADS)
Daswani, Ujla; Sharma, Pratibha; Kumar, Ashok
2015-01-01
The benzothiazole moiety plays an important role in medicinal chemistry, with a wide range of pharmacological activities. Herein, a simple benzothiazole derivative, 2-chlorobenzothiazole (2CBT), has been analyzed. The spectroscopic properties of the target compound were examined by FT-IR (4400-450 cm-1), FT-Raman (4000-50 cm-1), and NMR techniques. The 1H and 13C NMR spectra were recorded in DMSO. Theoretical calculations were performed by the ab initio Hartree-Fock and Density Functional Theory (DFT)/B3LYP methods using various basis set combinations. The scaled B3LYP/6-311++G(d,p) results closely reproduce the experimental findings. Electronic absorption spectra, along with energies and oscillator strengths, were obtained by the TDDFT method. Atomic charges have also been reported. The total density isosurface and the total density mapped with the electrostatic potential surface (MESP) have been shown.
Autobiographical accounts of sensing in Asperger syndrome and high-functioning autism.
Elwin, Marie; Ek, Lena; Schröder, Agneta; Kjellin, Lars
2012-10-01
Sensory experiences in Asperger syndrome (AS) or high-functioning autism (HFA) were explored by qualitative content analysis of autobiographical texts by persons with AS/HFA. Predetermined categories of hyper- and hyposensitivity were applied to texts. Hypersensitivity consists of strong reactions and heightened apprehension in reaction to external stimuli, sometimes together with overfocused or unselective attention. It was common in vision, hearing, and touch. In contrast, hyposensitivity was frequent in reaction to internal and body stimuli such as interoception, proprioception, and pain. It consists of less registration, discrimination, and recognition of stimuli as well as cravings for specific stimuli. Awareness of the strong impact of sensitivity is essential for creating good environments and encounters in the context of psychiatric and other health care.
Automated attendance accounting system
NASA Technical Reports Server (NTRS)
Chapman, C. P. (Inventor)
1973-01-01
An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressible buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as temporary storage. Data from the terminals are applied to the data buffers on a digit-by-digit basis for transfer via a multiplexer to the computer.
Hall, Matthew L.; Ferreira, Victor S.; Mayberry, Rachel I.
2014-01-01
One of the most basic functions of human language is to convey who did what to whom. In the world’s languages, the order of these three constituents (subject (S), verb (V), and object (O)) is uneven, with SOV and SVO being most common. Recent experiments using experimentally-elicited pantomime provide a possible explanation for the prevalence of SOV, but extant explanations for the prevalence of SVO could benefit from further empirical support. Here, we test whether SVO might emerge because (a) SOV is not well suited for describing reversible events (a woman pushing a boy), and (b) pressures to be efficient and mention subjects before objects conspire to rule out many other alternatives. We tested this by asking participants to describe reversible and non-reversible events in pantomime, and instructed some participants to be consistent in the form of their gestures and to teach them to the experimenter. These manipulations led to the emergence of SVO in speakers of both English (SVO) and Turkish (SOV). PMID:24641486
Hsu, Yung-Fong; Doble, Christopher W
2015-02-01
The study of thresholds for discriminability has been of long-standing interest in psychophysics. While threshold theories embrace the concept of discrete-state thresholds, signal detection theory discounts such a concept. In this paper we concern ourselves with the concept of thresholds from the discrete-state modelling viewpoint. In doing so, we find it necessary to clarify some fundamental issues germane to the psychometric function (PF), which is customarily constructed using psychophysical methods with a binary-response format. We challenge this response format and argue that response confidence also plays an important role in the construction of PFs, and thus should have some impact on threshold estimation. We motivate the discussion by adopting a three-state threshold theory for response confidence proposed by Krantz (1969, Psychol. Rev., 76, 308-324), which is a modification of Luce's (1963, Psychol. Rev., 70, 61-79) low-threshold theory. In particular, we discuss the case in which the practice of averaging over order (or position) is enforced in data collection. Finally, we illustrate the fit of the Luce-Krantz model to data from a line-discrimination task with response confidence.
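The binary-response psychometric function (PF) the paper starts from can be illustrated with a minimal two-parameter logistic fit by maximum likelihood over a coarse grid. Names, grid, and data are ours, and this sketch omits the response-confidence categories of the Luce-Krantz model:

```python
import math

def pf(x, alpha, beta):
    # Logistic psychometric function: alpha = threshold (P = 0.5), beta = slope
    return 1.0 / (1.0 + math.exp(-(x - alpha) / beta))

def fit_pf(xs, n_yes, n_total):
    # Dependency-free grid-search maximum likelihood (illustrative only)
    best, best_ll = None, -float("inf")
    for a10 in range(0, 101):            # alpha in [0.0, 10.0], step 0.1
        for b10 in range(1, 51):         # beta in (0.0, 5.0], step 0.1
            a, b = a10 / 10.0, b10 / 10.0
            ll = 0.0
            for x, k, n in zip(xs, n_yes, n_total):
                p = min(max(pf(x, a, b), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best_ll, best = ll, (a, b)
    return best

# Data built from a PF with alpha = 5, beta = 1 (expected counts, rounded)
xs = [2, 3, 4, 5, 6, 7, 8]
n_total = [40] * 7
n_yes = [round(40 * pf(x, 5.0, 1.0)) for x in xs]
alpha_hat, beta_hat = fit_pf(xs, n_yes, n_total)
```

Extending this to response confidence would replace the binary likelihood with a multinomial one over confidence categories, which is where the discrete-state threshold models of the paper come in.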
A method to account for outliers in the development of safety performance functions.
El-Basyouny, Karim; Sayed, Tarek
2010-07-01
Accident data sets can include some unusual data points that are not typical of the rest of the data. The presence of these data points (usually termed outliers) can have a significant impact on the estimates of the parameters of safety performance functions (SPFs). Few studies have considered outlier analysis in the development of SPFs; in those studies, the practice has been to identify and then exclude outliers from further analysis. This paper introduces alternative mixture models based on multivariate Poisson lognormal (MVPLN) regression. The proposed approach presents outlier-resistant modeling techniques that provide robust safety inferences by down-weighting the outlying observations rather than rejecting them. The first proposed model is a scale-mixture model obtained by replacing the normal distribution in the Poisson-lognormal hierarchy with the Student t distribution, which has heavier tails. The second model is a two-component mixture (contaminated normal model) in which it is assumed that most of the observations come from a basic distribution, whereas the remaining few outliers arise from an alternative distribution with a larger variance. The results indicate that the estimates of the extra-Poisson variation parameters were considerably smaller under the mixture models, leading to higher precision. Also, both mixture models identified the same set of outliers. In terms of goodness-of-fit, both mixture models outperformed the MVPLN. The outlier-rejecting MVPLN model provided a superior fit in terms of a much smaller DIC and smaller standard deviations for the parameter estimates. However, this approach tends to underestimate uncertainty by producing too-small standard deviations for the parameter estimates, which may lead to incorrect conclusions. It is recommended that the proposed outlier-resistant modeling techniques be used unless the exclusion of the outlying observations can be justified for data-related reasons.
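The down-weighting mechanism behind the Student-t scale mixture can be conveyed with an iteratively reweighted location estimate: each observation receives weight (nu + 1)/(nu + r^2), with r its standardized residual, so outliers are discounted rather than rejected. This is a textbook simplification with an assumed fixed unit scale, not the paper's MVPLN machinery:

```python
def t_weights(data, center, scale, nu=4.0):
    # Student-t weights: heavy tails translate into small weights for outliers
    return [(nu + 1.0) / (nu + ((x - center) / scale) ** 2) for x in data]

def robust_location(data, nu=4.0, iters=50):
    mu = sum(data) / len(data)          # start from the plain mean
    scale = 1.0                         # fixed unit scale for this sketch
    for _ in range(iters):
        w = t_weights(data, mu, scale, nu)
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

data = [0.1, -0.2, 0.05, 0.15, 8.0]    # one gross outlier
mu_robust = robust_location(data)
mu_plain = sum(data) / len(data)       # dragged toward the outlier
```

The robust estimate settles near the bulk of the data while the plain mean is pulled toward 8.0, which is the qualitative behaviour the mixture models buy for SPF parameters.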
Keeping Accountability Systems Accountable
ERIC Educational Resources Information Center
Foote, Martha
2007-01-01
The standards and accountability movement in education has undeniably transformed schooling throughout the United States. Even before President Bush signed the No Child Left Behind (NCLB) Act into law in January 2002, mandating annual public school testing in English and math for grades 3-8 and once in high school, most states had already…
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
Anderson, Rachel M.; Cosme, Caitlin V.; Glanz, Ryan M.; Miller, Mary C.; Romig-Martin, Sara A.; LaLumiere, Ryan T.
2015-01-01
The prelimbic region (PL) of the medial prefrontal cortex (mPFC) is implicated in the relapse of drug-seeking behavior. Optimal mPFC functioning relies on synaptic connections involving dendritic spines in pyramidal neurons, whereas prefrontal dysfunction resulting from elevated glucocorticoids, stress, aging, and mental illness are each linked to decreased apical dendritic branching and spine density in pyramidal neurons in these cortical fields. The fact that cocaine use induces activation of the stress-responsive hypothalamo-pituitary-adrenal axis raises the possibility that cocaine-related impairments in mPFC functioning may be manifested by similar changes in neuronal architecture in mPFC. Nevertheless, previous studies have generally identified increases, rather than decreases, in structural plasticity in mPFC after cocaine self-administration. Here, we use 3D imaging and analysis of dendritic spine morphometry to show that chronic cocaine self-administration leads to mild decreases of apical dendritic branching, prominent dendritic spine attrition in PL pyramidal neurons, and working memory deficits. Importantly, these impairments were largely accounted for in groups of rats that self-administered cocaine compared with yoked-cocaine- and saline-matched counterparts. Follow-up experiments failed to demonstrate any effects of either experimenter-administered cocaine or food self-administration on structural alterations in PL neurons. Finally, we verified that the cocaine self-administration group was distinguished by more protracted increases in adrenocortical activity compared with yoked-cocaine- and saline-matched controls. These studies suggest a mechanism whereby increased adrenocortical activity resulting from chronic cocaine self-administration may contribute to regressive prefrontal structural and functional plasticity. SIGNIFICANCE STATEMENT Stress, aging, and mental illness are each linked to decreased prefrontal plasticity. Here, we show that chronic
NASA Technical Reports Server (NTRS)
Curran, R. T.; Hornfeck, W. A.
1972-01-01
The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.
Recursive Definitions of Partial Functions and Their Computations
1972-03-01
allows its familiar simplification rules, such as those for the sequential 'if-then-else' connective. Now, 'if-then-else' only has one x-set in g. This means intuitively that computing in
Nieuwenhuizen, Niels J; Green, Sol A; Chen, Xiuyin; Bailleul, Estelle J D; Matich, Adam J; Wang, Mindy Y; Atkinson, Ross G
2013-02-01
Terpenes are specialized plant metabolites that act as attractants to pollinators and as defensive compounds against pathogens and herbivores, but they also play an important role in determining the quality of horticultural food products. We show that the genome of cultivated apple (Malus domestica) contains 55 putative terpene synthase (TPS) genes, of which only 10 are predicted to be functional. This low number of predicted functional TPS genes compared with other plant species was supported by the identification of only eight potentially functional TPS enzymes in apple 'Royal Gala' expressed sequence tag databases, including the previously characterized apple (E,E)-α-farnesene synthase. In planta functional characterization of these TPS enzymes showed that they could account for the majority of terpene volatiles produced in cv Royal Gala, including the sesquiterpenes germacrene-D and (E)-β-caryophyllene, the monoterpenes linalool and α-pinene, and the homoterpene (E)-4,8-dimethyl-1,3,7-nonatriene. Relative expression analysis of the TPS genes indicated that floral and vegetative tissues were the primary sites of terpene production in cv Royal Gala. However, production of cv Royal Gala floral-specific terpenes and TPS genes was observed in the fruit of some heritage apple cultivars. Our results suggest that the apple TPS gene family has been shaped by a combination of ancestral and more recent genome-wide duplication events. The relatively small number of functional enzymes suggests that the remaining terpenes produced in floral and vegetative and fruit tissues are maintained under a positive selective pressure, while the small number of terpenes found in the fruit of modern cultivars may be related to commercial breeding strategies.
Fukui, Miho; Goda, Akiko; Komamura, Kazuo; Nakabo, Ayumi; Masaki, Mitsuru; Yoshida, Chikako; Hirotani, Shinichi; Lee-Kawabata, Masaaki; Tsujino, Takeshi; Mano, Toshiaki; Masuyama, Tohru
2016-02-01
While beta blockade improves left ventricular (LV) function in patients with chronic heart failure (CHF), the mechanisms are not well known. This study aimed to examine whether changes in myocardial collagen metabolism account for LV functional recovery following beta-blocker therapy in 62 CHF patients with reduced ejection fraction (EF). LV function was measured echocardiographically at baseline and 1, 6, and 12 months after bisoprolol therapy, along with serum markers of collagen metabolism including C-terminal telopeptide of collagen type I (CITP) and matrix metalloproteinase (MMP)-2. Deceleration time of mitral early velocity (DcT) increased even in the early phase, but LVEF improved gradually throughout the study period. Heart rate (HR) was reduced from the early stage, and CITP gradually decreased. LVEF and DcT increased more in patients with larger decreases in CITP (r = -0.33, p < 0.05; r = -0.28, p < 0.05, respectively) and HR (r = -0.31, p < 0.05; r = -0.38, p < 0.05, respectively). In addition, there were greater decreases in CITP, MMP-2, and HR from baseline to 1, 6, or 12 months in patients with above-average improvement in LVEF than in those with below-average improvement. Similar results were obtained for DcT. There was no significant correlation between the changes in HR and CITP. In conclusion, improvement in LV systolic/diastolic function was greatest in patients with the largest inhibition of collagen degradation. Changes in myocardial collagen metabolism are closely related to LV functional recovery, somewhat independently of HR reduction.
Fineberg, Sarah K; Steinfeld, Matthew; Brewer, Judson A; Corlett, Philip R
2014-01-01
Social dysfunction is a prominent and disabling aspect of borderline personality disorder. We reconsider traditional explanations for this problem, especially early disruption in the way an infant feels physical care from its mother, in terms of recent developments in computational psychiatry. In particular, social learning may depend on reinforcement learning through embodied simulations. Such modeling involves calculations based on structures outside the brain, such as the face and hands, calculations on one's own body that are used to make inferences about others. We discuss ways to test the role of embodied simulation in BPD and potential implications for treatment.
NASA Astrophysics Data System (ADS)
Gudoshnikov, A. N.; Migrov, Yu. A.
2008-11-01
Calculations to verify the Russian computer code KORSAR were carried out for the B4.1 experimental operating conditions, in which nitrogen was supplied to the reactor coolant (primary) circuit of a reactor plant model, as simulated at the PKL III integral test facility. It is shown that the dissolution of gases in the coolant has a significant effect on the thermal-hydraulic processes during long-term passive removal of heat from the primary to the secondary coolant circuit of the reactor plant model under natural-circulation conditions.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
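Two of the listed routines have compact modern equivalents; a sketch with our own function names, not the USGS Fortran routines':

```python
import math

def gaussian_cdf(x, mean=0.0, sd=1.0):
    # Gaussian (normal) CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def weibull_cdf(x, shape, scale):
    # Weibull CDF: F(x) = 1 - exp(-(x/scale)^shape) for x > 0
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-((x / scale) ** shape))
```

For example, gaussian_cdf(1.96) is approximately 0.975, the familiar two-sided 95% point, and weibull_cdf reduces to the exponential CDF when shape = 1.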
NASA Astrophysics Data System (ADS)
Happ, Fabian; Brüns, Heinz-D.; Mavraj, Gazmend; Gronwald, Frank
2016-09-01
A formalism for the computation of lightning transfer functions by the method of moments is presented for shielding structures that may consist of layered, anisotropically conducting composite materials. The composite materials, of a type widely used in spacecraft and aircraft design, are electrically characterized by an equivalent conductivity. As the basis for the quantitative analysis, the method of moments is used, where shielding surfaces can be treated by a thin-layer technique that utilizes analytical solutions inside the layer. The effect of an extended lightning channel can also be taken into account. The method is applied to geometries that resemble an actual airplane fuselage.
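The equivalent-conductivity idea can be illustrated with a generic homogenization sketch (not necessarily the paper's formulation): for a stack of layers with conductivities sigma_i and thicknesses t_i, the in-plane ("parallel") and through-thickness ("series") averages bracket the composite's behaviour:

```python
def equivalent_conductivity(sigmas, thicknesses):
    # Parallel (in-plane) average: thickness-weighted arithmetic mean
    # Series (through-thickness) average: thickness-weighted harmonic mean
    total = sum(thicknesses)
    parallel = sum(s * t for s, t in zip(sigmas, thicknesses)) / total
    series = total / sum(t / s for s, t in zip(sigmas, thicknesses))
    return parallel, series

# Illustrative values (S/m): two CFRP-like plies around a thin metallic foil
par, ser = equivalent_conductivity([1.0e4, 3.5e7, 1.0e4],
                                   [1.0e-3, 5.0e-5, 1.0e-3])
```

The large gap between the two averages for such stacks is one reason anisotropic equivalent conductivities, rather than a single scalar, are needed in shielding models.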
NASA Astrophysics Data System (ADS)
Francisco, E.; Pendás, A. Martín; Blanco, M. A.
2008-04-01
Given an N-electron molecule and an exhaustive partition of real space (R³) into m arbitrary regions Ω₁, Ω₂, …, Ωₘ (⋃ᵢ₌₁ᵐ Ωᵢ = R³), the edf program computes all the probabilities P(n₁, n₂, …, nₘ) of having exactly n₁ electrons in Ω₁, n₂ electrons in Ω₂, …, and nₘ electrons (n₁ + n₂ + ⋯ + nₘ = N) in Ωₘ. Each Ωᵢ may correspond to a single basin (atomic domain) or to several such basins (functional group). In the latter case, each atomic domain must belong to a single Ωᵢ. The program can manage both single- and multi-determinant wave functions, which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinant wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals between the molecular orbitals (MOs) over all the atomic domains. After the P(n₁, n₂, …, nₘ) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n₁, n₂, …, nₘ) probabilities into α and β spin components. Program summary: Program title: edf. Catalogue identifier: AEAJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5387. No. of bytes in distributed program, including test data, etc.: 52 381. Distribution format: tar.gz. Programming language: Fortran 77.
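For intuition, a simplified special case (not edf's multi-determinant algorithm): for a single-determinant wave function and a single region Ω, the eigenvalues λᵢ of the MO overlap matrix over Ω act as independent per-orbital probabilities, so P(n) is a Poisson-binomial distribution, computable by polynomial convolution:

```python
def electron_count_probabilities(eigenvalues):
    # Poisson-binomial distribution of the electron count in a region,
    # built up one orbital at a time by convolution.
    probs = [1.0]                       # P(0 electrons) for zero orbitals
    for lam in eigenvalues:
        new = [0.0] * (len(probs) + 1)
        for n, p in enumerate(probs):
            new[n] += p * (1.0 - lam)   # this orbital's electron outside
            new[n + 1] += p * lam       # this orbital's electron inside
        probs = new
    return probs

lams = [0.9, 0.7, 0.2]                  # illustrative overlap eigenvalues
P = electron_count_probabilities(lams)
mean_population = sum(n * p for n, p in enumerate(P))
```

The mean of the distribution equals the sum of the eigenvalues, i.e. the average electronic population of the region, one of the magnitudes edf reports.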
Introduction to Classical Density Functional Theory by a Computational Experiment
ERIC Educational Resources Information Center
Jeanmairet, Guillaume; Levy, Nicolas; Levesque, Maximilien; Borgis, Daniel
2014-01-01
We propose an in silico experiment to introduce the classical density functional theory (cDFT). Density functional theories, whether quantum or classical, rely on abstract concepts that are nonintuitive; however, they are at the heart of powerful tools and active fields of research in both physics and chemistry. They led to the 1998 Nobel Prize in…
Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.
ERIC Educational Resources Information Center
Snow, Donald R.
1989-01-01
Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)
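The table computations described there translate directly into short recurrences; for example (our code, not the article's spreadsheet): the power-series coefficients of 1/(1 - x - x²) are the Fibonacci numbers, and the Catalan numbers satisfy the convolution recurrence C(n+1) = Σ C(i)·C(n-i):

```python
def fibonacci_coeffs(n):
    # Coefficients of the generating function 1/(1 - x - x^2)
    c = [1, 1]
    while len(c) < n:
        c.append(c[-1] + c[-2])
    return c[:n]

def catalan(n):
    # Catalan numbers via the self-convolution of their generating function
    c = [1]
    for m in range(n - 1):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c
```

Each row of the spreadsheet table corresponds to one step of these recurrences, which is what makes the method a natural fit for spreadsheet software.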
Multiple multiresolution representation of functions and calculus for fast computation
Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A
2010-01-01
We describe the mathematical representations, data structures, and the implementation of the numerical calculus of functions in MADNESS, the multiresolution adaptive numerical environment for scientific simulation. In MADNESS, each smooth function is represented by an adaptive pseudo-spectral expansion in the multiwavelet basis to an arbitrary but finite precision. This extends the capabilities of most existing net-, mesh-, and spectral-based methods, in which the discretization is based on a single adaptive mesh or expansion.
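The payoff of adaptivity to a requested precision can be conveyed by a toy one-dimensional refinement loop (entirely our illustration, far simpler than multiwavelets): bisect an interval until a midpoint check of linear interpolation meets the tolerance, so the mesh is fine only where the function demands it:

```python
import math

def adaptive_mesh(f, a, b, tol, depth=0, max_depth=30):
    # Refine [a, b] until the midpoint error of linear interpolation
    # between the endpoints falls below tol (or max_depth is reached).
    fa, fb, mid = f(a), f(b), 0.5 * (a + b)
    err = abs(f(mid) - 0.5 * (fa + fb))
    if err <= tol or depth >= max_depth:
        return [(a, b)]
    return (adaptive_mesh(f, a, mid, tol, depth + 1, max_depth)
            + adaptive_mesh(f, mid, b, tol, depth + 1, max_depth))

# A narrow Gaussian: cells cluster near x = 0, stay coarse near the edges
cells = adaptive_mesh(lambda x: math.exp(-50.0 * x * x), -1.0, 1.0, 1e-4)
```

The resulting cell list covers the interval exactly, with small cells concentrated where the function varies rapidly, the same qualitative behaviour a multiwavelet expansion achieves with guaranteed precision.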
Evaluation of computing systems using functionals of a Stochastic process
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Wu, L. T.
1980-01-01
An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, which are generally referred to as functionals of a Markov process, were considered. A closed form solution of performability for the case where performance is identified with the minimum value of a functional was developed.
Computational strategies for the design of new enzymatic functions.
Świderek, K; Tuñón, I; Moliner, V; Bertran, J
2015-09-15
In this contribution, recent developments in the design of biocatalysts are reviewed, with particular emphasis on the de novo strategy. Studies based on three different reactions, Kemp elimination, Diels-Alder, and retro-aldolase, are used to illustrate the different degrees of success achieved during the last years. Finally, a section is devoted to the particular case of designed metalloenzymes. As a general conclusion, the interplay between new and more sophisticated engineering protocols and computational methods, based on molecular dynamics simulations with Quantum Mechanics/Molecular Mechanics potentials and fully flexible models, seems to constitute the bedrock for present and future successful design strategies.
A Functional Level Preprocessor for Computer Aided Digital Design.
1980-12-01
[OCR-damaged abstract; recoverable fragments:] The parsing of user input is based on that for the computer language PASCAL (Niklaus Wirth, PASCAL User Manual and Report, New York, NY: Springer-Verlag, 1978; Don Lancaster, CMOS Cookbook, Indianapolis, IN: Howard W. Sams). The procedure is the author's original design. [...] messages generated by SISL during its last run; each message has the format: subroutine generating the message, format number, and text. [Remainder of abstract illegible in source.]
Bread dough rheology: Computing with a damage function model
NASA Astrophysics Data System (ADS)
Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong
2015-01-01
We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations (stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching) is used to test the model. With the introduction of a revised strain measure that includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing, and recoil, reasonable predictions have been obtained. The numerical methods used are discussed and some examples are given.
Accurate Computation of Divided Differences of the Exponential Function,
1983-06-01
[OCR-damaged abstract; recoverable fragments:] The divided differences considered are not those of arbitrary smooth functions f, but of well-known analytic functions such as exp, sin, and cos, so their properties can be exploited. Divided differences have a bad name in practice; however, in a number of applications the functional form of f is known (e.g., exp) and can be exploited to obtain accurate results. [Remainder of abstract, an algorithm listing, is illegible in the source.]
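The cancellation problem the report addresses is easy to demonstrate for a first-order divided difference of exp. This is our own two-point illustration, not the report's algorithm: the naive quotient cancels badly for close nodes, while rewriting the numerator with expm1 stays accurate.

```python
import math

# Our sketch of exploiting the known form f = exp:
# (exp(x1) - exp(x0)) / (x1 - x0)  ==  exp(x0) * expm1(x1 - x0) / (x1 - x0),
# and expm1 avoids the catastrophic cancellation of the direct subtraction.

def dd_exp_naive(x0, x1):
    return (math.exp(x1) - math.exp(x0)) / (x1 - x0)

def dd_exp_stable(x0, x1):
    h = x1 - x0
    return math.exp(x0) * math.expm1(h) / h

x0, x1 = 1.0, 1.0 + 1e-12
print(dd_exp_naive(x0, x1))   # dominated by rounding error
print(dd_exp_stable(x0, x1))  # close to exp(1) = 2.71828..., the limiting value
```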
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.
Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia
2016-03-08
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
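The single-determinant special case of such an overlap can be sketched compactly. This is our minimal illustration under simplifying assumptions (a common orthonormal AO basis; random orbitals as stand-ins), not the paper's general algorithm: the overlap of two determinants is the determinant of the matrix of overlaps between their occupied molecular orbitals.

```python
import numpy as np

# Hedged sketch: <Phi1|Phi2> = det(S) with S_ij = <phi_i|psi_j>. In a common
# orthonormal basis, S = C1^T @ C2 for occupied MO coefficient matrices C1, C2.
def determinant_overlap(c1, c2):
    return np.linalg.det(c1.T @ c2)

rng = np.random.default_rng(0)
# Two random sets of 3 orthonormal orbitals in a 5-function basis.
c1, _ = np.linalg.qr(rng.standard_normal((5, 3)))
c2, _ = np.linalg.qr(rng.standard_normal((5, 3)))

print(determinant_overlap(c1, c1))  # identical determinants overlap to 1.0
print(determinant_overlap(c1, c2))  # |overlap| <= 1 for orthonormal orbital sets
```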
Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott
2013-01-01
Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.
Computed versus measured ion velocity distribution functions in a Hall effect thruster
Garrigues, L.; Mazouffre, S.; Bourgeois, G.
2012-06-01
We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser-induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady-state anomalous mobility profile allows the hybrid model to capture some properties, such as the time-averaged ion mean velocity. Yet, the model fails to reproduce the time evolution of the ion velocity. This fact reveals a complex underlying physics that requires accounting for the electron dynamics over short time scales. This study also shows the need for electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found to be negligible.
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; Siegel, Howard Jay; Maciejewski, Anthony A.; Koenig, Gregory A.; Groer, Christopher S.; Hilton, Marcia M.; Poole, Stephen W.; Okonski, G.; Rambharos, R.
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
Determining Roots of Complex Functions with Computer Graphics.
ERIC Educational Resources Information Center
Skala, Helen; Kowalski, Robert
1990-01-01
Describes a graphical method of approximating roots of complex functions that uses the multicolor display capabilities of microcomputers. Theorems and proofs are presented that illustrate the method, and uses in undergraduate mathematics courses are suggested, including numerical analysis and complex variables. (six references) (LRW)
Mahmud, Zabed; Malik, Syeda Umme Fahmida; Ahmed, Jahed
2016-01-01
Single-nucleotide polymorphisms (SNPs) associated with complex disorders can create, destroy, or modify protein coding sites. Single amino acid substitutions in the insulin receptor (INSR) are the most common forms of genetic variation that account for various diseases like Donohue syndrome or Leprechaunism, Rabson-Mendenhall syndrome, and type A insulin resistance. We analyzed the deleterious nonsynonymous SNPs (nsSNPs) in the INSR gene using different computational methods. Analysis of INSR was initiated with PROVEAN, followed by the PolyPhen and I-Mutant servers, to investigate the effects of 57 nsSNPs retrieved from the SNP database (dbSNP). A total of 18 mutations that were found to exert damaging effects on the INSR protein structure and function were chosen for further analysis. Among these mutations, our computational analysis suggested that 13 nsSNPs decreased protein stability and might have resulted in loss of function. Therefore, the probability of their involvement in disease predisposition increases. Given the lack of adequate prior reports on the possible deleterious effects of nsSNPs, we have systematically analyzed and characterized the functional variants in the coding region that can alter the expression and function of the INSR gene. In silico characterization of nsSNPs affecting INSR gene function can aid in better understanding of genetic differences in disease susceptibility. PMID:27840822
A general computational framework for modeling cellular structure and function.
Schaff, J; Fink, C C; Slepchenko, B; Carson, J H; Loew, L M
1997-01-01
The "Virtual Cell" provides a general system for testing cell biological mechanisms and creates a framework for encapsulating the burgeoning knowledge base comprising the distribution and dynamics of intracellular biochemical processes. It approaches the problem by associating biochemical and electrophysiological data describing individual reactions with experimental microscopic image data describing their subcellular localizations. Individual processes are collected within a physical and computational infrastructure that accommodates any molecular mechanism expressible as rate equations or membrane fluxes. An illustration of the method is provided by a dynamic simulation of IP3-mediated Ca2+ release from endoplasmic reticulum in a neuronal cell. The results can be directly compared to experimental observations and provide insight into the role of experimentally inaccessible components of the overall mechanism. PMID:9284281
ERIC Educational Resources Information Center
Prickett, Charlotte
This curriculum guide describes the accounting curriculum in the following three areas: accounting clerk, bookkeeper, and nondegreed accountant. The competencies and tasks complement the Arizona validated listing in these areas. The guide lists 24 competencies for nondegreed accountants, 10 competencies for accounting clerks, and 11 competencies…
NASA Technical Reports Server (NTRS)
Curran, R. T.
1971-01-01
A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.
ERIC Educational Resources Information Center
Sarfo, Frederick Kwaku; Amankwah, Francis; Konin, Daniel
2017-01-01
The study is aimed at investigating 1) the level of computer self-efficacy among public senior high school (SHS) teachers in Ghana and 2) the functionality of teachers' age, gender, and computer experiences on their computer self-efficacy. Four hundred and Seven (407) SHS teachers were used for the study. The "Computer Self-Efficacy"…
Frequency domain transfer function identification using the computer program SYSFIT
Trudnowski, D.J.
1992-12-01
Because the primary application of SYSFIT for BPA involves studying power system dynamics, this investigation was geared toward simulating the effects that might be encountered in studying electromechanical oscillations in power systems. Although the intended focus of this work is power system oscillations, the studies are sufficiently generic that the results can be applied to many types of oscillatory systems with closely-spaced modes. In general, there are two possible ways of solving the optimization problem. One is to use a least-squares optimization function and to write the system in such a form that the problem becomes one of linear least-squares; the solution can then be obtained using a standard least-squares technique. The other method involves using a search method to obtain the optimal model. This method allows considerably more freedom in forming the optimization function and model, but it requires an initial guess of the system parameters. SYSFIT employs this second approach. Detailed investigations were conducted into three main areas: (1) fitting to exact frequency response data of a linear system; (2) fitting to the discrete Fourier transformation of noisy data; and (3) fitting to multi-path systems. The first area consisted of investigating the effects of alternative optimization cost function options; different optimization search methods; incorrect model order; missing response data; closely-spaced poles; and closely-spaced pole-zero pairs. Within the second area, different noise colorations and levels were studied. In the third area, methods were investigated for improving fitting results by incorporating more than one system path. The report concludes with a list of guidelines and properties, developed from the study, for fitting a transfer function to the frequency response of a system using optimization search methods.
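The first approach mentioned in the abstract, recasting the fit as a linear least-squares problem, can be sketched briefly. This is our toy example (a hypothetical second-order transfer function with made-up coefficients), not the SYSFIT code: multiplying H(s) = (b0 + b1·s) / (1 + a1·s + a2·s²) through by the denominator makes the residual linear in (b0, b1, a1, a2).

```python
import numpy as np

# Hedged sketch of the linear least-squares formulation (Levy-style):
# H*(1 + a1*s + a2*s^2) = b0 + b1*s  =>  H = b0 + b1*s - a1*H*s - a2*H*s^2.
def fit_linear_ls(s, H):
    A = np.column_stack([np.ones_like(s), s, -H * s, -H * s**2])
    M = np.concatenate([A.real, A.imag])   # stack real/imag parts: real problem
    y = np.concatenate([H.real, H.imag])
    p, *_ = np.linalg.lstsq(M, y, rcond=None)
    return p                               # b0, b1, a1, a2

w = np.linspace(0.1, 10.0, 100)
s = 1j * w
true_H = (1.0 + 0.5 * s) / (1 + 0.2 * s + 0.04 * s**2)  # made-up "measured" data
b0, b1, a1, a2 = fit_linear_ls(s, true_H)
print(b0, b1, a1, a2)  # recovers 1.0, 0.5, 0.2, 0.04 from exact data
```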
A computational interactome and functional annotation for the human proteome
Garzón, José Ignacio; Deng, Lei; Murray, Diana; Shapira, Sagi; Petrey, Donald; Honig, Barry
2016-01-01
We present a database, PrePPI (Predicting Protein-Protein Interactions), of more than 1.35 million predicted protein-protein interactions (PPIs). Of these at least 127,000 are expected to constitute direct physical interactions although the actual number may be much larger (~500,000). The current PrePPI, which contains predicted interactions for about 85% of the human proteome, is related to an earlier version but is based on additional sources of interaction evidence and is far larger in scope. The use of structural relationships allows PrePPI to infer numerous previously unreported interactions. PrePPI has been subjected to a series of validation tests including reproducing known interactions, recapitulating multi-protein complexes, analysis of disease associated SNPs, and identifying functional relationships between interacting proteins. We show, using Gene Set Enrichment Analysis (GSEA), that predicted interaction partners can be used to annotate a protein’s function. We provide annotations for most human proteins, including many annotated as having unknown function. DOI: http://dx.doi.org/10.7554/eLife.18715.001 PMID:27770567
Toward high-resolution computational design of helical membrane protein structure and function
Barth, Patrick; Senes, Alessandro
2016-01-01
The computational design of α-helical membrane proteins is still in its infancy but has made important progress. De novo design has produced stable, specific and active minimalistic oligomeric systems. Computational re-engineering can improve stability and modulate the function of natural membrane proteins. Currently, the major hurdle for the field is not computational, but the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress. PMID:27273630
The default-mode, ego-functions and free-energy: a neurobiological account of Freudian ideas
Friston, K. J.
2010-01-01
This article explores the notion that Freudian constructs may have neurobiological substrates. Specifically, we propose that Freud’s descriptions of the primary and secondary processes are consistent with self-organized activity in hierarchical cortical systems and that his descriptions of the ego are consistent with the functions of the default-mode and its reciprocal exchanges with subordinate brain systems. This neurobiological account rests on a view of the brain as a hierarchical inference or Helmholtz machine. In this view, large-scale intrinsic networks occupy supraordinate levels of hierarchical brain systems that try to optimize their representation of the sensorium. This optimization has been formulated as minimizing a free-energy; a process that is formally similar to the treatment of energy in Freudian formulations. We substantiate this synthesis by showing that Freud’s descriptions of the primary process are consistent with the phenomenology and neurophysiology of rapid eye movement sleep, the early and acute psychotic state, the aura of temporal lobe epilepsy and hallucinogenic drug states. PMID:20194141
DeVer, E.A.
1987-01-01
Safeguards had its beginning in the early 1940s and has continued to grow through the stormy years of dealing with nuclear materials. MC&A plans have been developed for each facility; these include requirements for containment, surveillance, internal controls, measurements, statistics, records and report systems, and inventory certification of its nuclear materials, in the context of how precisely the inventory is known at stated risk or confidence levels. The I&E Regulations, the newest document affecting the control system, are used for testing the current MC&A plan in place at each facility. Nuclear Materials Management activities also have reporting requirements that include: (1) Annual Forecast, (2) Materials Management Plan, (3) Quarterly Status Report, (4) Assessment Report, and (5) Scrap and Excess Material Management. Data used to generate reports for both functions come from the same database and source documents at most facilities. The separation of sponsoring groups at the DOE for NM Accountability and NM Management can and does pose problems for contractors. In this paper, we try to separate and identify these overlaps at the facility and DOE level.
Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors
Technology Transfer Automated Retrieval System (TEKTRAN)
Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials, however its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...
A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry
ERIC Educational Resources Information Center
Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan
2013-01-01
A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…
Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions
ERIC Educational Resources Information Center
Moreira, M. V.; Basilio, J. C.
2012-01-01
All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
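Limitation (1) can be made concrete with a tiny implementation. This is our own example (the transfer functions below are made up), not from the article: long division of X(z) in powers of z⁻¹ produces the sample values x(k) one by one, but never an analytical expression for x(k).

```python
# Sketch of the "direct division" method: synthetic division of
# num(z^-1) / den(z^-1), returning the first n series coefficients x(k).
def direct_division(num, den, n):
    num = list(num) + [0.0] * n
    x = []
    for k in range(n):
        coef = num[k] / den[0]
        x.append(coef)
        for j, d in enumerate(den):
            num[k + j] -= coef * d
    return x

# X(z) = 1 / (1 - 0.5 z^-1): the division recovers x(k) = 0.5**k numerically,
# but the closed form must still be recognized by inspection.
print(direct_division([1.0], [1.0, -0.5], 5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```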
Effects of Computer versus Paper Administration of an Adult Functional Writing Assessment
ERIC Educational Resources Information Center
Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young
2011-01-01
This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…
Computer programs for calculation of thermodynamic functions of mixing in crystalline solutions
NASA Technical Reports Server (NTRS)
Comella, P. A.; Saxena, S. K.
1972-01-01
The computer programs Beta, GEGIM, REGSOL1, REGSOL2, Matrix, and Quasi are presented. The programs are useful in various calculations for the thermodynamic functions of mixing and the activity-composition relations in rock forming minerals.
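The kind of calculation such programs perform can be illustrated with the standard symmetric regular-solution model. The expressions below are textbook thermodynamics, but the interaction parameter and conditions are hypothetical, and this is our sketch rather than any of the listed programs (Beta, GEGIM, REGSOL1, REGSOL2, Matrix, Quasi).

```python
import math

R = 8.314  # gas constant, J/(mol K)

def regular_solution(x1, W, T):
    """Gibbs energy of mixing and activities for a binary regular solution.

    x1: mole fraction of component 1; W: interaction parameter (J/mol); T: K.
    """
    x2 = 1.0 - x1
    g_mix = R * T * (x1 * math.log(x1) + x2 * math.log(x2)) + W * x1 * x2
    a1 = x1 * math.exp(W * x2**2 / (R * T))  # activity-composition relation
    a2 = x2 * math.exp(W * x1**2 / (R * T))
    return g_mix, a1, a2

g, a1, a2 = regular_solution(x1=0.3, W=12000.0, T=1000.0)  # hypothetical values
print(g, a1, a2)
```

With W = 0 the model reduces to an ideal solution (a1 = x1), which is a convenient sanity check.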
Density functional computations for inner-shell excitation spectroscopy
NASA Astrophysics Data System (ADS)
Hu, Ching-Han; Chong, Delano P.
1996-11-01
The 1s → π* inner-shell excitation spectra of seven molecules have been studied using density functional theory along with the unrestricted generalized transition state (uGTS) approach. The exchange-correlation potential is based on a combined functional of Becke's exchange (B88) and Perdew's correlation (P86). A scaling procedure based on Clementi and Raimondi's rules for atomic screening is applied to the cc-pVTZ basis set of atoms where a partial core hole is created in the uGTS calculations. The average absolute deviation between our predicted 1s → π* excitation energies and experimental values is only 0.16 eV. Singlet-triplet splittings of C 1s → π* transitions of CO, C2H2, C2H4, and C6H6 also agree with experimental observations. The average absolute deviations of our predicted core-electron binding energies and term values are 0.23 and 0.29 eV, respectively.
Computational characterization of sodium selenite using density functional theory.
Barraza-Jiménez, Diana; Flores-Hidalgo, Manuel Alberto; Galvan, Donald H; Sánchez, Esteban; Glossman-Mitnik, Daniel
2011-04-01
In this theoretical study we used density functional theory to calculate the molecular and crystalline structures of sodium selenite. Our structural results were compared with experimental data. From the molecular structure we determined the ionization potential, electronic affinity, and global reactivity parameters like electronegativity, hardness, softness and global electrophilic index. A significant difference in the IP and EA values was observed, and this difference was dependent on the calculation method used (employing either vertical or adiabatic energies). Thus, values obtained for the electrophilic index (2.186 eV from vertical energies and 2.188 eV from adiabatic energies) were not significantly different. Selectivity was calculated using the Fukui functions. Since the Mulliken charge study predicted a negative value, it is recommended that AIM should be used in selectivity characterization. It was evident from the selectivity index that sodium atoms are the most sensitive sites to nucleophilic attack. The results obtained in this work provide data that will aid the characterization of compounds used in crop biofortification.
Computation of Schenberg response function by using finite element modelling
NASA Astrophysics Data System (ADS)
Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.
2016-05-01
Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonant sphere, in a half-dodecahedron distribution, are used to monitor its strain amplitude. The development of mechanical impedance matchers that increase the coupling of the transducers to the sphere is a major challenge because of the high frequency and small size involved. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of a simplified mass-spring model to verify whether the latter is suitable for determining the detector's sensitivity; both models give the same results.
A Computer Program for the Computation of Running Gear Temperatures Using Green's Function
NASA Technical Reports Server (NTRS)
Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.
1996-01-01
A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of them with unknown time-dependent coefficients that can be solved for using Runge-Kutta methods. The time integration is extremely efficient; therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.
NASA Astrophysics Data System (ADS)
Borgis, Daniel; Assaraf, Roland; Rotenberg, Benjamin; Vuilleumier, Rodolphe
2013-12-01
No fancy statistical objects here; we go back to the computation of one of the most basic and fundamental quantities in the statistical mechanics of fluids, namely the pair distribution functions. Those functions are usually computed in molecular simulations by using histogram techniques. We show here that they can be estimated using global information on the instantaneous forces acting on the particles, and that this leads to a reduced variance compared to the standard histogram estimators. The technique is extended successfully to the computation of three-dimensional solvent densities around tagged molecular solutes, quantities that are noisy and very slow to converge, using histograms.
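The baseline the paper improves on, the histogram estimator of g(r), can be sketched in a few lines. This is our toy (ideal-gas configuration, made-up box size and binning), not the authors' force-based estimator.

```python
import numpy as np

# Hedged sketch of the standard histogram estimator of the pair distribution
# function g(r) for particles in a periodic cubic box.
def gr_histogram(pos, box, nbins, rmax):
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                    # minimum-image convention
    dist = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(dist, bins=nbins, range=(0, rmax))
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = shell * n * (n - 1) / 2 / box**3        # expected pairs, ideal gas
    return edges[:-1], hist / ideal

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(200, 3))         # uncorrelated "ideal gas"
bins, g = gr_histogram(pos, box=10.0, nbins=20, rmax=5.0)
print(g.mean())  # close to 1: no structure in uncorrelated positions
```

The noisiness of the small-r bins in this estimator (few pairs per shell) is precisely the variance problem the force-based approach addresses.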
2013-01-01
Background: District level health system governance is recognised as an important but challenging element of health system development in low and middle-income countries. Accountability is a more recent focus in health system debates. Accountability mechanisms are governance tools that seek to regulate answerability between the health system and the community (external accountability) and/or between different levels of the health system (bureaucratic accountability). External accountability has attracted significant attention in recent years, but bureaucratic accountability mechanisms, and the interactions between the two forms of accountability, have been relatively neglected. This is an important gap given that webs of accountability relationships exist within every health system. There is a need to strike a balance between achieving accountability upwards within the health system (for example through information reporting arrangements) while at the same time allowing for the local level innovation that could improve quality of care and patient responsiveness. Methods: Using a descriptive literature review, this paper examines the factors that influence the functioning of accountability mechanisms and relationships within the district health system, and draws out the implications for responsiveness to patients and communities. We also seek to understand the practices that might strengthen accountability in ways that improve responsiveness, of the health system to citizens' needs and rights, and of providers to patients. Results: The review highlights the ways in which bureaucratic accountability mechanisms often constrain the functioning of external accountability mechanisms. For example, meeting the expectations of relatively powerful managers further up the system may crowd out efforts to respond to citizens and patients. Organisational cultures characterized by supervision and management systems focused on compliance to centrally defined outputs and targets
Fast computation of functional networks from fMRI activity: a multi-platform comparison
NASA Astrophysics Data System (ADS)
Rao, A. Ravishankar; Bordawekar, Rajesh; Cecchi, Guillermo
2011-03-01
The recent deployment of functional networks to analyze fMRI images has been very promising. In this method, the spatio-temporal fMRI data are converted to a graph-based representation, where the nodes are voxels and edges indicate the relationship between the nodes, such as the strength of correlation or causality. Graph-theoretic measures can then be used to compare different fMRI scans. However, there is a significant computational bottleneck, as the computation of functional networks with directed links takes several hours on conventional machines with single CPUs. The study in this paper shows that a GPU can be advantageously used to accelerate the computation, such that the network computation takes only a few minutes. Though GPUs have been used for the purpose of displaying fMRI images, their use in computing functional networks is novel. We describe specific techniques, such as load balancing and the use of a large number of threads, to achieve the desired speedup. Our experience in utilizing the GPU for functional network computations should prove useful to the scientific community investigating fMRI, as GPUs are a low-cost platform for addressing the computational bottleneck.
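The graph-construction step described above can be illustrated with a minimal CPU sketch in Python, assuming correlation-based, undirected edges (the paper's GPU implementation targets the far more expensive directed, causality-based networks):

```python
import numpy as np

def functional_network(ts, threshold=0.5):
    """Build an undirected functional network from voxel time series.

    ts: (n_voxels, n_timepoints) array; nodes are voxels, and an edge is
    kept where the absolute Pearson correlation exceeds `threshold`.
    (Illustrative CPU sketch only.)
    """
    corr = np.corrcoef(ts)        # (n_voxels, n_voxels) correlation matrix
    adj = np.abs(corr) > threshold
    np.fill_diagonal(adj, False)  # no self-loops
    return adj

# Toy example: two voxels sharing a signal, one independent voxel
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
sig = np.sin(t)
ts = np.vstack([sig + 0.1 * rng.standard_normal(200),
                sig + 0.1 * rng.standard_normal(200),
                rng.standard_normal(200)])
adj = functional_network(ts, threshold=0.5)
```

The voxel count and threshold here are arbitrary toy values; real fMRI volumes have tens of thousands of voxels, which is where GPU acceleration matters.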
Khang, G; Zajac, F E
1989-09-01
We have developed a planar computer model to investigate paraplegic standing induced by functional neuromuscular stimulation. The model consists of nonlinear musculotendon dynamics (pulse train activation dynamics and musculotendon actuator dynamics), nonlinear body-segmental dynamics, and a linear output-feedback control law. The model of activation dynamics is an analytic expression that characterizes the relation between the stimulus parameters (pulse width and interpulse interval) and the muscle activation. Hill's classic two-element muscle model was modified into a musculotendon actuator model in order to account for the effects of submaximal activation and tendon elasticity on the development of force by the actuator. The three-segment, multijoint body model accounts for the anterior-posterior movements of the head and trunk, the thigh, and the shank. We modeled arm movement as an external disturbance and imposed the disturbance on the body-segmental dynamics by means of a quasistatic analysis. Linearization, and at times linear approximation, of the computer model enabled us to compute a constant, linear feedback-gain matrix, whose output is the net activation needed by a dynamical joint-torque actuator. Motivated by the assumption that minimization of energy expenditure lessens muscle fatigue, we developed an algorithm that then computes how to distribute the net activation among all the muscles crossing the joint. In part II, the combined feedback control strategy is applied to the nonlinear model of musculotendon and body-segmental dynamics to study how well the body could maintain balance if this feedback control strategy were employed.
A mesh-decoupled height function method for computing interface curvature
NASA Astrophysics Data System (ADS)
Owkes, Mark; Desjardins, Olivier
2015-01-01
In this paper, a mesh-decoupled height function method is proposed and tested. The method is based on computing height functions within columns that are not aligned with the underlying mesh and have variable dimensions. Because they are decoupled from the computational mesh, the columns can be aligned with the interface normal vector, which is found to improve the curvature calculation for under-resolved interfaces where the standard height function method often fails. A computational geometry toolbox is used to compute the heights in the complex geometry that is formed at the intersection of the computational mesh and the columns. The toolbox reduces the complexity of the problem to a series of straightforward geometric operations using simplices. The proposed scheme is shown to compute more accurate curvatures than the standard height function method on coarse meshes. A combined method that uses the standard height function where it is well defined and the proposed scheme in under-resolved regions is tested. This approach achieves accurate and robust curvatures for under-resolved interface features and second-order converging curvatures for well-resolved interfaces.
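The standard height-function curvature formula that the paper builds on can be sketched as follows (a minimal 2D version; the paper's contribution is computing the heights in mesh-decoupled, normal-aligned columns, which is not reproduced here):

```python
import numpy as np

def hf_curvature(h, dx):
    """Curvature at the central column from 3 adjacent interface heights
    via the standard height-function formula kappa = h'' / (1 + h'^2)^1.5.
    h: heights [h_{i-1}, h_i, h_{i+1}]; dx: column spacing.
    """
    hx = (h[2] - h[0]) / (2 * dx)           # central first derivative
    hxx = (h[2] - 2 * h[1] + h[0]) / dx**2  # central second derivative
    return hxx / (1 + hx**2) ** 1.5

# Heights sampled from a circle of radius R: h(x) = sqrt(R^2 - x^2),
# whose curvature magnitude should be 1/R.
R, dx = 2.0, 0.05
x = np.array([-dx, 0.0, dx])
h = np.sqrt(R**2 - x**2)
kappa = hf_curvature(h, dx)
```

In a VOF code the heights come from summing volume fractions along columns rather than from an analytic interface, which is exactly where the standard method degrades on coarse meshes.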
Ansah, John P.; Malhotra, Rahul; Lew, Nicola; Chiu, Chi-Tsun; Chan, Angelique; Bayer, Steffen; Matchar, David B.
2015-01-01
This study compares projections, up to year 2040, of young-old (aged 60-79) and old-old (aged 80+) with functional disability in Singapore with and without accounting for the changing educational composition of the Singaporean elderly. Two multi-state population models, with and without accounting for educational composition respectively, were developed, parameterized with age-gender-(education)-specific transition probabilities (between active, functional disability and death states) estimated from two waves (2009 and 2011) of a nationally representative survey of community-dwelling Singaporeans aged ≥60 years (N=4,990). Probabilistic sensitivity analysis with the bootstrap method was used to obtain the 95% confidence interval of the transition probabilities. Not accounting for educational composition overestimated the young-old with functional disability by 65 percent and underestimated the old-old by 20 percent in 2040. Accounting for educational composition, the proportion of old-old with functional disability increased from 40.8 percent in 2000 to 64.4 percent by 2040; not accounting for educational composition, the proportion in 2040 was 49.4 percent. Since the health profiles, and hence care needs, of the old-old differ from those of the young-old, health care service utilization and expenditure and the demand for formal and informal caregiving will be affected, impacting health and long-term care policy. PMID:25974069
Burns, A.W.
1988-01-01
This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water-quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water-law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are attachments to the report. (USGS)
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is determined analytically from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are verified analytically employing: (1) Eckert reference method solutions, (2) the experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
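A generic wall-function calculation of the kind alluded to above can be sketched with the incompressible log law (an illustrative assumption; Barnwell's defect wall-function formulation used in the thesis differs in detail, particularly at high Mach number):

```python
import math

def friction_velocity(U, y, nu, kappa=0.41, B=5.0, tol=1e-10):
    """Solve the log law U/u_tau = (1/kappa) ln(y u_tau / nu) + B for the
    friction velocity u_tau by fixed-point iteration, given the velocity U
    at wall distance y. The wall shear stress then follows as
    tau_w = rho * u_tau**2, so the near-wall grid need not resolve the
    viscous sublayer. (Generic log-law sketch only.)
    """
    u_tau = 0.05 * U  # initial guess
    for _ in range(200):
        u_new = U / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
        if abs(u_new - u_tau) < tol:
            break
        u_tau = u_new
    return u_tau

# Example: air-like flow, U = 10 m/s sampled at y = 0.01 m, nu = 1.5e-5 m^2/s
u_tau = friction_velocity(10.0, 0.01, 1.5e-5)
```

The fixed-point iteration converges quickly because the log term varies slowly with u_tau; a Newton iteration would serve equally well.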
ERIC Educational Resources Information Center
Pondy, Dorothy, Comp.
The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…
Computation of fractional integrals via functions of hypergeometric and Bessel type
NASA Astrophysics Data System (ADS)
Kilbas, A. A.; Trujillo, J. J.
2000-06-01
The paper is devoted to the computation of fractional integrals of power exponential functions. It considers a function λ_{γ,σ}^{(β)}(z), defined for positive β and complex γ, σ and z such that Re(γ) > (1/β) − 1 and Re(z) > 0. The special cases are discussed in which λ_{γ,σ}^{(β)}(z) is expressed in terms of the Tricomi confluent hypergeometric function Ψ(a, c; x) and of the modified Bessel function of the third kind K_γ(x). Representations of these functions via fractional integrals are proved. The results obtained are applied to compute fractional integrals of power exponential functions in terms of λ_{γ,σ}^{(β)}(x), Ψ(a, c; x) and K_γ(x). Examples are considered.
ERIC Educational Resources Information Center
Kane, Thomas J.; Staiger, Douglas O.; Geppert, Jeffrey
2002-01-01
The accountability debate tends to devolve into a battle between the pro-testing and anti-testing crowds. When it comes to the design of a school accountability system, the devil is truly in the details. A well-designed accountability plan may go a long way toward giving school personnel the kinds of signals they need to improve performance.…
Locating and computing in parallel all the simple roots of special functions using PVM
NASA Astrophysics Data System (ADS)
Plagianakos, V. P.; Nousis, N. K.; Vrahatis, M. N.
2001-08-01
An algorithm is proposed for locating and computing in parallel, and with certainty, all the simple roots of any twice continuously differentiable function in any specific interval. To compute all the roots with certainty, the proposed method relies heavily on knowledge of the total number of roots within the given interval. To obtain this information we use results from topological degree theory and, in particular, the Kronecker-Picard approach. This theory gives a formula for the total number of roots of a system of equations within a given region, which can be computed in parallel. With this tool in hand, we construct a parallel procedure for the localization and isolation of all the roots by dividing the given region successively and applying the above formula to these subregions until the final domains contain at most one root. The subregions with no roots are discarded, while for the rest a modification of the well-known bisection method is employed to compute the contained root. The new aspect of the present contribution is that the computation of the total number of zeros using the Kronecker-Picard integral, as well as the localization and computation of all the roots, is performed in parallel using the parallel virtual machine (PVM). PVM is an integrated set of software tools and libraries that emulates a general-purpose, flexible, heterogeneous concurrent computing framework on interconnected computers of varied architectures. The proposed algorithm has large granularity and low synchronization, and is robust. It has been implemented and tested, and our experience is that it can compute with certainty all the roots in a given interval, even in massive computations. Performance information from massive computations related to a recently proposed conjecture due to Elbert (this issue, J. Comput. Appl. Math. 133 (2001) 65-83) is reported.
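A serial sketch of the isolate-then-bisect strategy follows, with an assumed sign-change scan standing in for the Kronecker-Picard root count, and without the PVM parallelism:

```python
import math

def isolate_roots(f, a, b, n=10000):
    """Bracket all simple roots of f in [a, b] by scanning for sign changes
    on a fine grid. (Serial sketch: the paper instead counts roots exactly
    with the Kronecker-Picard integral and distributes the work over PVM.)
    """
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return [(x0, x1) for x0, x1 in zip(xs, xs[1:]) if f(x0) * f(x1) < 0]

def bisect(f, lo, hi, tol=1e-12):
    """Standard bisection on a sign-change bracket."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# All roots of sin(x) in [1, 20]: pi, 2*pi, ..., 6*pi
roots = [bisect(math.sin, lo, hi) for lo, hi in isolate_roots(math.sin, 1.0, 20.0)]
```

Unlike the degree-theoretic count, the grid scan offers no certainty guarantee for closely spaced roots, which is precisely the gap the Kronecker-Picard formula closes.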
NASA Astrophysics Data System (ADS)
Simanovska, J.; Šteina, Māra; Valters, K.; Bažbauers, G.
2009-01-01
The prevention of pollution during the design phase of products and processes is gaining importance in environmental policy over the older, better-established principle of end-of-pipe pollution reduction. This approach requires predicting the potential environmental impacts to be avoided or reduced and prioritising the areas where action is most efficient. Currently the most appropriate method for this purpose is life cycle assessment (LCA), a method for accounting for and attributing all environmental impacts that arise during the lifetime of a product, starting with the production of raw materials and ending with the disposal or recycling of the wasted product at the end of life. The LCA, however, can be misleading if the performers of the study disregard gaps in information and the limitations of the chosen methodology. In this study we researched the environmental impact of desktop computers, using a simplified LCA method (Eco-indicator 99) and developing various scenarios (changing service life, user behaviour, energy supply, etc.). The study demonstrates that actions for improvement lie in very different areas. It also concludes that the approach to defining the functional unit must be sufficiently flexible to avoid discounting areas of potential action. Therefore, with regard to computers, we agree with other authors in using the functional unit "one computer", but suggest not binding it to service life or usage time, and instead developing several scenarios in which these parameters vary. The study also demonstrates the importance of a systemic approach when assessing complex product systems: the more complex the system, the broader the scope for potential actions. We conclude that, for computers, which are energy-using and material-intensive products, the measures to reduce environmental impacts lie not only with the producer and user of the particular product, but also with the whole national energy supply and waste management
NASA Technical Reports Server (NTRS)
Cohen, N. S.
1980-01-01
The paper presents theoretical models developed to account for the heterogeneity of composite propellants in expressing the pressure-coupled combustion response function. It is noted that the model of Lengelle and Williams (1968) furnishes a viable basis to explain the effects of heterogeneity.
Druskin, V.; Lee, Ping; Knizhnerman, L.
1996-12-31
There is now growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When the cost of computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
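The basic (non-extended) Krylov approximation of a matrix-function action can be sketched as follows, using the common example f(A)v = exp(A)v as an assumed target; the extended variant discussed above additionally builds the subspace from actions of A^{-1}:

```python
import numpy as np

def arnoldi(A, v, m):
    """m-step Arnoldi: returns V (n x m, orthonormal columns) and the
    m x m upper-Hessenberg projection H = V^T A V."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

def expm_taylor(H, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small-norm H)."""
    E, T = np.eye(len(H)), np.eye(len(H))
    for k in range(1, terms):
        T = T @ H / k
        E = E + T
    return E

def krylov_expv(A, v, m=20):
    """Approximate exp(A) v as ||v|| * V expm(H) e_1."""
    V, H = arnoldi(A, v, m)
    e1 = np.zeros(H.shape[0]); e1[0] = 1.0
    return np.linalg.norm(v) * (V @ (expm_taylor(H) @ e1))

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 20                        # symmetric, modest norm
v = rng.standard_normal(50)
approx = krylov_expv(A, v, m=20)
```

The appeal is that only the small m-dimensional exponential must be formed explicitly; the large matrix A enters only through matrix-vector products.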
NASA Technical Reports Server (NTRS)
King, H. F.; Komornicki, A.
1986-01-01
Formulas are presented relating Taylor series expansion coefficients of three functions of several variables, the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second-derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.
Code of Federal Regulations, 2014 CFR
2014-04-01
.... (2) Credit balances must have sufficient digits to accommodate the design of the game. (3) Accounting data displayed to the player may be incremented or decremented using visual effects, but the internal..., during entertaining displays of game results. (2) Progressive prizes may be added to the player's...
Code of Federal Regulations, 2013 CFR
2013-04-01
.... (2) Credit balances must have sufficient digits to accommodate the design of the game. (3) Accounting data displayed to the player may be incremented or decremented using visual effects, but the internal..., during entertaining displays of game results. (2) Progressive prizes may be added to the player's...
Code of Federal Regulations, 2011 CFR
2011-04-01
... OF CLASS II GAMES § 547.9 What are the minimum technical standards for Class II gaming system... digits to accommodate the design of the game. (3) Accounting data displayed to the player may be... audit, configuration, recall and test modes; or (ii) Temporarily, during entertaining displays of...
Code of Federal Regulations, 2010 CFR
2010-04-01
... OF CLASS II GAMES § 547.9 What are the minimum technical standards for Class II gaming system... digits to accommodate the design of the game. (3) Accounting data displayed to the player may be... audit, configuration, recall and test modes; or (ii) Temporarily, during entertaining displays of...
Code of Federal Regulations, 2012 CFR
2012-04-01
... OF CLASS II GAMES § 547.9 What are the minimum technical standards for Class II gaming system... digits to accommodate the design of the game. (3) Accounting data displayed to the player may be... audit, configuration, recall and test modes; or (ii) Temporarily, during entertaining displays of...
Liu, Jia; Yan, Zhengzheng; Pu, Yuehua; Shiu, Wen-Shin; Wu, Jianhuang; Chen, Rongliang; Leng, Xinyi; Qin, Haiqiang; Liu, Xin; Jia, Baixue; Song, Ligang; Wang, Yilong; Miao, Zhongrong; Wang, Yongjun; Liu, Liping; Cai, Xiao-Chuan
2016-10-04
The fractional pressure ratio is introduced to quantitatively assess the hemodynamic significance of severe intracranial stenosis. A computational fluid dynamics (CFD)-based method is proposed to compute this ratio non-invasively (FPRCFD) and is compared against the fractional pressure ratio measured by an invasive technique. Eleven patients with severe intracranial stenosis considered for endovascular intervention were recruited, and an invasive procedure was performed to measure the distal and aortic pressures (Pd and Pa). The fractional pressure ratio was calculated as [Formula: see text] Computed tomography angiography was used to reconstruct three-dimensional (3D) arteries for each patient. Cerebral hemodynamics was then computed for the arteries using a mathematical model governed by the Navier-Stokes equations, with the outflow conditions imposed by a model of distal resistance and compliance. The non-invasive [Formula: see text], [Formula: see text], and FPRCFD were then obtained from the CFD calculation using a 16-core parallel computer. The invasive and non-invasive parameters were compared by statistical analysis. For this group of patients, the CFD method achieved results comparable to the invasive measurements. The fractional pressure ratio and FPRCFD are very close and highly correlated, but not linearly proportional, with the percentage of stenosis. The proposed CFD method can potentially be useful in assessing the functional alteration of cerebral stenosis.
Computation of determinant expansion coefficients within the graphically contracted function method.
Gidofalvi, Gergely; Shepard, Ron
2009-11-30
Most electronic structure methods express the wavefunction as an expansion of N-electron basis functions that are chosen to be either Slater determinants or configuration state functions. Although the expansion coefficient of a single determinant may be readily computed from configuration state function coefficients for small wavefunction expansions, traditional algorithms are impractical for systems with a large number of electrons and spatial orbitals. In this work, we describe an efficient algorithm for the evaluation of a single determinant expansion coefficient for wavefunctions expanded as a linear combination of graphically contracted functions. Each graphically contracted function has significant multiconfigurational character and depends on a relatively small number of variational parameters called arc factors. Because the graphically contracted function approach expresses the configuration state function coefficients as products of arc factors, a determinant expansion coefficient may be computed recursively more efficiently than with traditional configuration interaction methods. Although the cost of computing determinant coefficients scales exponentially with the number of spatial orbitals for traditional methods, the algorithm presented here exploits two levels of recursion and scales polynomially with system size. Hence, as demonstrated through applications to systems with hundreds of electrons and orbitals, it may readily be applied to very large systems.
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
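A toy Python sketch of the idea follows (hypothetical function names; the patent's system collects performance data across many input dimensions and generates selection program code, rather than the simple lookup table used here):

```python
import timeit

def build_selector(impls, probe_sizes, make_input):
    """Time each implementation at several probe input sizes and return a
    selector that picks the empirically fastest one for a given size.
    (Illustrative sketch of the collect-then-select idea only.)
    """
    table = {}
    for n in probe_sizes:
        x = make_input(n)
        best = min(impls, key=lambda f: timeit.timeit(lambda: f(x), number=3))
        table[n] = best
    def select(n):
        nearest = min(table, key=lambda k: abs(k - n))  # nearest probed size
        return table[nearest]
    return select

# Two hypothetical ways to sum squares; relative speed can vary with input size.
def sum_squares_loop(xs):
    total = 0
    for v in xs:
        total += v * v
    return total

def sum_squares_gen(xs):
    return sum(v * v for v in xs)

select = build_selector([sum_squares_loop, sum_squares_gen],
                        probe_sizes=[10, 10000],
                        make_input=lambda n: list(range(n)))
impl = select(5000)
```

In the patented scheme the "selector" is generated code wired into call sites, so the dispatch cost is incurred at build time rather than per call.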
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Stehlin, P.; Brogan, F. A.
1981-01-01
A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.
Networks of spiking neurons that compute linear functions using action potential timing
NASA Astrophysics Data System (ADS)
Ruf, Berthold
1999-03-01
For fast neural computations within the brain it is very likely that the timing of single firing events is relevant. Recently Maass has shown that under certain weak assumptions a weighted sum can be computed in temporal coding by leaky integrate-and-fire neurons. This construction can be extended to approximate arbitrary functions. Compared with integrate-and-fire neurons, biologically more realistic neurons contain several additional sources of nonlinear effects, such as the spatial and temporal interaction of postsynaptic potentials or voltage-gated ion channels at the soma. Here we demonstrate, with the help of computer simulations using GENESIS, that despite these nonlinearities such neurons can compute linear functions in a natural and straightforward way, based on the main principles of the construction given by Maass. One only has to assume that a neuron receives all its inputs within a time interval of approximately the length of the rising segment of its excitatory postsynaptic potentials. We also show that under certain assumptions there exists within this construction some type of activation function computed by such neurons. Finally we demonstrate that on the basis of these results it is possible to realize pattern analysis with spiking neurons in a simple way, allowing a mixture of several learned patterns to be analysed within a few milliseconds.
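The Maass-style construction referred to above reduces to a few lines under the idealising assumption of linearly rising postsynaptic potentials (the paper's point is that detailed GENESIS neuron models behave similarly despite their nonlinearities):

```python
def firing_time(weights, spike_times, theta):
    """Firing time of an idealised neuron whose PSPs rise linearly with
    slope w_i after each input spike at t_i: once all inputs have arrived,
    the membrane potential is sum_i w_i * (t - t_i), so the threshold
    theta is crossed at t = (theta + sum_i w_i t_i) / sum_i w_i.
    The output spike time is thus a weighted average of the input spike
    times plus a constant offset, i.e. a linear function computed purely
    in temporal coding. (Idealised sketch; assumes the crossing occurs
    after the last input spike.)
    """
    wsum = sum(weights)
    return (theta + sum(w * t for w, t in zip(weights, spike_times))) / wsum

# Two equally weighted inputs at t = 0 and t = 2; threshold 4
t_out = firing_time([1.0, 1.0], [0.0, 2.0], theta=4.0)
```

Earlier input spikes (smaller t_i) produce an earlier output spike, which is how the timing code carries the value of the computed sum.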
NASA Technical Reports Server (NTRS)
Trosset, Michael W.
1999-01-01
Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
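A sketch of the proposed procedure for a squared-exponential (RBF) stationary Gaussian process, in Python rather than the report's S-PLUS (the specific kernel, length scale, and jitter are illustrative assumptions):

```python
import numpy as np

def sample_gp_objective(x, length_scale=0.5, seed=0):
    """Draw one realization of a zero-mean stationary Gaussian process with
    a squared-exponential covariance on the points x, for use as a
    pseudorandom test objective for optimization benchmarks.
    """
    rng = np.random.default_rng(seed)
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2)          # RBF covariance matrix
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(x)))   # jitter for stability
    return L @ rng.standard_normal(len(x))              # f ~ N(0, K)

x = np.linspace(0.0, 5.0, 200)
f = sample_gp_objective(x)
```

Fixing the seed makes each sampled objective reproducible, so different optimizers can be compared on identical test functions.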
MRIVIEW: An interactive computational tool for investigation of brain structure and function
Ranken, D.; George, J.
1993-12-31
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
A fast computation method for MUSIC spectrum function based on circular arrays
NASA Astrophysics Data System (ADS)
Du, Zhengdong; Wei, Ping
2015-02-01
The large computational cost of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array; then, in calculating the MUSIC spectrum, the cyclic structure of the steering vector allows the inner products in the spatial-spectrum evaluation to be realised as cyclic convolutions. The computational cost of the MUSIC spectrum is markedly lower than that of the conventional method, making this a practical approach to MUSIC spectrum computation for circular arrays.
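For reference, the baseline MUSIC pseudospectrum that such methods accelerate can be sketched as below (shown for a uniform linear array for simplicity; the paper's fast method applies the cyclic-convolution trick to circular arrays, which is not reproduced here):

```python
import numpy as np

def music_spectrum(X, n_sources, n_grid=360):
    """Standard MUSIC pseudospectrum for a uniform linear array with
    half-wavelength spacing. X: (n_sensors, n_snapshots) complex data.
    (Baseline direct evaluation of the inner products the paper speeds up.)
    """
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
    w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, : n - n_sources]             # noise-subspace eigenvectors
    angles = np.linspace(-90, 90, n_grid)
    k = np.arange(n)[:, None]
    A = np.exp(1j * np.pi * k * np.sin(np.deg2rad(angles)))  # steering matrix
    proj = np.linalg.norm(En.conj().T @ A, axis=0) ** 2
    return angles, 1.0 / proj              # peaks where steering _|_ noise subspace

# One source at +20 degrees, 8-element ULA, high SNR
rng = np.random.default_rng(2)
n, snaps, doa = 8, 200, 20.0
a = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(doa)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a, s) + 0.01 * (rng.standard_normal((n, snaps))
                             + 1j * rng.standard_normal((n, snaps)))
angles, P = music_spectrum(X, n_sources=1)
est = angles[np.argmax(P)]
```

The cost driver is the projection of every grid steering vector onto the noise subspace; exploiting cyclic structure lets that step be done with FFT-length convolutions instead.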
ERIC Educational Resources Information Center
Moskovis, L. Michael; McKitrick, Max O.
Outlined in this two-part document is a model for the implementation of a business-industry oriented program designed to provide high school seniors with updated training in the skills and concepts necessary for developing competencies in entry-level and second-level accounting jobs that involve accounts receivable, accounts payable, and payroll…
Implementation of linear-scaling plane wave density functional theory on parallel computers
NASA Astrophysics Data System (ADS)
Skylaris, Chris-Kriton; Haynes, Peter D.; Mostofi, Arash A.; Payne, Mike C.
We describe the algorithms we have developed for linear-scaling plane wave density functional calculations on parallel computers as implemented in the onetep program. We outline how onetep achieves plane wave accuracy with a computational cost which increases only linearly with the number of atoms by optimising directly the single-particle density matrix expressed in a psinc basis set. We describe in detail the novel algorithms we have developed for computing with the psinc basis set the quantities needed in the evaluation and optimisation of the total energy within our approach. For our parallel computations we use the general Message Passing Interface (MPI) library of subroutines to exchange data between processors. Accordingly, we have developed efficient schemes for distributing data and computational load to processors in a balanced manner. We describe these schemes in detail and in relation to our algorithms for computations with a psinc basis. Results of tests on different materials show that onetep is an efficient parallel code that should be able to take advantage of a wide range of parallel computer architectures.
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
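A root-free decomposition in a similar spirit can be sketched by matching numerator coefficients and solving a small linear system (this illustrates the goal of avoiding complex roots, not the note's exact division-and-substitution algorithm):

```python
import numpy as np

def partial_fractions_example():
    """Decompose 1/(x (x^2 + 1)) as A/x + (B x + C)/(x^2 + 1) without
    finding the complex roots of x^2 + 1, by matching numerator
    coefficients. Clearing denominators gives
        1 = A (x^2 + 1) + (B x + C) x,
    so matching powers of x yields
        x^2: A + B = 0,   x^1: C = 0,   x^0: A = 1.
    """
    M = np.array([[1.0, 1.0, 0.0],   # coefficient of x^2
                  [0.0, 0.0, 1.0],   # coefficient of x^1
                  [1.0, 0.0, 0.0]])  # coefficient of x^0
    rhs = np.array([0.0, 0.0, 1.0])
    A, B, C = np.linalg.solve(M, rhs)
    return A, B, C

A, B, C = partial_fractions_example()   # 1/x - x/(x^2 + 1)
```

The resulting identity 1/(x(x²+1)) = 1/x − x/(x²+1) can be checked by recombining over the common denominator.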
A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function
ERIC Educational Resources Information Center
Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.; Blemker, Silvia S.
2015-01-01
Purpose: This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method: We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy…
Identifying Differential Item Functioning in Multi-Stage Computer Adaptive Testing
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Li, Johnson
2013-01-01
The purpose of this study is to evaluate the performance of CATSIB (Computer Adaptive Testing-Simultaneous Item Bias Test) for detecting differential item functioning (DIF) when items in the matching and studied subtest are administered adaptively in the context of a realistic multi-stage adaptive test (MST). MST was simulated using a 4-item…
The nonverbal communication functions of emoticons in computer-mediated communication.
Lo, Shao-Kang
2008-10-01
Most past studies assume that computer-mediated communication (CMC) lacks nonverbal communication cues. However, Internet users have devised and learned to use emoticons to assist their communications. This study examined emoticons as a communication tool that, although presented as verbal cues, perform nonverbal communication functions. We therefore termed emoticons quasi-nonverbal cues.
Maple (Computer Algebra System) in Teaching Pre-Calculus: Example of Absolute Value Function
ERIC Educational Resources Information Center
Tuluk, Güler
2014-01-01
Modules in Computer Algebra Systems (CAS) make Mathematics interesting and easy to understand. The present study focused on the implementation of the algebraic, tabular (numerical), and graphical approaches used for the construction of the concept of absolute value function in teaching mathematical content knowledge along with Maple 9. The study…
ERIC Educational Resources Information Center
Hetzroni, Orit E.; Tannous, Juman
2004-01-01
This study investigated the use of computer-based intervention for enhancing communication functions of children with autism. The software program was developed based on daily life activities in the areas of play, food, and hygiene. The following variables were investigated: delayed echolalia, immediate echolalia, irrelevant speech, relevant…
ERIC Educational Resources Information Center
Zwick, Rebecca; And Others
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
PuFT: Computer-Assisted Program for Pulmonary Function Tests.
ERIC Educational Resources Information Center
Boyle, Joseph
1983-01-01
The PuFT computer program (Microsoft BASIC) is designed to help in understanding and interpreting pulmonary function tests (PFTs). The program provides predicted values for common PFTs after entry of patient data, calculates and plots a graph simulating forced vital capacity (FVC), and allows observation of the effects on predicted PFT values and the FVC curve when…
ERIC Educational Resources Information Center
Chalmers, R. Philip; Counsell, Alyssa; Flora, David B.
2016-01-01
Differential test functioning, or DTF, occurs when one or more items in a test demonstrate differential item functioning (DIF) and the aggregate of these effects are witnessed at the test level. In many applications, DTF can be more important than DIF when the overall effects of DIF at the test level can be quantified. However, optimal statistical…
A new Fortran 90 program to compute regular and irregular associated Legendre functions
NASA Astrophysics Data System (ADS)
Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus
2010-12-01
We present a modern Fortran 90 code to compute the regular Plm(x) and irregular Qlm(x) associated Legendre functions for all x∈(-1,+1) (on the cut) and |x|>1 and integer degree (l) and order (m). The code applies either forward or backward recursion in (l) and (m) in the stable direction, starting with analytically known values for forward recursion and considering both a Wronskian-based and a modified Miller's method for backward recursion. While some Fortran 77 codes existed for computing the functions off the cut, no Fortran 90 code was available for accurately computing the functions for all real values of x different from x=±1, where the irregular functions are not defined.
Program summary
Program title: Associated Legendre Functions
Catalogue identifier: AEHE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 6722
No. of bytes in distributed program, including test data, etc.: 310 210
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Linux systems
Operating system: Linux
RAM: bytes
Classification: 4.7
Nature of problem: Compute the regular and irregular associated Legendre functions for integer values of the degree and order and for all real arguments. The computation of the interaction of two electrons, 1/|r-r'|, in prolate spheroidal coordinates is used as one example where these functions are required for all values of the argument, and we are able to easily compare the series expansion in associated Legendre functions and the exact value.
Solution method: The code evaluates the regular and irregular associated Legendre functions using forward recursion when |x|<1, starting the recursion with the analytically known values of the first two members of the sequence. For values of
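The forward-recursion strategy on the cut can be sketched in Python rather than Fortran 90; the function name and the normalization (Condon-Shortley phase) below are our assumptions, not details of the distributed code:

```python
import math

def assoc_legendre(l, m, x):
    """Regular associated Legendre function P_l^m(x) for |x| < 1,
    by forward recursion in l (the stable direction on the cut),
    seeded from the analytically known first two members of the
    sequence.  A sketch of the strategy, not the Fortran 90 code."""
    if m > l:
        return 0.0
    # Seed: P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^{m/2}
    pmm = 1.0
    s = math.sqrt((1.0 - x) * (1.0 + x))
    for k in range(1, m + 1):
        pmm *= -(2 * k - 1) * s
    if l == m:
        return pmm
    # Second seed: P_{m+1}^m(x) = x (2m+1) P_m^m(x)
    pmmp1 = x * (2 * m + 1) * pmm
    # Forward recursion: (l-m) P_l^m = x(2l-1) P_{l-1}^m - (l+m-1) P_{l-2}^m
    for ll in range(m + 2, l + 1):
        pmm, pmmp1 = pmmp1, (x * (2 * ll - 1) * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
    return pmmp1

print(assoc_legendre(2, 0, 0.5))   # → -0.125
```

For |x| > 1 or for the irregular functions Qlm, other recursion directions become the stable ones, which is exactly why the paper's code switches between forward and backward recursion.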
On computation and use of Fourier coefficients for associated Legendre functions
NASA Astrophysics Data System (ADS)
Gruber, Christian; Abrykosov, Oleh
2016-06-01
The computation of spherical harmonic series in very high resolution is known to be delicate in terms of performance and numerical stability. A major problem is to keep results inside a numerical range of the used data type during calculations as under-/overflow arises. Extended data types are currently not desirable since the arithmetic complexity will grow exponentially with higher resolution levels. If the associated Legendre functions are computed in the spectral domain, then regular grid transformations can be applied to be highly efficient and convenient for derived quantities as well. In this article, we compare three recursive computations of the associated Legendre functions as trigonometric series, thereby ensuring a defined numerical range for each constituent wave number, separately. The results to a high degree and order show the numerical strength of the proposed method. First, the evaluation of Fourier coefficients of the associated Legendre functions has been done with respect to the floating-point precision requirements. Secondly, the numerical accuracy in the cases of standard double and long double precision arithmetic is demonstrated. Following Bessel's inequality, the obtained accuracy estimates of the Fourier coefficients are directly transferable to the associated Legendre functions themselves and to derived functionals as well. Therefore, they can provide an essential insight to modern geodetic applications that depend on efficient spherical harmonic analysis and synthesis beyond 5 × 5 arcmin resolution.
Computer generation of symbolic network functions - A new theory and implementation.
NASA Technical Reports Server (NTRS)
Alderson, G. E.; Lin, P.-M.
1972-01-01
A new method is presented for obtaining network functions in which some, none, or all of the network elements are represented by symbolic parameters (i.e., symbolic network functions). Unlike the topological tree enumeration or signal flow graph methods generally used to derive symbolic network functions, the proposed procedure employs fast, efficient, numerical-type algorithms to determine the contribution of those network branches that are not represented by symbolic parameters. A computer program called NAPPE (for Network Analysis Program using Parameter Extractions) and incorporating all of the concepts discussed has been written. Several examples illustrating the usefulness and efficiency of NAPPE are presented.
Efficient algorithm for computing exact partition functions of lattice polymer models
NASA Astrophysics Data System (ADS)
Hsieh, Yu-Hsin; Chen, Chi-Ning; Hu, Chin-Kun
2016-12-01
Polymers are important macromolecules in many physical, chemical, biological and industrial problems. Studies on simple lattice polymer models are very helpful for understanding behaviors of polymers. We develop an efficient algorithm for computing exact partition functions of lattice polymer models, and we use this algorithm and personal computers to obtain exact partition functions of the interacting self-avoiding walks with N monomers on the simple cubic lattice up to N = 28 and on the square lattice up to N = 40. Our algorithm can be extended to study other lattice polymer models, such as the HP model for protein folding and the charged HP model for protein aggregation. It also provides references for checking accuracy of numerical partition functions obtained by simulations.
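For intuition, a brute-force enumeration (a toy stand-in for the paper's efficient algorithm, feasible only for very small N) reproduces the exact density of states of short interacting self-avoiding walks (ISAW) on the square lattice, from which the partition function follows:

```python
import math
from collections import Counter

def isaw_density_of_states(n):
    """Count self-avoiding walks of n monomers (n-1 steps) on the square
    lattice, binned by the number of non-bonded nearest-neighbour contacts.
    Plain depth-first enumeration; only practical for small n."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    counts = Counter()

    def extend(walk, occupied):
        if len(walk) == n:
            # Contacts: occupied nearest-neighbour pairs that are not
            # chain neighbours; checking only +x and +y counts each once.
            m = 0
            for i, p in enumerate(walk):
                for dx, dy in ((1, 0), (0, 1)):
                    q = (p[0] + dx, p[1] + dy)
                    if q in occupied and abs(i - walk.index(q)) > 1:
                        m += 1
            counts[m] += 1
            return
        x, y = walk[-1]
        for dx, dy in steps:
            q = (x + dx, y + dy)
            if q not in occupied:
                occupied.add(q)
                walk.append(q)
                extend(walk, occupied)
                walk.pop()
                occupied.remove(q)

    extend([(0, 0)], {(0, 0)})
    return counts

def partition_function(counts, beta):
    """Z(beta) = sum over walks of exp(beta * contacts)."""
    return sum(c * math.exp(beta * m) for m, c in counts.items())

dos = isaw_density_of_states(5)
print(sum(dos.values()))   # → 100 (the number of 4-step SAWs)
```

At beta = 0 the partition function reduces to the total walk count, which matches the known square-lattice SAW count c_4 = 100 and provides exactly the kind of reference value the paper mentions for checking simulation results.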
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies, with worst-case errors many orders of magnitude larger than the correct values.
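The linear cost per frequency point can be sketched for an open-loop modal model: in normal mode coordinates each mode contributes one decoupled second-order term, so the response is a sum over modes rather than a dense matrix solve. The function name and numerical data below are illustrative, and the paper's actual formulation also covers closed-loop systems:

```python
def modal_frf(freqs, omegas, zetas, b, c):
    """Frequency response of a flexible structure in normal-mode
    coordinates.  Each mode contributes
        c_i * b_i / (w_i^2 - w^2 + 2j * zeta_i * w_i * w),
    so the cost per frequency point is linear in the number of modes."""
    out = []
    for w in freqs:
        h = 0j
        for wi, zi, bi, ci in zip(omegas, zetas, b, c):
            h += ci * bi / (wi * wi - w * w + 2j * zi * wi * w)
        out.append(h)
    return out

# Three hypothetical modes: natural frequencies, damping ratios,
# input and output modal participation factors.
H = modal_frf([0.0, 1.0],
              omegas=[1.0, 2.0, 3.0],
              zetas=[0.01, 0.01, 0.01],
              b=[1.0, 1.0, 1.0],
              c=[1.0, 0.5, 0.2])
print(H[0])   # static response: sum of c_i * b_i / w_i^2
```

Evaluating at a frequency equal to a natural frequency (here w = 1) shows the expected resonant peak limited only by the modal damping.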
Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe
2013-08-14
Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performances and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior, and the correct associated spectral frequencies, but that are slightly overdamped. This is attributed to the classical propagation approximation rather than the generation of the quantized initial conditions themselves.
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
Redox Biology: Computational Approaches to the Investigation of Functional Cysteine Residues
Marino, Stefano M.
2011-01-01
Cysteine (Cys) residues serve many functions, such as catalysis, stabilization of protein structure through disulfides, metal binding, and regulation of protein function. Cys residues are also subject to numerous post-translational modifications. In recent years, various computational tools aiming at classifying and predicting different functional categories of Cys have been developed, particularly for structural and catalytic Cys. On the other hand, given the complexity of the subject, bioinformatics approaches have been less successful for the investigation of regulatory Cys sites. In this review, we introduce different functional categories of Cys residues. For each category, an overview of state-of-the-art bioinformatics methods and tools is provided, along with examples of successful applications and potential limitations associated with each approach. Finally, we discuss Cys-based redox switches, which modify the view of distinct functional categories of Cys in proteins. Antioxid. Redox Signal. 15, 135–146. PMID:20812876
Mandonnet, Emmanuel; Duffau, Hugues
2014-01-01
Historically, cerebral processing has been conceptualized as a framework based on statically localized functions. However, a growing amount of evidence supports a hodotopical (delocalized) and flexible organization. A number of studies have reported absence of a permanent neurological deficit after massive surgical resections of eloquent brain tissue. These results highlight the tremendous plastic potential of the brain. Understanding anatomo-functional correlates underlying this cerebral reorganization is a prerequisite to restore brain functions through brain-computer interfaces (BCIs) in patients with cerebral diseases, or even to potentiate brain functions in healthy individuals. Here, we review current knowledge of neural networks that could be utilized in the BCIs that enable movements and language. To this end, intraoperative electrical stimulation in awake patients provides valuable information on the cerebral functional maps, their connectomics and plasticity. Overall, these studies indicate that the complex cerebral circuitry that underpins interactions between action, cognition and behavior should be thoroughly investigated before progress in BCI approaches can be achieved.
NASA Astrophysics Data System (ADS)
Venturi, Daniele
2016-11-01
The fundamental importance of functional differential equations has been recognized in many areas of mathematical physics, such as fluid dynamics, quantum field theory and statistical physics. For example, in the context of fluid dynamics, the Hopf characteristic functional equation was deemed by Monin and Yaglom to be "the most compact formulation of the turbulence problem", which is the problem of determining the statistical properties of the velocity and pressure fields of Navier-Stokes equations given statistical information on the initial state. However, no effective numerical method has yet been developed to compute the solution to functional differential equations. In this talk I will provide a new perspective on this general problem, and discuss recent progress in approximation theory for nonlinear functionals and functional equations. The proposed methods will be demonstrated through various examples.
NASA Astrophysics Data System (ADS)
Raimondi, L.; Spiga, D.
2015-01-01
Context. The imaging sharpness of an X-ray telescope is chiefly determined by the optical quality of its focusing optics, which in turn mostly depends on the shape accuracy and the surface finishing of the grazing-incidence X-ray mirrors that compose the optical modules. To ensure the imaging performance during the mirror manufacturing, a fundamental step is predicting the mirror point spread function (PSF) from the metrology of its surface. Traditionally, the PSF computation in X-rays is assumed to be different depending on whether the surface defects are classified as figure errors or roughness. This classical approach, however, requires setting a boundary between these two asymptotic regimes, which is not known a priori. Aims: The aim of this work is to overcome this limit by providing analytical formulae that are valid at any light wavelength, for computing the PSF of an X-ray mirror shell from the measured longitudinal profiles and the roughness power spectral density, without distinguishing spectral ranges with different treatments. Methods: The method we adopted is based on the Huygens-Fresnel principle for computing the diffracted intensity from measured or modeled profiles. In particular, we have simplified the computation of the surface integral to only one dimension, owing to the grazing incidence that reduces the influence of the azimuthal errors by orders of magnitude. The method can be extended to optical systems with an arbitrary number of reflections - in particular the Wolter-I, which is frequently used in X-ray astronomy - and can be used in both near- and far-field approximation. Finally, it accounts simultaneously for profile, roughness, and aperture diffraction. Results: We describe the formalism with which one can self-consistently compute the PSF of grazing-incidence mirrors, and we show some PSF simulations including the UV band, where the aperture diffraction dominates the PSF, and hard X-rays where the X-ray scattering has a major impact
Storing files in a parallel computing system based on user-specified parser function
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron
2014-10-21
Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
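The flow described in the claims can be sketched as a toy example: the application supplies a parser that screens files against its semantic requirements and extracts metadata before the files are distributed to storage nodes. All names, the filtering rule, and the metadata format below are invented for illustration:

```python
# Toy sketch of parser-mediated storage in a parallel file system:
# the application-supplied parser both filters files and extracts
# metadata that is stored alongside each accepted file.

def example_parser(name, data):
    """Hypothetical parser: keep only '.log' files (the 'semantic
    requirement') and extract the first line as searchable metadata."""
    if not name.endswith(".log"):
        return None                       # file fails the requirement
    first_line = data.split("\n", 1)[0]
    return {"first_line": first_line}

def store_files(files, parser, n_nodes):
    """Distribute files that pass the parser across n_nodes storage
    nodes, keeping the extracted metadata with each stored file."""
    nodes = [[] for _ in range(n_nodes)]
    for i, (name, data) in enumerate(files):
        meta = parser(name, data)
        if meta is None:
            continue                      # rejected by the parser
        nodes[i % n_nodes].append({"name": name, "data": data, "meta": meta})
    return nodes

nodes = store_files(
    [("a.log", "start\nx=1"), ("b.tmp", "junk"), ("c.log", "boot\ny=2")],
    example_parser, 2)
print(sum(len(n) for n in nodes))   # → 2 (the .tmp file was filtered out)
```

The stored metadata can then back a search index without re-reading file contents, which is the advantage the patent highlights.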
Gil L, Alejandro; Valiente, Pedro A; Pascutti, Pedro G; Pons, Tirso
2011-01-01
The development of efficient and selective antimalarials remains a challenge for the pharmaceutical industry. The aspartic proteases plasmepsins, whose inhibition leads to parasite death, are classified as targets for the design of potent drugs. Combinatorial synthesis is currently being used to generate inhibitor libraries for these enzymes, and together with computational methodologies has been demonstrated capable of selecting lead compounds. The high structural flexibility of plasmepsins, revealed by their X-ray structures and molecular dynamics simulations, makes the prediction of putative binding modes, and therefore the use of common computational tools like docking and free-energy calculations, even more complicated. In this review, we revisit the computational strategies utilized so far for structure-function relationship studies concerning the plasmepsin family, with special focus on the recent advances in the improvement of the linear interaction estimation (LIE) method, which is one of the most successful methodologies in the evaluation of plasmepsin-inhibitor binding affinity.
Cortina, Belén; Torregrosa, Germán; Castelló-Ruiz, María; Burguete, María C; Moscardó, Antonio; Latorre, Ana; Salom, Juan B; Vallés, Juana; Santos, María T; Alborch, Enrique
2013-05-15
We tested the hypothesis that the phytoestrogen genistein protects the brain against ischemic stroke by improving the circulatory function in terms of reduced production of thromboxane A2 and leukocyte-platelet aggregates, and of preserved vascular reactivity. Ischemia-reperfusion (90 min-3 days, intraluminal filament) was induced in male Wistar rats, and functional score and cerebral infarct volume were the end points examined. Genistein (10 mg/kg/day) or vehicle (β-cyclodextrin) was administered at 30 min after ischemia or sham-operation. Production of thromboxane A2 and leukocyte-platelet aggregates, as well as reactivity of carotid artery to U-46619 (thromboxane A2 analogue) and to platelet releasate was measured. At 3 days post-ischemia, both improvement in the functional examination and reduction in the total infarct volume were shown in the ischemic genistein-treated group. Genistein significantly reverted both the increased thromboxane A2 concentration and the increased leukocyte-platelet aggregates production found in samples from the ischemic vehicle-treated group. Both U-46619 and platelet releasate elicited contractions of the carotid artery, which were significantly lower in the ischemic vehicle-treated group. Genistein significantly restored both the decreased U-46619- and the decreased platelet releasate-elicited contractile responses. In conclusion, genistein protects the brain against an ischemia-reperfusion challenge, at least in part, by its beneficial effects on the circulatory function.
The goal of this study is to estimate an unbiased exposure effect of environmental tobacco smoke (ETS) exposure on children's continuous lung function. A majority of the evidence from health studies suggests that ETS exposure in early life contributes significantly to childhood ...
ERIC Educational Resources Information Center
Bindman, Samantha W.; Pomerantz, Eva M.; Roisman, Glenn I.
2015-01-01
This study evaluated whether the positive association between early autonomy-supportive parenting and children's subsequent achievement is mediated by children's executive functions. Using observations of mothers' parenting from the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care and Youth Development (N…
Hasbrouck, W.P.
1983-01-01
Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.
Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A
2016-12-31
It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook randomness of genotypes. In this study, a family of hierarchical Bayesian models to perform across-population genome-wide prediction, modeling genotypes as random variables and allowing population-specific effects for each marker, was developed. Models shared a common structure and differed in the priors used and the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information that not only permitted accounting for heterogeneity of allelic frequencies, but also allowed individuals with missing genotypes at some or all loci to be included without the need for previous imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. Theoretical and computational issues, along with possible applications, extensions and refinements, were discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies to assess the performance of the models proposed here and also to compare them with conventional models used in genome-wide prediction are needed.
Response functions for computing absorbed dose to skeletal tissues from photon irradiation.
Eckerman, K F; Bolch, W E; Zankl, M; Petoussi-Henss, N
2007-01-01
The calculation of absorbed dose in skeletal tissues at radiogenic risk has been a difficult problem because the relevant structures cannot be represented in conventional geometric terms nor can they be visualised in the tomographic image data used to define the computational models of the human body. The active marrow, the tissue of concern in leukaemia induction, is present within the spongiosa regions of trabecular bone, whereas the osteoprogenitor cells at risk for bone cancer induction are considered to be within the soft tissues adjacent to the mineral surfaces. The International Commission on Radiological Protection (ICRP) recommends averaging the absorbed energy over the active marrow within the spongiosa and over the soft tissues within 10 microm of the mineral surface for leukaemia and bone cancer induction, respectively. In its forthcoming recommendation, it is expected that the latter guidance will be changed to include soft tissues within 50 microm of the mineral surfaces. To address the computational problems, the skeleton of the proposed ICRP reference computational phantom has been subdivided to identify those voxels associated with cortical shell, spongiosa and the medullary cavity of the long bones. It is further proposed that the Monte Carlo calculations with these phantoms compute the energy deposition in the skeletal target tissues as the product of the particle fluence in the skeletal subdivisions and applicable fluence-to-dose-response functions. This paper outlines the development of such response functions for photons.
Centroids computation and point spread function analysis for reverse Hartmann test
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Linqqin; Zhao, Yuejin
2017-03-01
This paper studies the point spread function (PSF) and centroid computation methods to improve the performance of the reverse Hartmann test (RHT) in poor conditions, such as defocus, background noise, etc. In the RHT, we evaluate the PSF in terms of the Lommel function and classify it as a circle of confusion (CoC) instead of an Airy disk. Approximation of a CoC spot with a Gaussian or super-Gaussian profile to identify its centroid forms the basis of the centroiding algorithm. The approach is also effective for fringe patterns, where a fringe segment is treated as a 'spot' of infinite diameter in one direction. RHT experiments are conducted to test the fitting effects and centroiding performance of the methods with Gaussian and super-Gaussian approximations. The fitting results show that the super-Gaussian obtains more reasonable fitting effects. That the super-Gaussian orders are only slightly larger than 2 means that the CoC has a profile similar to the Airy disk in certain conditions. The results of centroid computation demonstrate that as the signal-to-noise ratio (SNR) falls, the centroid computed by the super-Gaussian method shows a smaller shift, and the shift grows at a slower pace. This implies that the super-Gaussian has better anti-noise capability in centroid computation.
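The centroiding idea can be sketched minimally: build a synthetic super-Gaussian spot and locate it with an intensity-weighted centroid. The profile parameters, window size, and function names are our choices (no actual profile fitting is performed here, and the paper's full method fits the order of the super-Gaussian as well):

```python
import math

def super_gaussian(x, y, x0, y0, sigma, p):
    """Super-Gaussian spot profile exp(-(r^2 / (2 sigma^2))^p); p = 1 is
    an ordinary Gaussian, larger p gives the flatter-topped circle of
    confusion discussed in the paper."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return math.exp(-((r2 / (2.0 * sigma ** 2)) ** p))

def centroid(image):
    """Intensity-weighted centroid of a 2-D list of pixel values."""
    s = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            s += v
            sx += v * x
            sy += v * y
    return sx / s, sy / s

# Synthetic 21x21 spot centred at (10.3, 9.7); parameter values are illustrative.
img = [[super_gaussian(x, y, 10.3, 9.7, 3.0, 1.5) for x in range(21)]
       for y in range(21)]
cx, cy = centroid(img)
print(round(cx, 2), round(cy, 2))   # close to the true centre (10.3, 9.7)
```

Adding background noise to img and re-running centroid() is a quick way to reproduce, qualitatively, the centroid-shift-versus-SNR behaviour the paper measures.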
Model Accounting Program. Adopters Guide.
ERIC Educational Resources Information Center
Beaverton School District 48, OR.
The accounting cluster demonstration project conducted at Aloha High School in the Beaverton, Oregon, school district developed a model curriculum for high school accounting. The curriculum is based on interviews with professionals in the accounting field and emphasizes the use of computers. It is suitable for use with special needs students as…
NASA Astrophysics Data System (ADS)
Roccatano, Danilo
2015-07-01
The monooxygenase P450 BM-3 is a NADPH-dependent fatty acid hydroxylase enzyme isolated from the soil bacterium Bacillus megaterium. As a pivotal member of the cytochrome P450 superfamily, it has been intensively studied for the comprehension of structure-dynamics-function relationships in this class of enzymes. In addition, due to its peculiar properties, it is also a promising enzyme for biochemical and biomedical applications. However, despite these efforts, a full understanding of the enzyme's structure and dynamics has not yet been achieved. Computational studies, particularly molecular dynamics (MD) simulations, have contributed importantly to this endeavor by providing new insights at an atomic level regarding the correlations between structure, dynamics, and function of the protein. This topical review summarizes computational studies based on MD simulations of the cytochrome P450 BM-3 and gives an outlook on future directions.
Two algorithms to compute projected correlation functions in molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Carof, Antoine; Vuilleumier, Rodolphe; Rotenberg, Benjamin
2014-03-01
An explicit derivation of the Mori-Zwanzig orthogonal dynamics of observables is presented and leads to two practical algorithms to compute exactly projected observables (e.g., random noise) and projected correlation function (e.g., memory kernel) from a molecular dynamics trajectory. The algorithms are then applied to study the diffusive dynamics of a tagged particle in a Lennard-Jones fluid, the properties of the associated random noise, and a decomposition of the corresponding memory kernel.
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Pade expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421 published in the Communications of ACM by H. Kuki is the best program either for individual application or for the inclusion in subroutine libraries.
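As a minimal illustration of one family of methods such surveys cover, here is a Lanczos-type approximation for the gamma function of a complex argument (using a standard published coefficient set for g = 7; this is our sketch, not Kuki's Algorithm 421):

```python
import cmath
import math

# Standard Lanczos coefficients for g = 7, n = 9 terms.
_G = 7
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma function for a complex argument via the Lanczos
    approximation, using the reflection formula for Re(z) < 0.5."""
    if z.real < 0.5:
        # Reflection: Gamma(z) Gamma(1-z) = pi / sin(pi z)
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0]
    for i in range(1, _G + 2):
        x += _C[i] / (z + i)
    t = z + _G + 0.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

print(cgamma(5 + 0j))   # ~24, matching math.gamma(5)
```

Comparing such an approximation against math.gamma on the real axis, and against known special values like Gamma(1/2) = sqrt(pi), is the kind of accuracy check the survey's comparison rests on.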
NASA Astrophysics Data System (ADS)
Cauda, Franco; Costa, Tommaso; Tamietto, Marco
2014-09-01
Recent evidence in cognitive neuroscience lends support to the idea that network models of brain architecture provide privileged access to understanding the relation between brain organization and cognitive processes [1]. The core perspective holds that cognitive processes depend on interactions among distributed neuronal populations and brain structures, and that the impact of a given region on behavior largely depends on its pattern of anatomical and functional connectivity [2,3].
NASA Astrophysics Data System (ADS)
Aspon, Siti Zulaiha; Murid, Ali Hassan Mohamed; Rahmat, Hamisan
2014-07-01
This research concerns the computation of Green's functions on unbounded doubly connected regions using the boundary integral equation method. The method depends on solving an exterior Dirichlet problem, which in turn is solved via a uniquely solvable Fredholm integral equation on the boundary of the region. The kernel of this integral equation is the generalized Neumann kernel. The integral equation is discretized into a linear system using the Nyström method with the trapezoidal rule, and the linear system is then solved by Gaussian elimination. Mathematica plots of Green's functions for several test regions are also presented.
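The same discretization pipeline (Nyström method, trapezoidal rule, Gaussian elimination) can be illustrated on a simpler second-kind Fredholm equation with a known solution; the cosine kernel below is a textbook stand-in, not the generalized Neumann kernel of the paper.

```python
import numpy as np

# Nystrom discretization of the second-kind Fredholm equation
#   u(x) - lam * int_0^{2pi} k(x, t) u(t) dt = f(x)
# with the trapezoidal rule (spectrally accurate for periodic kernels),
# solved by Gaussian elimination.  Test kernel k(x,t) = cos(x - t):
# since int_0^{2pi} cos(x-t) cos(t) dt = pi*cos(x), the choice
# f(x) = (1 - lam*pi) cos(x) has the exact solution u(x) = cos(x).
n = 64
lam = 0.1
x = 2*np.pi*np.arange(n)/n            # periodic trapezoidal nodes
w = 2*np.pi/n                         # equal trapezoidal weights
K = np.cos(x[:, None] - x[None, :])   # kernel matrix k(x_i, t_j)
A = np.eye(n) - lam*w*K               # Nystrom system (I - lam*W*K) u = f
f = (1 - lam*np.pi)*np.cos(x)
u = np.linalg.solve(A, f)             # Gaussian elimination
```

Because the periodic trapezoidal rule integrates trigonometric polynomials of low degree exactly, the computed u matches cos(x) to machine precision.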
Methods, systems, and computer program products for implementing function-parallel network firewalls
Fulp, Errin W [Winston-Salem, NC]; Farley, Ryan J [Winston-Salem, NC]
2011-10-11
Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
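A toy sketch of the idea: the ordered rule set is split across two nodes, each node reports the first of *its* rules that matches, and an arbiter applies the action of the lowest-indexed match overall, preserving the first-match-wins semantics of the full rule set. The rule fields and actions here are hypothetical, not taken from the patent.

```python
# Ordered rule set: (global index, match conditions, action).
RULES = [
    (0, {"proto": "tcp", "port": 22}, "accept"),
    (1, {"proto": "tcp", "port": 23}, "drop"),
    (2, {"proto": "udp", "port": 53}, "accept"),
    (3, {},                           "drop"),   # default catch-all rule
]

def first_match(rules, packet):
    """Return (global index, action) of the first matching rule, or None."""
    for idx, cond, action in rules:
        if all(packet.get(k) == v for k, v in cond.items()):
            return idx, action
    return None

def function_parallel_filter(packet):
    """Each node filters with a portion of the rule set; the arbiter
    picks the match with the lowest global index."""
    node_a, node_b = RULES[:2], RULES[2:]        # partition of the rule set
    hits = [h for h in (first_match(node_a, packet),
                        first_match(node_b, packet)) if h is not None]
    return min(hits)[1]                          # lowest global index wins
```

Carrying the global rule index through each node is what keeps the parallel decision identical to sequential evaluation of the whole rule set.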
Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min
2016-12-20
Computer-controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpiece, the TIF exhibits nonlinear removal behavior, which causes a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function compensates for the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.
From machine and tape to structure and function: formulation of a reflexively computing system.
Salzberg, Chris
2006-01-01
The relationship between structure and function is explored via a system of labeled directed graph structures upon which a single elementary read/write rule is applied locally. Boundaries between static (information-carrying) and active (information-processing) objects, imposed by mandate of the rules or physics in earlier models, emerge instead as a result of a structure-function dynamic that is reflexive: objects may operate directly on their own structure. A representation of an arbitrary Turing machine is reproduced in terms of structural constraints by means of a simple mapping from tape squares and machine states to a uniform medium of nodes and links, establishing computation universality. Exploiting flexibility of the formulation, examples of other unconventional "self-computing" structures are demonstrated. A straightforward representation of a kinematic machine system based on the model devised by Laing is also reproduced in detail. Implications of the findings are discussed in terms of their relation to other formal models of computation and construction. It is argued that reflexivity of the structure-function relationship is a critical informational dynamic in biochemical systems, overlooked in previous models but well captured by the proposed formulation.
Computing wave functions of nonlinear Schroedinger equations: A time-independent approach
Chang, S.-L.; Chien, C.-S.; Jeng, B.-W.
2007-09-10
We present a novel algorithm for computing the ground-state and excited-state solutions of M-coupled nonlinear Schroedinger equations (MCNLS). First, we transform the MCNLS into their stationary-state counterparts using separation of variables. The energy levels of a quantum particle governed by the Schroedinger eigenvalue problem (SEP) are used as initial guesses for computing their counterparts for the nonlinear Schroedinger equations (NLS). We discretize the system via centered difference approximations. A predictor-corrector continuation method is exploited as an iterative method to trace solution curves and surfaces of the MCNLS, where the chemical potentials are treated as continuation parameters. The wave functions can be easily obtained once the solution manifolds are numerically traced. The proposed algorithm has the advantage that it is unnecessary to discretize or integrate the partial derivatives of the wave functions. Moreover, the wave functions can be computed for any time scale. Numerical results on the ground-state and excited-state solutions are reported, where physical properties of the system such as isotropic and nonisotropic trapping potentials, mass conservation constraints, and strong and weak repulsive interactions are considered in our numerical experiments.
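The first step of such a scheme, the linear SEP initial guess with centered differences, can be sketched as follows; a 1-D harmonic trapping potential V(x) = x² is assumed here purely for illustration, since its exact levels (1, 3, 5, ...) make the discretization easy to verify.

```python
import numpy as np

# Centered-difference discretization of the linear Schrodinger
# eigenvalue problem  -u'' + V(x) u = lambda u  on (-L, L) with
# homogeneous Dirichlet boundary conditions; the lowest eigenpair
# serves as the initial guess for continuation in the chemical
# potential.  Harmonic trap V(x) = x^2; exact ground level is 1.
n, L = 400, 10.0
h = 2*L/(n + 1)
x = -L + h*np.arange(1, n + 1)        # interior grid points
main = 2.0/h**2 + x**2                # diagonal of -u''/h^2 + V(x)
off = -np.ones(n - 1)/h**2            # off-diagonals of -u''
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lam, U = np.linalg.eigh(H)            # sorted eigenvalues/eigenvectors
ground_energy, ground_state = lam[0], U[:, 0]
```

With this grid the lowest two levels reproduce the exact values 1 and 3 to within the O(h²) truncation error of the centered difference.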
NASA Astrophysics Data System (ADS)
Betancourt-Benítez, Ricardo; Ning, Ruola; Liu, Shaohua
2009-11-01
Several factors in the scanning process, image reconstruction, and geometry of an imaging system influence the spatial resolution of a computed tomography imaging system. In this work, the spatial resolution of a state-of-the-art flat-panel-detector-based cone beam computed tomography breast imaging system is evaluated. First, scattering, exposure level, voltage, voxel size, pixel size, back-projection filter, reconstruction algorithm, and number of projections are varied to evaluate their effect on spatial resolution. Second, its uniformity throughout the whole field of view is evaluated as a function of radius along the x-y plane and as a function of z at the center of rotation. The results suggest that the modulation transfer function is mainly influenced by the pixel size, back-projection filter, and number of projections used. The evaluation of spatial resolution throughout the field of view also suggests that this imaging system has a 3-D quasi-isotropic spatial resolution in a cylindrical region of radius 40 mm centered on the axis of rotation. Overall, this study provides a useful tool for determining the optimal parameters for the best possible use of this cone beam computed tomography breast imaging system.
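A common way to obtain the modulation transfer function mentioned above is the edge method: differentiate the edge-spread function (ESF) to get the line-spread function (LSF), then take the normalized magnitude of its Fourier transform. The sketch below uses a synthetic Gaussian-blurred edge (whose analytic MTF is exp(-2(πσf)²)) and is a generic illustration, not the measurement protocol of the system evaluated in this study.

```python
import math
import numpy as np

dx = 0.01                                  # sample spacing (mm, assumed)
x = np.arange(-5.0, 5.0, dx)
sigma = 0.2                                # edge blur (mm, synthetic)
# Gaussian-blurred edge: ESF(x) = 0.5*(1 + erf(x / (sigma*sqrt(2))))
esf = np.array([0.5*(1 + math.erf(v/(sigma*math.sqrt(2)))) for v in x])
lsf = np.gradient(esf, dx)                 # line-spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                              # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(len(lsf), d=dx)    # spatial frequencies (1/mm)
```

At 1 cycle/mm the numerical MTF agrees with the analytic value exp(-2(π·0.2·1)²) ≈ 0.45, confirming the discretization.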
Antibodies: Computer-Aided Prediction of Structure and Design of Function.
Sevy, Alexander M; Meiler, Jens
2014-12-01
With the advent of high-throughput sequencing, and the increased availability of experimental structures of antibodies and antibody-antigen complexes, comes the improvement of computational approaches to predict the structure and design the function of antibodies and antibody-antigen complexes. While antibodies pose formidable challenges for protein structure prediction and design due to their large size and highly flexible loops in the complementarity-determining regions, they also offer exciting opportunities: the central importance of antibodies for human health results in a wealth of structural and sequence information that, as a knowledge base, can drive the modeling algorithms by limiting the conformational and sequence search space to likely regions of success. Further, efficient experimental platforms exist to test predicted antibody structure or designed antibody function, thereby leading to an iterative feedback loop between computation and experiment. We briefly review the history of computer-aided prediction of structure and design of function in the antibody field before we focus on recent methodological developments and the most exciting application examples.
Moyers, M. F.
2014-06-15
Purpose: Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty affecting the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. Methods: A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values, but that the XCTNs for high density/high atomic number materials vary with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols, resulting in 14 derived conversion functions which were compared with the conversion functions used at the local facilities. Results: For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as much as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. Conclusions: The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote
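In practice such a conversion function is typically stored as calibration points and applied by piecewise-linear interpolation. The sketch below illustrates this; the calibration points are invented for illustration and are not a real facility's curve.

```python
import numpy as np

# Hypothetical piecewise-linear conversion from x-ray CT numbers
# (XCTN, in HU) to relative linear stopping powers (RLSP).  Real
# facilities derive these points from phantom scans as described above.
XCTN_PTS = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0, 3000.0])
RLSP_PTS = np.array([  0.001,   0.30, 1.0, 1.15,  1.70,   2.40])

def xctn_to_rlsp(xctn):
    """Convert CT numbers to stopping powers by linear interpolation;
    np.interp clamps to the end values outside the calibrated range."""
    return np.interp(xctn, XCTN_PTS, RLSP_PTS)
```

Water (0 HU) maps to RLSP 1.0 by construction, and values beyond the last calibration point are clamped rather than extrapolated.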
Ribeiro, M.
2015-06-21
Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within the LDA-1/2 approach to account for excited-state properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy compared with other theoretical results, such as GW, and with experiment as well, but at a low computational cost.
Transport activity and presence of ClC-7/Ostm1 complex account for different cellular functions.
Weinert, Stefanie; Jabs, Sabrina; Hohensee, Svea; Chan, Wing Lee; Kornak, Uwe; Jentsch, Thomas J
2014-07-01
Loss of the lysosomal ClC-7/Ostm1 2Cl(-)/H(+) exchanger causes lysosomal storage disease and osteopetrosis in humans and additionally changes fur colour in mice. Its conversion into a Cl(-) conductance in Clcn7(unc/unc) mice entails similarly severe lysosomal storage, but less severe osteopetrosis and no change in fur colour. To elucidate the basis for these phenotypical differences, we generated Clcn7(td/td) mice expressing an ion transport-deficient mutant. Their osteopetrosis was as severe as in Clcn7(-/-) mice, suggesting that the electric shunt provided by ClC-7(unc) can partially rescue osteoclast function. The normal coat colour of Clcn7(td/td) mice and their less severe neurodegeneration suggested that the ClC-7 protein, even when lacking measurable ion transport activity, is sufficient for hair pigmentation and that the conductance of ClC-7(unc) is harmful for neurons. Our in vivo structure-function analysis of ClC-7 reveals that both protein-protein interactions and ion transport must be considered in the pathogenesis of ClC-7-related diseases.
ERIC Educational Resources Information Center
Brown, R. W.; And Others
The computerized Painless Accountability System is a performance objective system from which instructional programs are developed. Three main simplified behavioral response levels characterize this system: (1) cognitive, (2) psychomotor, and (3) affective domains. Each of these objectives is classified by one of 16 descriptors. The second major…
ERIC Educational Resources Information Center
Chieppo, Charles D.; Gass, James T.
2009-01-01
This article reports that special interest groups opposed to charter schools and high-stakes testing have hijacked Massachusetts's once-independent board of education and stand poised to water down the Massachusetts Comprehensive Assessment System (MCAS) tests and the accountability system they support. President Barack Obama and Massachusetts…
Brennan, Douglas; Schubert, Leah; Diot, Quentin; Castillo, Richard; Castillo, Edward; Guerrero, Thomas; Martel, Mary K.; Linderman, Derek; Gaspar, Laurie E.; Miften, Moyed; Kavanagh, Brian D.; Vinogradskiy, Yevgeniy
2015-06-01
Purpose: A new form of functional imaging has been proposed in the form of 4-dimensional computed tomography (4DCT) ventilation. Because 4DCTs are acquired as part of routine care for lung cancer patients, calculating ventilation maps from 4DCTs provides spatial lung function information without added dosimetric or monetary cost to the patient. Before 4DCT-ventilation is implemented, it needs to be clinically validated. Pulmonary function tests (PFTs) provide a clinically established way of evaluating lung function. The purpose of our work was to perform a clinical validation by comparing 4DCT-ventilation metrics with PFT data. Methods and Materials: Ninety-eight lung cancer patients with pretreatment 4DCT and PFT data were included in the study. Pulmonary function test metrics used to diagnose obstructive lung disease were recorded: forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity. Four-dimensional CT data sets and spatial registration were used to compute 4DCT-ventilation images using a density change-based and a Jacobian-based model. The ventilation maps were reduced to single metrics intended to reflect the degree of ventilation obstruction. Specifically, we computed the coefficient of variation (SD/mean) and the ventilation V20 (volume of lung with ≤20% ventilation), and correlated the ventilation metrics with PFT data. Regression analysis was used to determine whether 4DCT-ventilation data could predict for normal versus abnormal lung function using PFT thresholds. Results: Correlation coefficients comparing 4DCT-ventilation with PFT data ranged from 0.63 to 0.72, with the best agreement between FEV1 and the coefficient of variation. Four-dimensional CT ventilation metrics were able to significantly delineate between clinically normal versus abnormal PFT results. Conclusions: Validation of 4DCT ventilation with clinically relevant metrics is essential. We demonstrate good global agreement between PFTs and 4DCT-ventilation, indicating that 4DCT
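The two summary metrics used above are straightforward to compute once a ventilation map exists. The sketch below applies them to synthetic voxel data (a gamma-distributed array standing in for a real 4DCT-ventilation map), with ventilation expressed as percent of the maximum voxel; the data and the percent-of-maximum convention are assumptions for illustration.

```python
import numpy as np

# Synthetic stand-in for a 3-D ventilation map.
rng = np.random.default_rng(0)
ventilation = rng.gamma(shape=2.0, scale=1.0, size=(50, 50, 50))

# Coefficient of variation: SD/mean over all lung voxels.
cov = ventilation.std() / ventilation.mean()

# Ventilation V20: fraction of lung voxels with <= 20% ventilation,
# here taken relative to the maximum voxel value.
percent = 100.0 * ventilation / ventilation.max()
v20 = np.mean(percent <= 20.0)
```

For a gamma(2, 1) field the coefficient of variation is 1/√2 ≈ 0.707, which the sample estimate reproduces closely at this voxel count.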
Computer-Based Cognitive Training for Executive Functions after Stroke: A Systematic Review
van de Ven, Renate M.; Murre, Jaap M. J.; Veltman, Dick J.; Schmand, Ben A.
2016-01-01
Background: Stroke commonly results in cognitive impairments in working memory, attention, and executive function, which may be restored with appropriate training programs. Our aim was to systematically review the evidence for computer-based cognitive training of executive dysfunctions. Methods: Studies were included if they concerned adults who had suffered stroke or other types of acquired brain injury, if the intervention was computer training of executive functions, and if the outcome was related to executive functioning. We searched in MEDLINE, PsycINFO, Web of Science, and The Cochrane Library. Study quality was evaluated based on the CONSORT Statement. Treatment effect was evaluated based on differences compared to pre-treatment and/or to a control group. Results: Twenty studies were included. Two were randomized controlled trials that used an active control group. The other studies included multiple baselines, a passive control group, or were uncontrolled. Improvements were observed in tasks similar to the training (near transfer) and in tasks dissimilar to the training (far transfer). However, these effects were not larger in trained than in active control groups. Two studies evaluated neural effects and found changes in both functional and structural connectivity. Most studies suffered from methodological limitations (e.g., lack of an active control group and no adjustment for multiple testing) hampering differentiation of training effects from spontaneous recovery, retest effects, and placebo effects. Conclusions: The positive findings of most studies, including neural changes, warrant continuation of research in this field, but only if its methodological limitations are addressed. PMID:27148007
Using computational fluid dynamics to test functional and ecological hypotheses in fossil taxa
NASA Astrophysics Data System (ADS)
Rahman, Imran
2016-04-01
Reconstructing how ancient organisms moved and fed is a major focus of study in palaeontology. Traditionally, this has been hampered by a lack of objective data on the functional morphology of extinct species, especially those without a clear modern analogue. However, cutting-edge techniques for characterizing specimens digitally and in three dimensions, coupled with state-of-the-art computer models, now provide a robust framework for testing functional and ecological hypotheses even in problematic fossil taxa. One such approach is computational fluid dynamics (CFD), a method for simulating fluid flows around objects that has primarily been applied to complex engineering-design problems. Here, I will present three case studies of CFD applied to fossil taxa, spanning a range of specimen sizes, taxonomic groups and geological ages. First, I will show how CFD enabled a rigorous test of hypothesized feeding modes in an enigmatic Ediacaran organism with three-fold symmetry, revealing previously unappreciated complexity of Precambrian ecosystems. Second, I will show how CFD was used to evaluate hydrodynamic performance and feeding in Cambrian stem-group echinoderms, shedding light on the probable feeding strategy of the last common ancestor of all deuterostomes. Third, I will show how CFD allowed us to explore the link between form and function in Mesozoic ichthyosaurs. These case studies serve to demonstrate the enormous potential of CFD for addressing long-standing hypotheses for a variety of fossil taxa, opening up an exciting new avenue in palaeontological studies of functional morphology.
Filippov, A. V.; Dyatko, N. A.; Kostenko, A. S.
2014-11-15
The charging of dust particles in weakly ionized inert gases at atmospheric pressure has been investigated. The conditions under which the gas is ionized by an external source, a beam of fast electrons, are considered. The electron energy distribution function in argon, krypton, and xenon has been calculated for three rates of gas ionization by fast electrons: 10¹³, 10¹⁴, and 10¹⁵ cm⁻¹. A model of dust particle charging with allowance for the nonlocal formation of the electron energy distribution function in the region of strong plasma quasi-neutrality violation around the dust particle is described. The nonlocality is taken into account in an approximation where the distribution function is a function of only the total electron energy. Comparative calculations of the dust particle charge with and without allowance for the nonlocality of the electron energy distribution function have been performed. Allowance for the nonlocality is shown to lead to a noticeable increase in the dust particle charge due to the influence of the group of hot electrons from the tail of the distribution function. It has been established that the screening constant virtually coincides with the smallest screening constant determined according to the asymptotic theory of screening with the electron transport and recombination coefficients in an unperturbed plasma.
NASA Astrophysics Data System (ADS)
Perlov, A.; Chadov, S.; Ebert, H.
2003-12-01
An approach for calculating the optical and magneto-optical properties of solids based on the one-particle Green function is introduced within the framework of the linear muffin-tin orbital method. The approach retains all the advantages of the more accurate Korringa-Kohn-Rostoker scheme, such as the possibility of accounting for many-body effects in terms of a nonlocal, energy-dependent self-energy, but is numerically much more efficient. Application of various proposed model self-energies to the calculation of the optical properties of bulk Ni and Fe demonstrates the great potential of the new scheme.
Quantitative computed tomography assessment of lung structure and function in pulmonary emphysema.
Madani, A; Keyzer, C; Gevenois, P A
2001-10-01
Accurate diagnosis and quantification of pulmonary emphysema during life is important to understand the natural history of the disease, to assess the extent of the disease, and to evaluate and follow up therapeutic interventions. Since pulmonary emphysema is defined through pathological criteria, new methods of diagnosis and quantification should be validated by comparison against histological references. Recent studies have addressed the capability of computed tomography (CT) to quantify pulmonary emphysema accurately. The studies reviewed in this article have been based on CT scans obtained after deep inspiration or expiration, on subjective visual grading, and on objective measurements of attenuation values. Dedicated software was used for this purpose, providing numerical data from both two- and three-dimensional approaches and comparing CT data with pulmonary function tests. More recently, fractal and textural analyses were applied to computed tomography scans to assess the presence, the extent, and the types of emphysema. Quantitative computed tomography has already been used in patient selection for surgical treatment of pulmonary emphysema and in pharmacotherapeutical trials. However, despite numerous and extensive studies, this technique has not yet been standardized, and important questions about how best to use computed tomography for the quantification of pulmonary emphysema remain unsolved.
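One widely used objective attenuation measurement of the kind reviewed here is the density mask: the percentage of lung voxels below a fixed threshold (commonly -950 HU on inspiratory CT). The sketch below uses synthetic HU values and is a generic illustration, not the protocol of any particular reviewed study.

```python
import numpy as np

def emphysema_index(lung_hu, threshold=-950.0):
    """Density-mask index: percentage of lung voxels whose attenuation
    falls below `threshold` HU (synthetic illustration)."""
    lung_hu = np.asarray(lung_hu, dtype=float)
    return 100.0 * np.mean(lung_hu < threshold)

# Synthetic lung: 1000 normal-attenuation voxels plus 250 voxels of
# emphysema-like low attenuation.
normal = np.full(1000, -850.0)
bullous = np.full(250, -980.0)
index = emphysema_index(np.concatenate([normal, bullous]))
```

Here 250 of 1250 voxels fall below the threshold, so the index is exactly 20%.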
Stable computations with flat radial basis functions using vector-valued rational approximations
NASA Astrophysics Data System (ADS)
Wright, Grady B.; Fornberg, Bengt
2017-02-01
One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
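The ill-conditioning that motivates RBF-RA is easy to exhibit numerically: the Gaussian-RBF interpolation matrix becomes numerically singular as the shape parameter ε shrinks toward the flat limit, even though the underlying interpolant converges. This is a generic demonstration of the RBF-Direct problem, not an implementation of the authors' RBF-RA algorithm.

```python
import numpy as np

# Gaussian RBF interpolation matrix A_ij = exp(-(eps * |x_i - x_j|)^2)
# on 20 equispaced nodes in [0, 1].
x = np.linspace(0.0, 1.0, 20)
r2 = (x[:, None] - x[None, :])**2     # squared pairwise distances

def cond_number(eps):
    """2-norm condition number of the RBF-Direct system for shape eps."""
    return np.linalg.cond(np.exp(-(eps**2) * r2))
```

For a sharply peaked kernel (large ε) the matrix is nearly diagonal and well conditioned; in the flat regime (small ε) all columns approach the all-ones vector and the condition number blows up past the reach of double precision.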
Introducing ONETEP: linear-scaling density functional simulations on parallel computers.
Skylaris, Chris-Kriton; Haynes, Peter D; Mostofi, Arash A; Payne, Mike C
2005-02-22
We present ONETEP (order-N electronic total energy package), a density functional program for parallel computers whose computational cost scales linearly with the number of atoms and the number of processors. ONETEP is based on our reformulation of the plane wave pseudopotential method which exploits the electronic localization that is inherent in systems with a nonvanishing band gap. We summarize the theoretical developments that enable the direct optimization of strictly localized quantities expressed in terms of a delocalized plane wave basis. These same localized quantities lead us to a physical way of dividing the computational effort among many processors to allow calculations to be performed efficiently on parallel supercomputers. We show with examples that ONETEP achieves excellent speedups with increasing numbers of processors and confirm that the time taken by ONETEP as a function of increasing number of atoms for a given number of processors is indeed linear. What distinguishes our approach is that the localization is achieved in a controlled and mathematically consistent manner so that ONETEP obtains the same accuracy as conventional cubic-scaling plane wave approaches and offers fast and stable convergence. We expect that calculations with ONETEP have the potential to provide quantitative theoretical predictions for problems involving thousands of atoms such as those often encountered in nanoscience and biophysics.
CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients
NASA Technical Reports Server (NTRS)
Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.
2001-01-01
For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
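Evaluating thermodynamic functions from least-squares coefficients reduces to polynomial evaluation. The sketch below uses the common NASA 7-coefficient form (the current NASA Glenn files actually use a 9-coefficient variant), and the coefficients are illustrative numbers for a fictitious constant-Cp species, not values from the NASA Glenn database.

```python
# NASA 7-coefficient polynomial form:
#   Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
#   H/RT = a1 + a2*T/2 + a3*T^2/3 + a4*T^3/4 + a5*T^4/5 + a6/T
# Hypothetical coefficients: a constant Cp/R = 3.5 (rigid diatomic).
A = (3.5, 0.0, 0.0, 0.0, 0.0, -1000.0, 4.0)

def cp_over_R(T, a=A):
    """Dimensionless heat capacity Cp/R at temperature T (K)."""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

def h_over_RT(T, a=A):
    """Dimensionless enthalpy H/RT at temperature T (K)."""
    return (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
            + a[4]*T**4/5 + a[5]/T)
```

A tabulation program like CAP would simply evaluate these expressions at the requested temperatures and convert to the requested energy units.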
Guido, Ciro A.; Cortona, Pietro; Adamo, Carlo
2014-03-14
We extend our previous definition of the metric Δr for electronic excitations in the framework of time-dependent density functional theory [C. A. Guido, P. Cortona, B. Mennucci, and C. Adamo, J. Chem. Theory Comput. 9, 3118 (2013)] by including a measure of the difference of electronic position variances in passing from occupied to virtual orbitals. This new definition, called Γ, permits applications in situations where the Δr index is not helpful: transitions in centrosymmetric systems and Rydberg excitations. The Γ metric is then extended by using the Natural Transition Orbitals, thus providing an intuitive picture of how locally the electron density changes during electronic transitions. Furthermore, the Γ values give insight into the performance of functionals in reproducing different types of transitions, and allow one to define a “confidence radius” for GGA and hybrid functionals.
Grossert, J Stuart; Cubero Herrera, Lisandra; Ramaley, Louis; Melanson, Jeremy E
2014-08-01
Analysis of triacylglycerols (TAGs), found as complex mixtures in living organisms, is typically accomplished using liquid chromatography, often coupled to mass spectrometry. TAGs, which are weak bases and are not protonated by electrospray ionization, are usually ionized by adduct formation with a cation, including those present in the solvent (e.g., Na(+)). There are relatively few reports on the binding of TAGs with cations or on the mechanisms by which cationized TAGs fragment. This work examines binding efficiencies, determined by mass spectrometry and computations, for the complexation of TAGs with a range of cations (Na(+), Li(+), K(+), Ag(+), NH4(+)). While most cations bind to oxygen, Ag(+) binding to unsaturation in the acid side chains is significant. The importance of dimer formation, [2TAG + M](+), was demonstrated using several different types of mass spectrometers. From breakdown curves, it became apparent that two or three acid side chains must be attached to glycerol for strong cationization. Possible mechanisms for the fragmentation of lithiated TAGs were modeled by computations on tripropionylglycerol. Viable pathways were found for losses of neutral acids and lithium salts of acids from different positions on the glycerol moiety. Novel lactone structures were proposed for the loss of a neutral acid from one position of the glycerol moiety; these were studied further using triple-stage mass spectrometry (MS(3)). The lactones can account for all the major product ions in the MS(3) spectra, both in this work and in the literature, which should allow new insights into the challenging analytical methods needed for naturally occurring TAGs.
An evolutionary computational theory of prefrontal executive function in decision-making.
Koechlin, Etienne
2014-11-05
The prefrontal cortex subserves executive control and decision-making, that is, the coordination and selection of thoughts and actions in the service of adaptive behaviour. We present here a computational theory describing the evolution of the prefrontal cortex from rodents to humans as gradually adding new inferential Bayesian capabilities for dealing with a computationally intractable decision problem: exploring and learning new behavioural strategies versus exploiting and adjusting previously learned ones through reinforcement learning (RL). We provide a principled account identifying three inferential steps optimizing this arbitration through the emergence of (i) factual reactive inferences in paralimbic prefrontal regions in rodents; (ii) factual proactive inferences in lateral prefrontal regions in primates and (iii) counterfactual reactive and proactive inferences in human frontopolar regions. The theory clarifies the integration of model-free and model-based RL through the notion of strategy creation. The theory also shows that counterfactual inferences in humans give rise to the notion of hypothesis testing, a critical reasoning ability for approximating optimal adaptive processes and presumably endowing humans with a qualitative evolutionary advantage in adaptive behaviour.
NASA Astrophysics Data System (ADS)
Grossert, J. Stuart; Herrera, Lisandra Cubero; Ramaley, Louis; Melanson, Jeremy E.
2014-08-01
Analysis of triacylglycerols (TAGs), found as complex mixtures in living organisms, is typically accomplished using liquid chromatography, often coupled to mass spectrometry. TAGs, weak bases not protonated using electrospray ionization, are usually ionized by adduct formation with a cation, including those present in the solvent (e.g., Na+). There are relatively few reports on the binding of TAGs with cations or on the mechanisms by which cationized TAGs fragment. This work examines binding efficiencies, determined by mass spectrometry and computations, for the complexation of TAGs to a range of cations (Na+, Li+, K+, Ag+, NH4+). While most cations bind to oxygen, Ag+ binding to unsaturation in the acid side chains is significant. The importance of dimer formation, [2TAG + M]+, was demonstrated using several different types of mass spectrometers. From breakdown curves, it became apparent that two or three acid side chains must be attached to glycerol for strong cationization. Possible mechanisms for fragmentation of lithiated TAGs were modeled by computations on tripropionylglycerol. Viable pathways were found for losses of neutral acids and lithium salts of acids from different positions on the glycerol moiety. Novel lactone structures were proposed for the loss of a neutral acid from one position of the glycerol moiety. These were studied further using triple-stage mass spectrometry (MS3). These lactones can account for all the major product ions in the MS3 spectra in both this work and the literature, which should allow for new insights into the challenging analytical methods needed for naturally occurring TAGs.
Moscovitch, Morris; Rosenbaum, R Shayna; Gilboa, Asaf; Addis, Donna Rose; Westmacott, Robyn; Grady, Cheryl; McAndrews, Mary Pat; Levine, Brian; Black, Sandra; Winocur, Gordon; Nadel, Lynn
2005-01-01
We review lesion and neuroimaging evidence on the role of the hippocampus, and other structures, in retention and retrieval of recent and remote memories. We examine episodic, semantic and spatial memory, and show that important distinctions exist among different types of these memories and the structures that mediate them. We argue that retention and retrieval of detailed, vivid autobiographical memories depend on the hippocampal system no matter how long ago they were acquired. Semantic memories, on the other hand, benefit from hippocampal contribution for some time before they can be retrieved independently of the hippocampus. Even semantic memories, however, can have episodic elements associated with them that continue to depend on the hippocampus. Likewise, we distinguish between experientially detailed spatial memories (akin to episodic memory) and more schematic memories (akin to semantic memory) that are sufficient for navigation but not for re-experiencing the environment in which they were acquired. Like their episodic and semantic counterparts, the former type of spatial memory is dependent on the hippocampus no matter how long ago it was acquired, whereas the latter can survive independently of the hippocampus and is represented in extra-hippocampal structures. In short, the evidence reviewed suggests strongly that the function of the hippocampus (and possibly that of related limbic structures) is to help encode, retain, and retrieve experiences, no matter how long ago the events comprising the experience occurred, and no matter whether the memories are episodic or spatial. We conclude that the evidence favours a multiple trace theory (MTT) of memory over two other models: (1) traditional consolidation models which posit that the hippocampus is a time-limited memory structure for all forms of memory; and (2) versions of cognitive map theory which posit that the hippocampus is needed for representing all forms of allocentric space in memory.
NASA Astrophysics Data System (ADS)
Gusev, M. I.
2016-10-01
We study the penalty function type methods for computing the reachable sets of nonlinear control systems with state constraints. The state constraints are given by a finite system of smooth inequalities. The proposed methods are based on removing the state constraints by replacing the original system with an auxiliary system without constraints. This auxiliary system is obtained by modifying the set of velocities of the original system around the boundary of constraints. The right-hand side of the system depends on a penalty parameter. We prove that the reachable sets of the auxiliary system approximate in the Hausdorff metric the reachable set of the original system with state constraints as the penalty parameter tends to zero (infinity) and give the estimates of the rate of convergence. The numerical algorithms for computing the reachable sets, based on Pontryagin's maximum principle, are also considered.
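As a concrete illustration of the convergence criterion used above, the Hausdorff distance between two finite point sets (e.g., grid approximations of the two reachable sets) can be computed directly. This is a generic sketch, not code from the paper:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets,
    each given as a list of coordinate tuples."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    def directed(X, Y):
        # largest distance from a point of X to its nearest point in Y
        return max(min(dist(x, y) for y in Y) for x in X)

    return max(directed(A, B), directed(B, A))
```

In the paper's setting, this distance between the auxiliary and constrained reachable sets is what tends to zero as the penalty parameter is driven to its limit.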
NASA Astrophysics Data System (ADS)
Rajavel, A.; Aditya Prasad, A.; Jeyakumar, T.
2017-02-01
The structural features of conformational isomerism in 4-isopropylbenzylidene thiophene-2-carbohydrazide (ITC) polymorphs have been investigated, revealing distinguishable strong N-H⋯O and weak C-H⋯S hydrogen bond interactions. The single crystals were grown at constant temperature and characterized by density functional theory (DFT) computations using the B3LYP method with the 3-21G basis set. The conformational isomers of ITC were compared and spectroscopically characterized by FT-IR and Raman spectroscopy. The bulk phases were studied by powder X-ray diffraction. The external morphology of ITC was examined by scanning and transmission electron microscopy. The various types of intermolecular interactions in the two polymorphic forms were compared and quantified via fingerprint and Hirshfeld surface analysis. DFT computations were also used to obtain the molecular electrostatic potential, HOMO-LUMO energies, Mulliken atomic charges, and the electron density of states.
ERIC Educational Resources Information Center
Zahner, William; Moschkovich, Judit
2010-01-01
Students often voice computations during group discussions of mathematics problems. Yet, this type of private speech has received little attention from mathematics educators or researchers. In this article, we use excerpts from middle school students' group mathematical discussions to illustrate and describe "computational private…
Educational Accounting Procedures.
ERIC Educational Resources Information Center
Tidwell, Sam B.
This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing,…
NASA Astrophysics Data System (ADS)
Lei, Weiwei; Li, Kai
2016-12-01
There are four recursive algorithms used in the computation of the fully normalized associated Legendre functions (FNALFs): the standard forward column algorithm, the standard forward row algorithm, the recursive algorithm between every other degree, and the Belikov algorithm. These algorithms were evaluated in terms of their first relative numerical accuracy, second relative numerical accuracy, and computation speed and efficiency. The results show that when the degree n reaches 3000, both the recursive algorithm between every other degree and the Belikov algorithm remain applicable for | cos θ | ∈ [0, 1], with the latter offering better second relative numerical accuracy than the former at a slower computation speed. Within degree n of 1900, the standard forward column algorithm, the recursive algorithm between every other degree, and the Belikov algorithm are all applicable for | cos θ | ∈ [0, 1], and the standard forward column algorithm has the highest computation speed. The standard forward column algorithm's range of applicability decreases as the degree increases beyond 1900; however, it remains applicable within a minute range when | cos θ | is approximately equal to 1. The standard forward row algorithm has the smallest range of applicability: it is only applicable within degree n of 100 for | cos θ | ∈ [0, 1], and its range of applicability decreases rapidly when the degree is greater than 100. These results are expected to help researchers choose the best algorithm for computing the FNALFs.
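For reference, the standard forward column recursion named above can be sketched in a few lines. This is a plain double-precision illustration of the textbook recursion, not the authors' implementation, and the function name `fnalf` is chosen here:

```python
import math

def fnalf(nmax, theta):
    """Standard forward column recursion for the 4-pi fully normalized
    associated Legendre functions Pbar_nm(cos theta), 0 <= m <= n <= nmax."""
    t, u = math.cos(theta), math.sin(theta)
    P = [[0.0] * (n + 1) for n in range(nmax + 1)]
    P[0][0] = 1.0
    if nmax >= 1:
        P[1][0] = math.sqrt(3.0) * t
        P[1][1] = math.sqrt(3.0) * u
    for n in range(2, nmax + 1):
        # sectorial term Pbar_nn from Pbar_(n-1)(n-1)
        P[n][n] = u * math.sqrt((2 * n + 1) / (2.0 * n)) * P[n - 1][n - 1]
        for m in range(n):
            a = math.sqrt((2 * n - 1) * (2 * n + 1) / ((n - m) * (n + m)))
            P[n][m] = a * t * P[n - 1][m]
            if n - m >= 2:  # two-term part of the column recursion
                b = math.sqrt((2 * n + 1) * (n + m - 1) * (n - m - 1)
                              / ((n - m) * (n + m) * (2 * n - 3)))
                P[n][m] -= b * P[n - 2][m]
    return P
```

A convenient sanity check is the identity that the sum over m of Pbar_nm(cos θ)² equals 2n + 1, which holds for the 4-pi normalization at every θ.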
Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.
Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan
2013-01-01
This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.
Zhan, Qiqin; Chen, Xiaojun
2016-01-01
This paper proposes an interactive method of model clipping for computer-assisted surgical planning. The model is separated by a data filter defined by the implicit function of the clipping path. Because the method is interactive, surgeons can manually reposition the clipping path, which is composed of plane widgets, along the desired presurgical path, so that any accurate shape of the clipped model can be produced. The implicit function is acquired through a recursive algorithm based on the Boolean combinations (Boolean union and Boolean intersection) of the plane widgets' implicit functions. The algorithm is highly efficient: its best-case time complexity is linear, which covers most cases in computer-assisted surgical planning. Based on this algorithm, a user-friendly module named SmartModelClip was developed on the Slicer platform with VTK. A number of arbitrary clipping paths have been tested. Experimental results of presurgical planning for three types of Le Fort fractures and for tumor removal demonstrate the high reliability and efficiency of the recursive algorithm and the robustness of the module.
Boolean Combinations of Implicit Functions for Model Clipping in Computer-Assisted Surgical Planning
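The Boolean combination of plane-widget implicit functions described above is commonly realized with the min/max construction from constructive solid geometry; a minimal sketch under that assumption (our own illustration, not the SmartModelClip source):

```python
def plane(point, normal):
    """Implicit function of a plane widget: negative on the side the
    normal points away from, positive on the side it points toward."""
    def f(p):
        return sum(n * (x - o) for n, x, o in zip(normal, p, point))
    return f

def union(f, g):
    # inside (negative) if inside either half-space
    return lambda p: min(f(p), g(p))

def intersection(f, g):
    # inside (negative) only if inside both half-spaces
    return lambda p: max(f(p), g(p))

# Example clipping region: the slab between the planes x = 0 and x = 1.
slab = intersection(plane((0, 0, 0), (-1, 0, 0)),
                    plane((1, 0, 0), (1, 0, 0)))
```

A recursive tree of such `union`/`intersection` nodes over plane widgets yields the single implicit function that drives the data filter.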
Saier, M H
1994-01-01
Three-dimensional structures have been elucidated for very few integral membrane proteins. Computer methods can be used as guides for estimation of solute transport protein structure, function, biogenesis, and evolution. In this paper the application of currently available computer programs to over a dozen distinct families of transport proteins is reviewed. The reliability of sequence-based topological and localization analyses and the importance of sequence and residue conservation to structure and function are evaluated. Evidence concerning the nature and frequency of occurrence of domain shuffling, splicing, fusion, deletion, and duplication during evolution of specific transport protein families is also evaluated. Channel proteins are proposed to be functionally related to carriers. It is argued that energy coupling to transport was a late occurrence, superimposed on preexisting mechanisms of solute facilitation. It is shown that several transport protein families have evolved independently of each other, employing different routes, at different times in evolutionary history, to give topologically similar transmembrane protein complexes. The possible significance of this apparent topological convergence is discussed. PMID:8177172
Computing single step operators of logic programming in radial basis function neural networks
NASA Astrophysics Data System (ADS)
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong
2014-07-01
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (Tp:I→I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
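Independent of the neural-network encoding, the single step operator T_P itself is straightforward to state in code. Below is a sketch for propositional definite programs; the representation of clauses as (head, body) pairs is our own choice:

```python
def tp(program, I):
    """Single step operator T_P: maps an interpretation I (the set of
    atoms taken to be true) to the set of clause heads whose bodies
    are satisfied by I.  A program is a list of (head, body) pairs,
    where body is a frozenset of atoms."""
    return {head for head, body in program if body <= I}

def fixed_point(program):
    """Iterate T_P from the empty interpretation until it is stable."""
    I = set()
    while True:
        J = tp(program, I)
        if J == I:
            return I
        I = J

# Example program:  a.   b :- a.   c :- a, b.
program = [("a", frozenset()),
           ("b", frozenset({"a"})),
           ("c", frozenset({"a", "b"}))]
```

For definite programs T_P is monotone, so this iteration reaches the least fixed point, which is the steady state that the recurrent radial basis function network is trained to reproduce.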
Arkansas' Curriculum Guide. Competency Based Computerized Accounting.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock. Div. of Vocational, Technical and Adult Education.
This guide contains the essential parts of a total curriculum for a one-year secondary-level course in computerized accounting. Addressed in the individual sections of the guide are the following topics: the complete accounting cycle, computer operations for accounting, computerized accounting and general ledgers, computerized accounts payable,…
NASA Astrophysics Data System (ADS)
Amin, Ahmed
1986-07-01
A computer-controlled system for measuring bulk resistivity of insulating solids as a function of temperature is described. The measuring circuit is a modification of that given in the ASTM standard D257-66, to allow for a number of operations during the data-acquisition cycle. The bulk resistivity of an acceptor-doped morphotropic lead zirconate-titanate piezoelectric composition has been measured over the temperature range +40 to +200 °C. The activation energy derived from the experimental data is compared to the published values of similar morphotropic compositions.
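Deriving an activation energy from such resistivity-versus-temperature data conventionally assumes Arrhenius behavior, rho(T) = rho0 * exp(Ea / (k*T)), so Ea is the slope of ln(rho) against 1/T multiplied by Boltzmann's constant. A generic least-squares sketch (not the paper's procedure):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, resistivities):
    """Estimate Ea (eV) by fitting ln(rho) = ln(rho0) + Ea/(k*T)
    with ordinary least squares on x = 1/T."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(r) for r in resistivities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * K_B
```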
Brown, James; Carrington, Tucker
2015-07-28
Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
NASA Astrophysics Data System (ADS)
Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing
2016-10-01
We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is relatively more accurate and flexible than previous methods. We discuss in particular the relationship between the inclined angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully validated by comparison with real measurements.
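For context, the signal power function in question is, in the single-scattering approximation, the standard elastic LIDAR equation P(R) = C * O(R) * beta * exp(-2*alpha*R) / R^2. The sketch below assumes a homogeneous atmosphere and a given overlap value, whereas the paper computes the geometric factor by ray tracing:

```python
import math

def lidar_signal(R, beta, alpha, overlap=1.0, C=1.0):
    """Single-scattering LIDAR equation for a homogeneous atmosphere:
    P(R) = C * O(R) * beta * exp(-2*alpha*R) / R**2, with range R,
    backscatter coefficient beta, and extinction coefficient alpha."""
    return C * overlap * beta * math.exp(-2.0 * alpha * R) / R ** 2
```

The dynamic range discussed above is the ratio of the largest to the smallest P(R) over the sounding range; a near-range overlap factor O(R) < 1 is what compresses it.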
Monte Carlo Computation of the Finite-Size Scaling Function: an Alternative Approach
NASA Astrophysics Data System (ADS)
Kim, Jae-Kwon; de Souza, Adauto J. F.; Landau, D. P.
1996-03-01
We show how to compute numerically a finite-size-scaling function which is particularly effective in extracting accurate infinite-volume-limit values (bulk values) of certain physical quantities [1]. We illustrate our procedure for the two- and three-dimensional Ising models, and report our bulk values for the correlation length, magnetic susceptibility, and renormalized four-point coupling constant. Based on these bulk values we extract the values of various critical parameters. [1] J.-K. Kim, Europhys. Lett. 28, 211 (1994). Research supported in part by the NSF. Permanent address: Departamento de Física e Matemática, Universidade Federal Rural de Pernambuco, 52171-900, Recife, Pernambuco, Brazil.
NASA Astrophysics Data System (ADS)
Barnwell, Richard W.
1993-01-01
The derivation of the accurate, second-order, almost linear, approximate equation governing the defect stream function for nonequilibrium compressible turbulent boundary layers is reviewed. The similarity of this equation to the heat conduction equation is exploited in the development of an unconditionally stable, tridiagonal computational method which is second-order accurate in the marching direction and fourth-order accurate in the surface-normal direction. Results compare well with experimental data. Nonlinear effects are shown to be small. This two-dimensional method is simple and has been implemented on a programmable calculator.
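A tridiagonal marching scheme of the kind described is normally solved at each station with the Thomas algorithm; a generic sketch of that solver (not the paper's code):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    by forward elimination and back substitution (a[0] and c[-1] unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The cost is O(n) per marching step, which is what makes such boundary-layer methods practical even on very modest hardware.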
San José Estépar, Raúl; Mendoza, Carlos S.; Hersh, Craig P.; Laird, Nan; Crapo, James D.; Lynch, David A.; Silverman, Edwin K.; Washko, George R.
2013-01-01
Rationale: Emphysema occurs in distinct pathologic patterns, but little is known about the epidemiologic associations of these patterns. Standard quantitative measures of emphysema from computed tomography (CT) do not distinguish between distinct patterns of parenchymal destruction. Objectives: To study the epidemiologic associations of distinct emphysema patterns with measures of lung-related physiology, function, and health care use in smokers. Methods: Using a local histogram-based assessment of lung density, we quantified distinct patterns of low attenuation in 9,313 smokers in the COPDGene Study. To determine if such patterns provide novel insights into chronic obstructive pulmonary disease epidemiology, we tested for their association with measures of physiology, function, and health care use. Measurements and Main Results: Compared with percentage of low-attenuation area less than −950 Hounsfield units (%LAA-950), local histogram-based measures of distinct CT low-attenuation patterns are more predictive of measures of lung function, dyspnea, quality of life, and health care use. These patterns are strongly associated with a wide array of measures of respiratory physiology and function, and most of these associations remain highly significant (P < 0.005) after adjusting for %LAA-950. In smokers without evidence of chronic obstructive pulmonary disease, the mild centrilobular disease pattern is associated with lower FEV1 and worse functional status (P < 0.005). Conclusions: Measures of distinct CT emphysema patterns provide novel information about the relationship between emphysema and key measures of physiology, physical function, and health care use. Measures of mild emphysema in smokers with preserved lung function can be extracted from CT scans and are significantly associated with functional measures. PMID:23980521
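The benchmark measure mentioned here, %LAA-950, is simply the percentage of lung voxels with CT attenuation below -950 Hounsfield units. A schematic sketch on a flat list of values (real pipelines operate on segmented 3-D CT volumes):

```python
def laa_percent(hu_values, threshold=-950):
    """Percentage of lung-field voxels with attenuation below
    `threshold` Hounsfield units (%LAA-950 for the default)."""
    low = sum(1 for v in hu_values if v < threshold)
    return 100.0 * low / len(hu_values)
```

The local-histogram measures studied in the paper go beyond this single global fraction by classifying the low-attenuation pattern around each voxel.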
Ryszawy, Damian; Sarna, Michał; Rak, Monika; Szpak, Katarzyna; Kędracka-Krok, Sylwia; Michalik, Marta; Siedlar, Maciej; Zuba-Surma, Ewa; Burda, Kvetoslava; Korohoda, Włodzimierz; Madeja, Zbigniew; Czyż, Jarosław
2014-09-01
Suppressive function of connexin(Cx)43 in carcinogenesis was recently contested by reports that showed a multifaceted function of Cx43 in cancer progression. These studies did not attempt to model the dynamics of intratumoral heterogeneity involved in the metastatic cascade. An unorthodox look at the phenotypic heterogeneity of prostate cancer cells in vitro enabled us to identify links between Cx43 functions and Snail-1-regulated functional speciation of invasive cells. Incomplete Snail-1-dependent phenotypic shifts accounted for the formation of phenotypically stable subclones of AT-2 cells. These subclones showed diverse predilection for invasive behavior. High Snail-1 and Cx43 levels accompanied high motility and nanomechanical elasticity of the fibroblastoid AT-2_Fi2 subclone, which determined its considerable invasiveness. Transforming growth factor-β and ectopic Snail-1 overexpression induced invasiveness and Cx43 expression in epithelioid AT-2 subclones and DU-145 cells. Functional links between Snail-1 function and Cx43 expression were confirmed by Cx43 downregulation and phenotypic shifts in AT-2_Fi2, DU-145 and MAT-LyLu cells upon Snail-1 silencing. Corresponding morphological changes and Snail-1 downregulation were seen upon Cx43 silencing in AT-2_Fi2 cells. This indicates that feedback loops between both proteins regulate cell invasive behavior. We demonstrate that Cx43 may differentially predispose prostate cancer cells for invasion in a coupling-dependent and coupling-independent manner. When extrapolated to in vivo conditions, these data show the complexity of Cx43 functions during the metastatic cascade of prostate cancer. They may explain how Cx43 confers a selective advantage during cooperative invasion of clonally evolving, invasive prostate cancer cell subpopulations.
Mandonnet, Emmanuel; Duffau, Hugues
2014-01-01
Historically, cerebral processing has been conceptualized as a framework based on statically localized functions. However, a growing amount of evidence supports a hodotopical (delocalized) and flexible organization. A number of studies have reported absence of a permanent neurological deficit after massive surgical resections of eloquent brain tissue. These results highlight the tremendous plastic potential of the brain. Understanding anatomo-functional correlates underlying this cerebral reorganization is a prerequisite to restore brain functions through brain-computer interfaces (BCIs) in patients with cerebral diseases, or even to potentiate brain functions in healthy individuals. Here, we review current knowledge of neural networks that could be utilized in the BCIs that enable movements and language. To this end, intraoperative electrical stimulation in awake patients provides valuable information on the cerebral functional maps, their connectomics and plasticity. Overall, these studies indicate that the complex cerebral circuitry that underpins interactions between action, cognition and behavior should be thoroughly investigated before progress in BCI approaches can be achieved. PMID:24834030
Functional Priorities, Assistive Technology, and Brain-Computer Interfaces after Spinal Cord Injury
Collinger, Jennifer L.; Boninger, Michael L.; Bruns, Tim M.; Curley, Kenneth; Wang, Wei; Weber, Douglas J.
2012-01-01
Spinal cord injury often impacts a person’s ability to perform critical activities of daily living and can have a negative impact on their quality of life. Assistive technology aims to bridge this gap to augment function and increase independence. It is critical to involve consumers in the design and evaluation process as new technologies, like brain-computer interfaces (BCIs), are developed. In a survey study of fifty-seven veterans with spinal cord injury who were participating in the National Veterans Wheelchair Games, we found that restoration of bladder/bowel control, walking, and arm/hand function (tetraplegia only) were all high priorities for improving quality of life. Many of the participants had not used or heard of some currently available technologies designed to improve function or the ability to interact with their environment. The majority of individuals in this study were interested in using a BCI, particularly for controlling functional electrical stimulation to restore lost function. Independent operation was considered to be the most important design criteria. Interestingly, many participants reported that they would be willing to consider surgery to implant a BCI even though non-invasiveness was a high priority design requirement. This survey demonstrates the interest of individuals with spinal cord injury in receiving and contributing to the design of BCI. PMID:23760996
Functional priorities, assistive technology, and brain-computer interfaces after spinal cord injury.
Collinger, Jennifer L; Boninger, Michael L; Bruns, Tim M; Curley, Kenneth; Wang, Wei; Weber, Douglas J
2013-01-01
Spinal cord injury (SCI) often affects a person's ability to perform critical activities of daily living and can negatively affect his or her quality of life. Assistive technology aims to bridge this gap in order to augment function and increase independence. It is critical to involve consumers in the design and evaluation process as new technologies such as brain-computer interfaces (BCIs) are developed. In a survey study of 57 veterans with SCI participating in the 2010 National Veterans Wheelchair Games, we found that restoration of bladder and bowel control, walking, and arm and hand function (tetraplegia only) were all high priorities for improving quality of life. Many of the participants had not used or heard of some currently available technologies designed to improve function or the ability to interact with their environment. The majority of participants in this study were interested in using a BCI, particularly for controlling functional electrical stimulation to restore lost function. Independent operation was considered to be the most important design criteria. Interestingly, many participants reported that they would consider surgery to implant a BCI even though noninvasiveness was a high-priority design requirement. This survey demonstrates the interest of individuals with SCI in receiving and contributing to the design of BCIs.
Management Needs for Computer Support.
ERIC Educational Resources Information Center
Irby, Alice J.
University management has many and varied needs for effective computer services in support of their processing and information functions. The challenge for the computer center managers is to better understand these needs and assist in the development of effective and timely solutions. Management needs can range from accounting and payroll to…
Roberts, Timothy D; Clatworthy, Mark G; Frampton, Chris M; Young, Simon W
2015-09-01
The objective of this study was to determine whether computer assisted navigation in total knee arthroplasty (TKA) improves functional outcomes and implant survivability using data from a large national database. We analysed 9054 primary TKA procedures performed between 2006 and 2012 from the New Zealand National Joint Registry. Functional outcomes were assessed using Oxford Knee Questionnaires at six months and five years. On multivariate analysis, there was no significant difference in mean Oxford Knee Scores between the navigated and non-navigated groups at six months (39.0 vs 38.1, P=0.54) or five years (42.2 vs 42.0, P=0.76). At current follow-up, there was no difference in revision rates between navigated and non-navigated TKA (0.46 vs 0.43 revisions per 100 component years, P=0.8).
Purdy, Michael D; Bennett, Brad C; McIntire, William E; Khan, Ali K; Kasson, Peter M; Yeager, Mark
2014-08-01
Three vignettes exemplify the potential of combining EM and X-ray crystallographic data with molecular dynamics (MD) simulation to explore the architecture, dynamics and functional properties of multicomponent, macromolecular complexes. The first two describe how EM and X-ray crystallography were used to solve structures of the ribosome and the Arp2/3-actin complex, which enabled MD simulations that elucidated functional dynamics. The third describes how EM, X-ray crystallography, and microsecond MD simulations of a GPCR:G protein complex were used to explore transmembrane signaling by the β-adrenergic receptor. Recent technical advancements in EM, X-ray crystallography and computational simulation create unprecedented synergies for integrative structural biology to reveal new insights into heretofore intractable biological systems.
NASA Technical Reports Server (NTRS)
1975-01-01
A system analysis of the shuttle orbiter baseline system management (SM) computer function is performed. This analysis results in an alternative SM design, which is also described. The alternative design exhibits several improvements over the baseline, including increased crew usability, improved flexibility, and improved growth potential. The analysis consists of two parts: an application assessment and an implementation assessment. The former is concerned with SM user needs and functional design aspects. The latter is concerned with design flexibility, reliability, growth potential, and technical risk. The system analysis is supported by several topical investigations. These include: treatment of false alarms, treatment of off-line items, significant interface parameters, and a design evaluation checklist. An in-depth formulation of techniques, concepts, and guidelines for design of automated performance verification is discussed.
Carrizo, Sebastián; Xie, Xinzhou; Peinado-Peinado, Rafael; Sánchez-Recalde, Angel; Jiménez-Valero, Santiago; Galeote-Garcia, Guillermo; Moreno, Raúl
2014-10-01
Clinical trials have shown that functional assessment of coronary stenosis by fractional flow reserve (FFR) improves clinical outcomes. Intravascular ultrasound (IVUS) complements conventional angiography, and is a powerful tool to assess atherosclerotic plaques and to guide percutaneous coronary intervention (PCI). Computational fluid dynamics (CFD) simulation represents a novel method for the functional assessment of coronary flow. A CFD simulation can be calculated from the data normally acquired by IVUS images. A case of coronary heart disease studied with FFR and IVUS, before and after PCI, is presented. A three-dimensional model was constructed based on IVUS images, to which CFD was applied. A discussion of the literature concerning the clinical utility of CFD simulation is provided.
Integrative computed tomographic imaging of cardiac structure, function, perfusion, and viability.
Thilo, Christian; Hanley, Michael; Bastarrika, Gorka; Ruzsics, Balazs; Schoepf, U Joseph
2010-01-01
Recent advances in multidetector-row computed tomography (MDCT) technology have created new opportunities in cardiac imaging and provided new insights into a variety of disease states. Use of 64-slice coronary computed tomography angiography has been validated for the evaluation of clinically relevant coronary artery stenosis with high negative predictive values for ruling out significant obstructive disease. This technology has also advanced the care of patients with acute chest pain by simultaneous assessment of acute coronary syndrome, pulmonary embolism, and acute aortic syndrome ("triple rule out"). Although MDCT has been instrumental in the advancement of cardiac imaging, there are still limitations in patients with high or irregular heart rates. Newer MDCT scanner generations hold promise to improve some of these limitations for noninvasive cardiac imaging. The evaluation of coronary artery stenosis remains the primary clinical indication for cardiac computed tomography angiography. However, the use of MDCT for simultaneous assessment of coronary artery stenosis, atherosclerotic plaque formation, ventricular function, myocardial perfusion, and viability with a single modality is under intense investigation. Recent technical developments hold promise for accomplishing this goal and establishing MDCT as a comprehensive stand-alone test for integrative imaging of coronary heart disease.
Wang, Hongbo; Shu, Shengjie; Li, Jinping; Jiang, Huijie
2016-02-01
The objective of this study was to observe changes in blood perfusion of liver cancer following argon-helium knife treatment using functional computed tomography perfusion imaging. Twenty-seven patients with primary liver cancer treated with the argon-helium knife were included in this study. Plain computed tomography (CT) and computed tomography perfusion (CTP) imaging were conducted in all patients before and after treatment. Perfusion parameters including blood flow, blood volume, hepatic artery perfusion fraction, hepatic artery perfusion, and hepatic portal venous perfusion were used to evaluate the therapeutic effect. All parameters in liver cancer were significantly decreased after argon-helium knife treatment (p < 0.05 for all). A significant decrease in hepatic artery perfusion was also observed in pericancerous liver tissue, but the other parameters remained constant. CT perfusion imaging is thus able to detect the decrease in blood perfusion of liver cancer after argon-helium knife therapy, and CTP imaging could play an important role in liver cancer management following this treatment.
Davies, Sherri R.; Chang, Li-Wei; Patra, Debabrata; Xing, Xiaoyun; Posey, Karen; Hecht, Jacqueline; Stormo, Gary D.; Sandell, Linda J.
2007-01-01
Chondrocyte gene regulation is important for the generation and maintenance of cartilage tissues. Several regulatory factors have been identified that play a role in chondrogenesis, including the positive transacting factors of the SOX family such as SOX9, SOX5, and SOX6, as well as negative transacting factors such as C/EBP and delta EF1. However, a complete understanding of the intricate regulatory network that governs the tissue-specific expression of cartilage genes is not yet available. We have taken a computational approach to identify cis-regulatory, transcription factor (TF) binding motifs in a set of cartilage characteristic genes to better define the transcriptional regulatory networks that regulate chondrogenesis. Our computational methods have identified several TFs, whose binding profiles are available in the TRANSFAC database, as important to chondrogenesis. In addition, a cartilage-specific SOX-binding profile was constructed and used to identify both known, and novel, functional paired SOX-binding motifs in chondrocyte genes. Using DNA pattern-recognition algorithms, we have also identified cis-regulatory elements for unknown TFs. We have validated our computational predictions through mutational analyses in cell transfection experiments. One novel regulatory motif, N1, found at high frequency in the COL2A1 promoter, was found to bind to chondrocyte nuclear proteins. Mutational analyses suggest that this motif binds a repressive factor that regulates basal levels of the COL2A1 promoter. PMID:17785538
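The motif searches described above score sequence windows against position weight matrices. A minimal sketch of such a scan follows; the matrix, motif width, and threshold are invented for illustration and are not the SOX profile from the study.

```python
import math

# Hypothetical 4-position probability matrix for a short motif (illustrative
# values only, not a real TRANSFAC or SOX profile).
PWM = {
    "A": [0.7, 0.1, 0.1, 0.8],
    "C": [0.1, 0.1, 0.7, 0.1],
    "G": [0.1, 0.1, 0.1, 0.05],
    "T": [0.1, 0.7, 0.1, 0.05],
}
BACKGROUND = 0.25  # uniform base composition assumed

def score_window(window):
    """Log-odds score of one window against the PWM."""
    return sum(math.log2(PWM[base][i] / BACKGROUND)
               for i, base in enumerate(window))

def scan(sequence, threshold=2.0):
    """Return (position, window, score) for every window above threshold."""
    width = len(PWM["A"])
    hits = []
    for i in range(len(sequence) - width + 1):
        window = sequence[i:i + width]
        s = score_window(window)
        if s >= threshold:
            hits.append((i, window, s))
    return hits

hits = scan("GGATCAATCA")  # finds the two ATCA occurrences
```

Real motif-discovery pipelines add pseudocounts, scan both strands, and calibrate the threshold against a background model, but the log-odds scan above is the core operation.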
Terrell, Cassidy R; Listenberger, Laura L
2017-02-01
Recognizing that undergraduate students can benefit from analysis of 3D protein structure and function, we have developed a multiweek, inquiry-based molecular visualization project for Biochemistry I students. This project uses a virtual model of cyclooxygenase-1 (COX-1) to guide students through multiple levels of protein structure analysis. The first assignment explores primary structure by generating and examining a protein sequence alignment. Subsequent assignments introduce 3D visualization software to explore secondary, tertiary, and quaternary structure. Students design an inhibitor, based on scrutiny of the enzyme active site, and evaluate the fit of the molecule using computed binding energies. In the last assignment, students introduce a point mutation to model the active site of the related COX-2 enzyme and analyze the impact of the mutation on inhibitor binding. With this project we aim to increase knowledge about, and confidence in using, online databases and computational tools. Here, we share results of our mixed methods pre- and postsurvey demonstrating student gains in knowledge about, and confidence using, online databases and computational tools. © 2017 by The International Union of Biochemistry and Molecular Biology, 2017.
Do, An H; Wang, Po T; King, Christine E; Schombs, Andrew; Cramer, Steven C; Nenadic, Zoran
2012-01-01
Gait impairment due to foot drop is a common outcome of stroke, and current physiotherapy provides only limited restoration of gait function. Gait function can also be aided by orthoses, but these devices may be cumbersome and their benefits disappear upon removal. Hence, new neuro-rehabilitative therapies are being sought to generate permanent improvements in motor function beyond those of conventional physiotherapies through positive neural plasticity processes. Here, the authors describe an electroencephalogram (EEG) based brain-computer interface (BCI) controlled functional electrical stimulation (FES) system that enabled a stroke subject with foot drop to re-establish foot dorsiflexion. To this end, a prediction model was generated from EEG data collected as the subject alternated between periods of idling and attempted foot dorsiflexion. This prediction model was then used to classify online EEG data into either "idling" or "dorsiflexion" states, and this information was subsequently used to control an FES device to elicit effective foot dorsiflexion. The performance of the system was assessed in online sessions, where the subject was prompted by a computer to alternate between periods of idling and dorsiflexion. The subject demonstrated purposeful operation of the BCI-FES system, with an average cross-correlation between instructional cues and BCI-FES response of 0.60 over 3 sessions. In addition, analysis of the prediction model indicated that non-classical brain areas were activated in the process, suggesting post-stroke cortical re-organization. In the future, these systems may be explored as a potential therapeutic tool that can help promote positive plasticity and neural repair in chronic stroke patients.
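The reported performance metric, cross-correlation between instructional cues and the BCI-FES response, can be sketched on synthetic signals as below; the lag search and normalization are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def lagged_xcorr(cue, response, max_lag):
    """Largest normalized cross-correlation between cue and response over
    non-negative lags (the response is allowed to trail the cue)."""
    cue = (cue - cue.mean()) / cue.std()
    response = (response - response.mean()) / response.std()
    n = len(cue)
    best = -1.0
    for lag in range(max_lag + 1):
        best = max(best, float(np.dot(cue[:n - lag], response[lag:])) / (n - lag))
    return best

# Synthetic session: alternating 10-sample idle (0) / dorsiflexion (1) cues,
# with the BCI-FES response trailing the cue by exactly one sample.
cue = np.repeat([0, 1, 0, 1, 0, 1], 10).astype(float)
response = np.roll(cue, 1)
r = lagged_xcorr(cue, response, max_lag=3)
```

With a perfectly cue-locked (merely delayed) response the metric reaches 1.0; the study's observed average of 0.60 reflects classification errors and variable response latency.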
NASA Astrophysics Data System (ADS)
Avanaki, Mohammad R. N.; Xia, Jun; Wang, Lihong V.
2013-03-01
Photoacoustic computed tomography (PACT) is an emerging imaging technique which is based on the acoustic detection of optical absorption from tissue chromophores, such as oxy-hemoglobin and deoxy-hemoglobin. An important application of PACT is functional brain imaging of small animals. The conversion of light to acoustic waves allows PACT to provide high resolution images of cortical vasculatures through the intact scalp. Here, PACT was utilized to study the activated areas of the mouse brain during forepaw and hindpaw stimulations. Temporal PACT images were acquired, enabling computation of hemodynamic changes during stimulation. The stimulations were performed by trains of pulses at different stimulation currents (between 0.1 and 2 mA) and pulse repetition rates (between 0.05 Hz and 0.01 Hz). The responses at the somatosensory cortex-forelimb and somatosensory cortex-hindlimb regions were investigated. The Paxinos mouse brain atlas was used to confirm the activated regions. The study shows that PACT is a promising new technology that can be used to study brain functionality with high spatial resolution.
Song, Inyoung; Park, Jung Ah; Choi, Bo Hwa; Shin, Je Kyoun; Chee, Hyun Keun; Kim, Jun Seok
2016-01-01
Objective: The aim of this study was to identify the morphological and functional characteristics of quadricuspid aortic valves (QAV) on cardiac computed tomography (CCT). Materials and Methods: We retrospectively enrolled 11 patients with QAV. All patients underwent CCT and transthoracic echocardiography (TTE), and 7 patients underwent cardiovascular magnetic resonance (CMR). The presence and classification of QAV assessed by CCT was compared with that of TTE and intraoperative findings. The regurgitant orifice area (ROA) measured by CCT was compared with severity of aortic regurgitation (AR) by TTE and the regurgitant fraction (RF) by CMR. Results: All of the patients had AR; 9 had pure AR, 1 had combined aortic stenosis and regurgitation, and 1 had combined subaortic stenosis and regurgitation. Two patients had a subaortic fibrotic membrane and 1 of them showed a subaortic stenosis. One QAV was misdiagnosed as a tricuspid aortic valve on TTE. In accordance with the Hurwitz and Roberts classification, consensus was reached on the QAV classification between the CCT and TTE findings in 7 of 10 patients. The patients were classified as type A (n = 1), type B (n = 3), type C (n = 1), type D (n = 4), and type F (n = 2) on CCT. A very high correlation existed between ROA by CCT and RF by CMR (r = 0.99), while a good correlation existed between ROA by CCT and regurgitant severity by TTE (r = 0.62). Conclusion: Cardiac computed tomography provides comprehensive anatomical and functional information about the QAV. PMID:27390538
Highly automated computer-aided diagnosis of neurological disorders using functional brain imaging
NASA Astrophysics Data System (ADS)
Spetsieris, P. G.; Ma, Y.; Dhawan, V.; Moeller, J. R.; Eidelberg, D.
2006-03-01
We have implemented a highly automated analytical method for computer aided diagnosis (CAD) of neurological disorders using functional brain imaging that is based on the Scaled Subprofile Model (SSM). Accurate diagnosis of functional brain disorders such as Parkinson's disease is often difficult clinically, particularly in early stages. Using principal component analysis (PCA) in conjunction with SSM on brain images of patients and normals, we can identify characteristic abnormal network covariance patterns which provide a subject dependent scalar score that not only discriminates a particular disease but also correlates with independent measures of disease severity. These patterns represent disease-specific brain networks that have been shown to be highly reproducible in distinct groups of patients. Topographic Profile Rating (TPR) is a reverse SSM computational algorithm that can be used to determine subject scores for new patients on a prospective basis. In our implementation, reference values for a full range of patients and controls are automatically accessed for comparison. We also implemented an automated recalibration step to produce reference scores for images generated in a different imaging environment from that used in the initial network derivation. New subjects under the same setting can then be evaluated individually and a simple report is generated indicating the subject's classification. For scores near the normal limits, additional criteria are used to make a definitive diagnosis. With further refinement, automated TPR can be used to efficiently assess disease severity, monitor disease progression and evaluate treatment efficacy.
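The core of the SSM/TPR approach, deriving a covariance pattern by principal component analysis and projecting each subject's regional profile onto it to obtain a scalar score, can be sketched with toy data. All dimensions, group sizes, and the preprocessing below are simplified assumptions, not the implemented pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10 controls then 10 patients, 50 brain regions each; patients
# express a fixed spatial covariance pattern on top of noise (all sizes and
# magnitudes invented for illustration).
pattern_true = rng.normal(size=50)
pattern_true /= np.linalg.norm(pattern_true)
loadings = np.concatenate([np.zeros(10), np.full(10, 3.0)])
data = 0.3 * rng.normal(size=(20, 50)) + np.outer(loadings, pattern_true)

# SSM-style double centering (remove subject means, then region means),
# followed by PCA via the singular value decomposition.
centered = data - data.mean(axis=1, keepdims=True)
centered -= centered.mean(axis=0, keepdims=True)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pattern = vt[0]  # first principal component: candidate disease-related network

# TPR-style scoring: project each subject's centered profile onto the pattern.
scores = centered @ pattern
if scores[10:].mean() < scores[:10].mean():  # PCA sign is arbitrary
    scores, pattern = -scores, -pattern
```

In prospective use the pattern is frozen and only the projection step is applied to each new subject, which is what makes the automated, subject-by-subject classification described above possible.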
Computerized accounting methods. Final report
1994-12-31
This report summarizes the results of the research performed under the Task Order on computerized accounting methods in the period from 3 August to 31 December 1994. Computerized nuclear material accounting methods are analyzed and evaluated. Selected methods are implemented in a hardware-software complex developed as a prototype of the local network-based CONMIT system. This complex has been put into trial operation for test and evaluation of the selected methods at two selected "Kurchatov Institute" Russian Research Center ("KI" RRC) nuclear facilities. Trial operation has been carried out since the beginning of Initial Physical Inventory Taking at these facilities, performed in November 1994. Operation of the CONMIT prototype system was demonstrated in the middle of December 1994. Results of the evaluation of the CONMIT prototype system's features and functioning under real operating conditions are considered. Conclusions are formulated on the ways of further development of computerized nuclear material accounting methods. The most important conclusion is the need to strengthen the computer and information security features supported by the operating environment. Security provisions as well as other LANL Client/Server System approaches being developed by Los Alamos National Laboratory are recommended for selection of software and hardware components to be integrated into the production version of the CONMIT system for KI RRC.
ERIC Educational Resources Information Center
Gillespie-Lynch, Kristen; Kapp, Steven K.; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted
2014-01-01
An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to…
ERIC Educational Resources Information Center
International Business Machines Corp., White Plains, NY.
The economic and technical feasibility of providing a remote terminal central computing facility to serve a group of 25-75 secondary schools and colleges was investigated. The general functions of a central facility for an educational cluster were defined to include training in computer techniques, the solution of student development problems in…
NASA Astrophysics Data System (ADS)
Mishev, Alexander; Usoskin, Ilya
2016-07-01
A precise analysis of SEP (solar energetic particle) spectral and angular characteristics using neutron monitor (NM) data requires realistic modeling of the propagation of those particles in the Earth's magnetosphere and atmosphere. Using a method consisting of consecutive steps, namely a detailed computation of the SEP asymptotic cones of acceptance, application of a neutron monitor yield function, and a convenient optimization procedure, we derived the rigidity spectra and anisotropy characteristics of several major GLEs. Here we present several major GLEs of solar cycle 23: the Bastille Day event on 14 July 2000 (GLE 59), GLE 69 on 20 January 2005, and GLE 70 on 13 December 2006. The SEP spectra and pitch angle distributions were computed in their dynamical development. For the computation we use the newly computed yield function of the standard 6NM64 neutron monitor for primary proton and alpha CR nuclei. In addition, we present new computations of the NM yield function for altitudes of 3000 m and 5000 m above sea level. The computations were carried out with the Planetocosmics and CORSIKA codes as standardized Monte-Carlo tools for atmospheric cascade simulations. The flux of secondary neutrons and protons was computed using the Planetocosmics code, applying a realistic curved-atmosphere model. Updated information concerning the NM registration efficiency for secondary neutrons and protons was used. The derived spectral and angular characteristics obtained with the newly computed NM yield function at several altitudes are compared with those previously obtained using the double attenuation method.
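Conceptually, a neutron monitor count rate is the integral over rigidity of the yield function times the particle spectrum, taken above the station's geomagnetic cutoff. A schematic numerical version is below; the spectral and yield shapes are illustrative placeholders, not the published 6NM64 yield function.

```python
import numpy as np

def spectrum(P, J0=1e4, gamma=5.0):
    """Power-law SEP differential rigidity spectrum J(P) ~ P^-gamma
    (illustrative normalization and slope)."""
    return J0 * P ** (-gamma)

def yield_function(P):
    """Schematic NM yield, rising with rigidity and saturating
    (placeholder shape, not the computed 6NM64 yield)."""
    return 1e-3 * P ** 2.5 / (1.0 + P ** 2.5)

def count_rate(P_cutoff, P_max=20.0, n=2000):
    """Count rate = integral of Y(P) * J(P) in rigidity above the cutoff."""
    P = np.linspace(P_cutoff, P_max, n)
    f = yield_function(P) * spectrum(P)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(P)))  # trapezoid rule

low_cutoff = count_rate(1.0)    # polar station, low geomagnetic cutoff
high_cutoff = count_rate(5.0)   # mid-latitude station, higher cutoff
```

Because the SEP spectrum falls steeply with rigidity, a lower cutoff yields a much larger count-rate increase, which is why polar and high-altitude stations dominate GLE analyses.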
Corda, Marcella; Tamburrini, Maurizio; De Rosa, Maria C; Sanna, Maria T; Fais, Antonella; Olianas, Alessandra; Pellegrini, Mariagiuseppina; Giardina, Bruno; di Prisco, Guido
2003-01-01
The functional properties of haemoglobin from the Mediterranean whale Balaenoptera physalus have been studied as functions of heterotropic effector concentration and temperature. Particular attention has been given to the effect of carbon dioxide and lactate since the animal is specialised for prolonged dives often in cold water. The molecular basis of the functional behaviour and in particular of the weak interaction with 2,3-diphosphoglycerate is discussed in the light of the primary structure and of computer modelling. On these bases, it is suggested that the A2 (Pro-->Ala) substitution observed in the beta chains of whale haemoglobin may be responsible for the displacement of the A helix known to be a key structural feature in haemoglobins that display an altered interaction with 2,3-diphosphoglycerate as compared with human haemoglobin. The functional and structural results, discussed in the light of a previous study on the haemoglobin from the Arctic whale Balaenoptera acutorostrata, give further insights into the regulatory mechanisms of the interactive effects of temperature, carbon dioxide and lactate.
Peters, James F.; Ramanna, Sheela; Tozzi, Arturo; İnan, Ebubekir
2017-01-01
We introduce a novel method for the measurement of information level in fMRI (functional Magnetic Resonance Imaging) neural data sets, based on subdividing images into small polygons with different entropic content. We show how this method, called maximal nucleus clustering (MNC), is a novel, fast and inexpensive image-analysis technique, independent of the standard blood-oxygen-level dependent signals. MNC facilitates the objective detection of hidden temporal patterns of entropy/information in zones of fMRI images generally not taken into account by the subjective standpoint of the observer. This approach befits the geometric character of fMRIs. The main purpose of this study is to provide a computable framework for fMRI that not only facilitates analyses, but also provides an easily decipherable visualization of structures. This framework commands attention because it is easily implemented using conventional software systems. In order to evaluate the potential applications of MNC, we looked for the presence of a fourth dimension's distinctive hallmarks in a temporal sequence of 2D images taken during spontaneous brain activity. Indeed, recent findings suggest that several brain activities, such as mind-wandering and memory retrieval, might take place in the functional space of a four-dimensional hypersphere, which is a double donut-like structure undetectable in the usual three dimensions. We found that the Rényi entropy is higher in MNC areas than in the surrounding ones, and that these temporal patterns closely resemble the trajectories predicted by the possible presence of a hypersphere in the brain. PMID:28203153
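The entropy comparison at the heart of MNC can be illustrated by computing the Rényi entropy of intensity histograms for two synthetic image regions; the bin count and order alpha below are arbitrary choices, not the study's parameters.

```python
import numpy as np

def renyi_entropy(values, alpha=2.0, bins=16):
    """Rényi entropy of order alpha of an intensity histogram:
    H_a = log(sum p_i^alpha) / (1 - alpha)."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; they contribute nothing
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

rng = np.random.default_rng(1)
uniform_region = rng.uniform(0, 1, 1000)   # intensities spread over all bins
peaked_region = rng.beta(50, 50, 1000)     # intensities clustered near 0.5
h_uniform = renyi_entropy(uniform_region)
h_peaked = renyi_entropy(peaked_region)
```

A region with a flatter intensity distribution carries more information in this sense, which is the property MNC exploits when contrasting nucleus areas with their surroundings.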
Not Available
2012-07-01
NREL researchers use high-performance computing to demonstrate fundamental roles of aromatic residues in cellulase enzyme tunnels. National Renewable Energy Laboratory (NREL) computer simulations of a key industrial enzyme, the Trichoderma reesei Family 6 cellulase (Cel6A), predict that aromatic residues near the enzyme's active site and at the entrance and exit tunnel perform different functions in substrate binding and catalysis, depending on their location in the enzyme. These results suggest that nature employs aromatic-carbohydrate interactions with a wide variety of binding affinities for diverse functions. Outcomes also suggest that protein engineering strategies in which mutations are made around the binding sites may require tailoring specific to the enzyme family. Cellulase enzymes ubiquitously exhibit tunnels or clefts lined with aromatic residues for processing carbohydrate polymers to monomers, but the molecular-level role of these aromatic residues remains unknown. In silico mutation of the aromatic residues near the catalytic site of Cel6A has little impact on the binding affinity, but simulation suggests that these residues play a major role in the glucopyranose ring distortion necessary for cleaving glycosidic bonds to produce fermentable sugars. Removal of aromatic residues at the entrance and exit of the cellulase tunnel, however, dramatically impacts the binding affinity. This suggests that these residues play a role in acquiring cellulose chains from the cellulose crystal and stabilizing the reaction product, respectively. These results illustrate that the role of aromatic-carbohydrate interactions varies dramatically depending on the position in the enzyme tunnel. As aromatic-carbohydrate interactions are present in all carbohydrate-active enzymes, the results have implications for understanding protein structure-function relationships in carbohydrate metabolism and recognition, carbon turnover in nature, and protein engineering strategies for
Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.
2015-01-26
Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.
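The reported error statistics, mean absolute error and root mean square deviation between predicted and measured pKa values, are computed as below; the numbers are hypothetical values for illustration, not the paper's data.

```python
import math

# Hypothetical measured vs. predicted pKa values (illustration only).
measured = [5.78, 6.10, 5.95, 6.30]
predicted = [6.05, 5.80, 6.20, 6.55]

errors = [p - m for p, m in zip(predicted, measured)]
mae = sum(abs(e) for e in errors) / len(errors)                 # mean absolute error
rmsd = math.sqrt(sum(e * e for e in errors) / len(errors))      # root mean square deviation
```

RMSD weights large outliers more heavily than MAE, so the near-equality of the two statistics reported in the abstract (0.33 vs 0.46 and 0.35 vs 0.45) indicates fairly uniform errors across the test set.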
Matano, Francesca; Sambucini, Valeria
2016-11-01
In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate for the standard of care therapy. Generally, this target value is estimated from previous data, but the inherent variability in the historical response rate is not taken into account. In this paper, we present a Bayesian procedure to construct single-arm two-stage designs that makes it possible to incorporate uncertainty in the response rate of the standard treatment. In both stages, the sample size determination criterion is based on the concepts of conditional and predictive Bayesian power functions. Different kinds of prior distributions, which play different roles in the designs, are introduced, and some guidelines for their elicitation are described. Finally, some numerical results about the performance of the designs are provided and a real data example is illustrated. Copyright © 2016 John Wiley & Sons, Ltd.
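The key idea, treating the standard-of-care response rate as uncertain rather than as a fixed target, can be sketched with a Monte-Carlo comparison of two Beta distributions. The priors, counts, and superiority criterion below are illustrative assumptions, not the paper's two-stage design.

```python
import numpy as np

rng = np.random.default_rng(42)

def prob_superior(x_e, n_e, a_e=1, b_e=1, a_s=20, b_s=30, draws=100_000):
    """Posterior probability that the experimental response rate exceeds the
    standard-of-care rate, with the standard rate treated as uncertain
    (Beta(a_s, b_s) summarizing historical data) rather than fixed."""
    p_e = rng.beta(a_e + x_e, b_e + n_e - x_e, draws)  # experimental posterior
    p_s = rng.beta(a_s, b_s, draws)                    # uncertain historical rate
    return float(np.mean(p_e > p_s))

# 14 responses in 25 patients against a historical rate centered at 0.40:
prob = prob_superior(x_e=14, n_e=25)
```

Making the historical rate a distribution rather than a point value widens the comparison and generally lowers the claimed evidence of superiority, which is precisely the variability the fixed-target approach ignores.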
PLATO Instruction for Elementary Accounting.
ERIC Educational Resources Information Center
McKeown, James C.
A progress report of a study using computer assisted instruction (CAI) materials for an elementary course in accounting principles is presented. The study was based on the following objectives: (1) improvement of instruction in the elementary accounting sequence, and (2) help for transfer students from two-year institutions. The materials under…
Computer Simulation on the Cooperation of Functional Molecules during the Early Stages of Evolution
Ma, Wentao; Hu, Jiming
2012-01-01
It is very likely that life began with some RNA (or RNA-like) molecules, self-replicating by base-pairing and exhibiting enzyme-like functions that favored the self-replication. Different functional molecules may have emerged by favoring their own self-replication at different aspects. Then, a direct route towards complexity/efficiency may have been through the coexistence/cooperation of these molecules. However, the likelihood of this route remains quite unclear, especially because the molecules would be competing for limited common resources. By computer simulation using a Monte-Carlo model (with “micro-resolution” at the level of nucleotides and membrane components), we show that the coexistence/cooperation of these molecules can occur naturally, both in a naked form and in a protocell form. The results of the computer simulation also lead to quite a few deductions concerning the environment and history in the scenario. First, a naked stage (with functional molecules catalyzing template-replication and metabolism) may have occurred early in evolution but required high concentration and limited dispersal of the system (e.g., on some mineral surface); the emergence of protocells enabled a “habitat-shift” into bulk water. Second, the protocell stage started with a substage of “pseudo-protocells”, with functional molecules catalyzing template-replication and metabolism, but still missing the function involved in the synthesis of membrane components, the emergence of which would lead to a subsequent “true-protocell” substage. Third, the initial unstable membrane, composed of prebiotically available fatty acids, should have been superseded quite early by a more stable membrane (e.g., composed of phospholipids, like modern cells). Additionally, the membrane-takeover probably occurred at the transition of the two substages of the protocells. The scenario described in the present study should correspond to an episode in early evolution, after the
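The flavor of such a simulation, replicators whose replication probability rises when a cooperating partner species is present while both draw on a shared monomer pool, can be conveyed by a much-reduced Monte-Carlo sketch. All rates and counts here are arbitrary; the authors' model resolves individual nucleotides and membrane components.

```python
import random

def simulate(steps=2000, pool=500, seed=3):
    """Two cooperating functional species (R: replicase-like, M:
    metabolism-aiding) competing for a shared monomer pool."""
    rng = random.Random(seed)
    counts = {"R": 10, "M": 10}
    for _ in range(steps):
        s = rng.choice("RM")
        partner = "M" if s == "R" else "R"
        # Replication consumes one monomer and is far more likely while the
        # cooperating partner species is still present.
        p = 0.4 if counts[partner] > 0 else 0.05
        if pool > 0 and counts[s] > 0 and rng.random() < p:
            counts[s] += 1
            pool -= 1
        if counts[s] > 0 and rng.random() < 0.1:  # random degradation
            counts[s] -= 1
    return counts, pool

counts, pool = simulate()
```

Even this toy version exhibits the qualitative outcome of the study: coexistence of both functional species is sustained while the shared resource lasts, because each species' growth depends on the other's presence.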
ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers
NASA Astrophysics Data System (ADS)
Torrent, Marc
2014-03-01
For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which allows it to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It increases the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Block Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. In parallel, a substantial effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on an exploration of new diagonalization
Novel hold-release functionality in a P300 brain-computer interface
NASA Astrophysics Data System (ADS)
Alcaide-Aguirre, R. E.; Huggins, J. E.
2014-12-01
Assistive technology control interface theory describes interface activation and interface deactivation as distinct properties of any control interface. Separating control of activation and deactivation allows precise timing of the duration of the activation. Objective. We propose a novel P300 brain-computer interface (BCI) functionality with separate control of the initial activation and the deactivation (hold-release) of a selection. Approach. Using two different layouts and off-line analysis, we tested the accuracy with which subjects could (1) hold their selection and (2) quickly change between selections. Main results. Mean accuracy across all subjects for the hold-release algorithm was 85% with one hold-release classification and 100% with two hold-release classifications. Using a layout designed to lower perceptual errors, accuracy increased to a mean of 90% and the time subjects could hold a selection was 40% longer than with the standard layout. Hold-release functionality provides improved response time (6-16 times faster) over the initial P300 BCI selection by allowing the BCI to make hold-release decisions from very few flashes instead of after multiple sequences of flashes. Significance. For the BCI user, hold-release functionality allows for faster, more continuous control with a P300 BCI, creating new options for BCI applications.
Deliquescence of NaBH4 computed from density functional theory
NASA Astrophysics Data System (ADS)
Li, Ping; Al-Saidi, Wissam; Johnson, Karl
2012-02-01
Complex hydrides are promising hydrogen storage materials and have received significant attention due to their high hydrogen capacity. The hydrolysis reaction of NaBH4 releases hydrogen with both fast kinetics and a high extent of reaction under technical conditions by using steam deliquescence of NaBH4. This catalyst-free reaction has many advantages over traditional catalytic aqueous-phase hydrolysis. The first step in the reaction is deliquescence, i.e., adsorption of water onto the NaBH4 surface and then formation of a liquid layer of concentrated NaBH4 solution, which is quickly followed by hydrogen generation. We have used periodic plane-wave density functional theory to compute the energetics and dynamics of the initial stages of deliquescence on the (001) surface of NaBH4. Comparison of results from standard generalized gradient approximation functionals with a dispersion-corrected density functional shows that dispersion forces are important for adsorption. We used DFT molecular dynamics to assess the elementary steps in the deliquescence process.
Synaptic Efficacy as a Function of Ionotropic Receptor Distribution: A Computational Study
Allam, Sushmita L.; Bouteiller, Jean-Marie C.; Hu, Eric Y.; Ambert, Nicolas; Greget, Renaud; Bischoff, Serge; Baudry, Michel; Berger, Theodore W.
2015-01-01
Glutamatergic synapses are the most prevalent functional elements of information processing in the brain. Changes in pre-synaptic activity and in the function of various post-synaptic elements contribute to generate a large variety of synaptic responses. Previous studies have explored postsynaptic factors responsible for regulating synaptic strength variations, but have given far less importance to synaptic geometry, and more specifically to the subcellular distribution of ionotropic receptors. We analyzed the functional effects resulting from changing the subsynaptic localization of ionotropic receptors by using a hippocampal synaptic computational framework. The present study was performed using the EONS (Elementary Objects of the Nervous System) synaptic modeling platform, which was specifically developed to explore the roles of subsynaptic elements as well as their interactions, and that of synaptic geometry. More specifically, we determined the effects of changing the localization of ionotropic receptors relative to the presynaptic glutamate release site, on synaptic efficacy and its variations following single pulse and paired-pulse stimulation protocols. The results indicate that changes in synaptic geometry do have consequences on synaptic efficacy and its dynamics. PMID:26480028
Parry, David A D
2016-01-01
Experimental and theoretical research aimed at determining the structure and function of the family of intermediate filament proteins has made significant advances over the past 20 years. Much of this has either contributed to or relied on the amino acid sequence databases that are now available online, and the data mining approaches that have been developed to analyze these sequences. As the quality of sequence data is generally high, it follows that it is the design of the computational and graphical methodologies that is of especial importance to researchers who aspire to gain a greater understanding of those sequence features that specify both function and structural hierarchy. However, these techniques are necessarily subject to limitations, and it is important that these be recognized. In addition, no single method is likely to be successful in solving a particular problem, and a coordinated approach using a suite of methods is generally required. A final step in the process involves the interpretation of the results obtained and the construction of a working model or hypothesis that suggests further experimentation. While such methods allow meaningful progress to be made, it is still important that the data are interpreted correctly and conservatively. New data mining methods are continually being developed, and it can be expected that even greater understanding of the relationship between structure and function will be gleaned from sequence data in the coming years.
Reproducibility of physiologic parameters obtained using functional computed tomography in mice
NASA Astrophysics Data System (ADS)
Krishnamurthi, Ganapathy; Stantz, Keith M.; Steinmetz, Rosemary; Hutchins, Gary D.; Liang, Yun
2004-04-01
High-speed X-ray computed tomography (CT) has the potential to observe the transport of iodinated radio-opaque contrast agent (CA) through tissue, enabling the quantification of tissue physiology in organs and tumors. The concentration of iodine in the tissue and in the left ventricle is extracted as a function of time and is fit to a compartmental model for physiologic parameter estimation. The reproducibility of the physiologic parameters depends on (1) the image-sampling rate (according to our simulations, 5-second sampling is required for CA injection rates of 1.0 ml/min) and (2) how faithfully the compartmental model reflects the real tissue function, which it must do to give meaningful results. To verify these limits, a functional CT study was carried out in a group of 3 mice. Dynamic CT scans were performed on all the mice with 0.5 ml/min, 1 ml/min, and 2 ml/min CA injection rates. The physiologic parameters were extracted using 4-parameter and 6-parameter two-compartmental models (2CM). Single-factor ANOVA did not indicate a significant difference in perfusion in the kidneys for the different injection rates. The physiologic parameters obtained using the 6-parameter 2CM were in line with literature values, and the 6-parameter model significantly improved the chi-square goodness of fit in two cases.
Comparison of measured and computed phase functions of individual tropospheric ice crystals
NASA Astrophysics Data System (ADS)
Stegmann, Patrick G.; Tropea, Cameron; Järvinen, Emma; Schnaiter, Martin
2016-07-01
Airplanes passing through the incus (lat. anvil) regions of tropical cumulonimbus clouds are at risk of suffering an engine power-loss event and engine damage due to ice ingestion (Mason et al., 2006 [1]). Research in this field relies on optical measurement methods to characterize ice crystals; however, the design and implementation of such methods presently suffer from the lack of reliable and efficient means of predicting the light scattering from ice crystals. The nascent discipline of direct measurement of phase functions of ice crystals, in conjunction with particle imaging and forward modelling through geometrical-optics and transition-matrix codes, for the first time allows us to obtain a deeper understanding of the optical properties of real tropospheric ice crystals. In this manuscript, a sample phase function obtained via the Particle Habit Imaging and Polar Scattering (PHIPS) probe during a measurement campaign in flight over Brazil is compared to three different light scattering codes: a newly developed first-order geometrical optics code taking into account the influence of the Gaussian beam illumination used in the PHIPS device, the reference ray tracing code of Macke, and the T-matrix code of Kahnert.
Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang
2014-01-01
The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, each with a large number of sequences. As such, metagenomic analyses require extensive computational power, and the increasing analytical requirements further augment the challenges of computational analysis. In this work, we propose Parallel-META 2.0, a metagenomic analysis software package, to meet the need for efficient and fast analyses of taxonomical and functional structures of microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0: it enhances the taxonomical analysis using multiple databases, improves computational efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis of metagenomic samples, including short-read assembly, gene prediction and functional annotation. It can therefore provide accurate taxonomical and functional analyses of metagenomic samples in a high-throughput manner and on a large scale.
Training Older Adults to Use Tablet Computers: Does It Enhance Cognitive Function?
Chan, Micaela Y.; Haber, Sara; Drew, Linda M.; Park, Denise C.
2016-01-01
Purpose of the Study: Recent evidence shows that engaging in learning new skills improves episodic memory in older adults. In this study, older adults who were computer novices were trained to use a tablet computer and associated software applications. We hypothesize that sustained engagement in this mentally challenging training would yield a dual benefit of improved cognition and enhancement of everyday function by introducing useful skills. Design and Methods: A total of 54 older adults (age 60-90) committed 15 hr/week for 3 months. Eighteen participants received extensive iPad training, learning a broad range of practical applications. The iPad group was compared with 2 separate controls: a Placebo group that engaged in passive tasks requiring little new learning; and a Social group that had regular social interaction, but no active skill acquisition. All participants completed the same cognitive battery pre- and post-engagement. Results: Compared with both controls, the iPad group showed greater improvements in episodic memory and processing speed but did not differ in mental control or visuospatial processing. Implications: iPad training improved cognition relative to engaging in social or nonchallenging activities. Mastering relevant technological devices has the added advantage of providing older adults with technological skills useful in facilitating everyday activities (e.g., banking). This work informs the selection of targeted activities for future interventions and community programs. PMID:24928557
Computational modeling of heterogeneity and function of CD4+ T cells
Carbo, Adria; Hontecillas, Raquel; Andrew, Tricity; Eden, Kristin; Mei, Yongguo; Hoops, Stefan; Bassaganya-Riera, Josep
2014-01-01
The immune system is composed of many different cell types and hundreds of intersecting molecular pathways and signals. This large biological complexity requires coordination between distinct pro-inflammatory and regulatory cell subsets to respond to infection while maintaining tissue homeostasis. CD4+ T cells play a central role in orchestrating immune responses and in maintaining a balance between pro- and anti-inflammatory responses. This tight balance between regulatory and effector reactions depends on the ability of CD4+ T cells to modulate distinct pathways within large molecular networks, since dysregulated CD4+ T cell responses may result in chronic inflammatory and autoimmune diseases. The CD4+ T cell differentiation process comprises an intricate interplay between cytokines, their receptors, adaptor molecules, signaling cascades and transcription factors that help delineate cell fate and function. Computational modeling can help to describe, simulate, analyze, and predict some of the behaviors in this complicated differentiation network. This review provides a comprehensive overview of existing computational immunology methods as well as novel strategies used to model immune responses with a particular focus on CD4+ T cell differentiation. PMID:25364738
Computation of the response functions of spiral waves in active media.
Biktasheva, I V; Barkley, D; Biktashev, V N; Bordyugov, G V; Foulkes, A J
2009-05-01
Rotating spiral waves are a form of self-organization observed in spatially extended systems of physical, chemical, and biological natures. A small perturbation causes gradual change in the spatial location of the spiral's rotation center and frequency, i.e., drift. The response functions (RFs) of a spiral wave are the eigenfunctions of the adjoint linearized operator corresponding to the critical eigenvalues λ = 0, ±iω. The RFs describe the spiral's sensitivity to small perturbations, in the sense that a spiral is insensitive to small perturbations where its RFs are close to zero. The velocity of a spiral's drift is proportional to the convolution of the RFs with the perturbation. Here we develop a regular and generic method of computing the RFs of stationary rotating spirals in reaction-diffusion equations. We demonstrate the method on the FitzHugh-Nagumo system and also show convergence of the method with respect to the computational parameters, i.e., discretization steps and size of the medium. The obtained RFs are localized at the spiral's core.
Snyder, Abigail C.; Jiao, Yu
2010-10-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
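The scheme this abstract describes, building a four-dimensional integrator by nesting one-dimensional solvers, can be sketched in a few lines. The sketch below is illustrative only: a composite Simpson rule stands in for the modified GSL solvers, and the test integrand is a toy function, not the SNS intensity integrand.

```python
def simpson(f, a, b, n=8):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def integrate4d(f, bounds, n=8):
    """Evaluate a 4D integral by nesting four 1D Simpson rules.

    bounds: list of four (lo, hi) pairs, one per dimension.
    Cost grows as n**4 integrand evaluations, which is why a
    parallelized version becomes attractive at scale.
    """
    (ax, bx), (ay, by), (az, bz), (aw, bw) = bounds
    return simpson(
        lambda x: simpson(
            lambda y: simpson(
                lambda z: simpson(
                    lambda w: f(x, y, z, w), aw, bw, n),
                az, bz, n),
            ay, by, n),
        ax, bx, n)

# Example: integral of x*y*z*w over the unit 4-cube is (1/2)^4 = 0.0625.
value = integrate4d(lambda x, y, z, w: x * y * z * w, [(0.0, 1.0)] * 4)
```

The quartic growth in evaluations with per-axis resolution is the practical argument for quasi-Monte Carlo alternatives mentioned in the abstract.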
Magesh, R; George Priya Doss, C
2014-12-01
Ornithine transcarbamylase (OTC) (E.C. 2.1.3.3) is one of the enzymes of the urea cycle, a sequence of reactions in liver cells. During protein assimilation, surplus nitrogen is produced in the body; this free, toxic nitrogen is converted into urea, which is expelled by the kidneys, and OTC assists in that conversion. Ornithine transcarbamylase deficiency (OTCD; OMIM #311250) is triggered by mutation in the OTC gene. To date, more than 200 mutations have been noted. Mutation in the OTC gene alters enzyme production, which impairs the ability to carry out the chemical reaction. A computational analysis was initiated to identify the deleterious nsSNPs of the OTC gene that cause OTCD, using five different computational tools: SIFT, PolyPhen 2, I-Mutant 3, SNPs&Go, and PhD-SNP. Studies on the molecular basis of the OTC gene and OTCD have so far been only partial. Hence, in silico categorization of functional SNPs in the OTC gene can provide valuable insight in the near future for the diagnosis and treatment of OTCD.
Functional near-infrared spectroscopy for adaptive human-computer interfaces
NASA Astrophysics Data System (ADS)
Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.
2015-03-01
We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work [1] that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
Aguilera-Pesantes, Daniel; Méndez, Miguel A
2017-02-08
While Zika virus (ZIKV) outbreaks are a growing concern for global health, a deep understanding of the virus is lacking. Here we report a contribution to the basic science on the virus: a detailed computational analysis of the non-structural protein NS2b. This protein acts as a cofactor for the NS3 protease (NS3Pro) domain, which is important in the viral life cycle and is an interesting target for drug development. We found that the ZIKV NS2b cofactor is highly similar to those of other viruses within the Flavivirus genus, especially West Nile Virus, suggesting that it is completely necessary for the protease complex activity. Furthermore, ZIKV NS2b has an important role in the function and stability of the ZIKV NS3 protease domain even though it presents a low conservation score. In addition, ZIKV NS2b is mostly rigid, which could imply a non-dynamic nature in substrate recognition. Finally, by performing a computational alanine scanning mutagenesis, we found that residues Gly 52 and Asp 83 in NS2b could be important in substrate recognition.
Smallwood, D.O.
1995-08-07
It is shown that the usual method for computing the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross spectral density matrix of the inputs and outputs. The modified form of Cholesky decomposition used is G_zz = LCL′, where G is the cross spectral density matrix of inputs and outputs, L is a lower triangular matrix with ones on the diagonal, C is a diagonal matrix, and the symbol ′ denotes the conjugate transpose. If a diagonal element of C is zero, the off-diagonal elements in the corresponding column of L are set to zero. It is shown that the results can be equivalently obtained using singular value decomposition (SVD) of G_zz. The formulation as an SVD problem suggests a way to order the inputs when a natural physical order of the inputs is absent.
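The G_zz = LCL′ factorization above is the standard LDL′ variant of Cholesky for a Hermitian matrix. A minimal pure-Python sketch (not Smallwood's code; it includes the report's rule of zeroing a column of L when the corresponding diagonal element of C vanishes):

```python
def ldl_hermitian(G):
    """Modified Cholesky G = L C L' for a Hermitian matrix G.

    L is unit lower triangular, C is diagonal, and ' denotes the
    conjugate transpose.  A (near-)zero diagonal element of C zeroes
    the off-diagonal entries of the corresponding column of L.
    """
    n = len(G)
    L = [[1.0 + 0j if i == j else 0j for j in range(n)] for i in range(n)]
    C = [0j] * n
    for j in range(n):
        # Diagonal entry: remove contributions of earlier columns.
        C[j] = G[j][j] - sum(L[j][k] * L[j][k].conjugate() * C[k]
                             for k in range(j))
        for i in range(j + 1, n):
            if abs(C[j]) < 1e-12:
                L[i][j] = 0j          # rank-deficient column rule
            else:
                s = sum(L[i][k] * L[j][k].conjugate() * C[k]
                        for k in range(j))
                L[i][j] = (G[i][j] - s) / C[j]
    return L, C

# Example: a small Hermitian (here real symmetric) spectral-density-like
# matrix; reconstructing L C L' recovers G.
L, C = ldl_hermitian([[4 + 0j, 2 + 0j], [2 + 0j, 3 + 0j]])
```

For a cross spectral density matrix at one frequency line, applying this factorization row by row is exactly the recursive conditioning that produces the partial coherences.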
Contreras-García, J; Pendás, A Martín; Recio, J M; Silvi, B
2009-01-13
We present a novel computational procedure, general, automated, and robust, for the analysis of local and global properties of the electron localization function (ELF) in crystalline solids. Our algorithm successfully faces the two main shortcomings of ELF analysis in crystals: (i) the automated identification and characterization of the ELF-induced topology in periodic systems, which is impeded by the great number and concentration of critical points in crystalline cells, and (ii) the localization of the zero-flux surfaces and subsequent integration of basins, whose difficulty is due to the diverse (on many occasions very flat or very steep) ELF profiles connecting the set of critical points. Application of the new code to representative crystals exhibiting different bonding patterns is carried out in order to show the performance of the algorithm and the conceptual possibilities offered by the complete characterization of the ELF topology in solids.
Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F. A.; Brudvig, Gary W.; Crabtree, Robert H.; Schmuttenmaer, Charles A.; Batista, Victor S.
2015-11-03
Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.
de Almeida, Licurgo; Reiner, Seungdo J; Ennis, Matthew; Linster, Christiane
2015-01-01
Noradrenergic modulation from the locus coeruleus is often associated with the regulation of sensory signal-to-noise ratio. In the olfactory system, noradrenergic modulation affects both bulbar and cortical processing, and has been shown to modulate the detection of low concentration stimuli. We here implemented a computational model of the olfactory bulb and piriform cortex, based on known experimental results, to explore how noradrenergic modulation in the olfactory bulb and piriform cortex interact to regulate odor processing. We show that, as predicted by behavioral experiments in our lab, norepinephrine can play a critical role in modulating the detection and associative learning of very low odor concentrations. Our simulations show that bulbar norepinephrine serves to pre-process odor representations to facilitate cortical learning, but not recall. We observe the typical non-uniform dose-response functions described for norepinephrine modulation and show that these are imposed mainly by bulbar, but not cortical, processing.
2016-01-01
Covering: 2003 to 2016. The last decade has seen the first major discoveries regarding the genomic basis of plant natural product biosynthetic pathways. Four key computationally driven strategies have been developed to identify such pathways, which make use of physical clustering, co-expression, evolutionary co-occurrence and epigenomic co-regulation of the genes involved in producing a plant natural product. Here, we discuss how these approaches can be used for the discovery of plant biosynthetic pathways encoded by both chromosomally clustered and non-clustered genes. Additionally, we discuss opportunities to prioritize plant gene clusters for experimental characterization, and end with a forward-looking perspective on how synthetic biology technologies will allow effective functional reconstitution of candidate pathways using a variety of genetic systems. PMID:27321668
Localization of functional adrenal tumors by computed tomography and venous sampling
Dunnick, N.R.; Doppman, J.L.; Gill, J.R. Jr.; Strott, C.A.; Keiser, H.R.; Brennan, M.F.
1982-02-01
Fifty-eight patients with functional lesions of the adrenal glands underwent radiographic evaluation. Twenty-eight patients had primary aldosteronism (Conn syndrome), 20 had Cushing syndrome, and 10 had pheochromocytoma. Computed tomography (CT) correctly identified adrenal tumors in 11 (61%) of 18 patients with aldosteronomas, 6 of 6 patients with benign cortisol-producing adrenal tumors, and 5 (83%) of 6 patients with pheochromocytomas. No false-positive diagnoses were encountered among patients with adrenal adenomas. Bilateral adrenal hyperplasia appeared on CT scans as normal or prominent adrenal glands with a normal configuration; however, CT was not able to exclude the presence of small adenomas. Adrenal venous sampling was correct in each case, and reliably distinguished adrenal tumors from hyperplasia. Recurrent pheochromocytomas were the most difficult to localize on CT due to the surgical changes in the region of the adrenals and the frequent extra-adrenal locations.
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
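The core idea behind NP Windows, that an interpolated continuous signal induces a closed-form PDF, can be illustrated in the simplest setting: a 1D signal under linear interpolation, where the signal between two consecutive samples is uniformly distributed between the two endpoint values. The sketch below is only that special case under our own binning assumptions, not the authors' general analytical formulae:

```python
def np_windows_pdf_1d(samples, bins, lo, hi):
    """PDF estimate for a 1D signal under linear interpolation.

    Between consecutive samples a and b, the linearly interpolated
    signal is uniformly distributed over [min(a,b), max(a,b)], so each
    segment contributes a box density; summing the boxes over all
    segments gives the PDF on a grid of `bins` bins covering [lo, hi].
    """
    width = (hi - lo) / bins
    pdf = [0.0] * bins
    nseg = len(samples) - 1
    for a, b in zip(samples, samples[1:]):
        left, right = min(a, b), max(a, b)
        if right == left:            # flat segment: all its mass in one bin
            k = min(int((left - lo) / width), bins - 1)
            pdf[k] += 1.0 / (nseg * width)
            continue
        dens = 1.0 / (nseg * (right - left))   # uniform box height
        for k in range(bins):
            blo, bhi = lo + k * width, lo + (k + 1) * width
            overlap = max(0.0, min(bhi, right) - max(blo, left))
            pdf[k] += dens * overlap / width
    return pdf

# Example: a single ramp from 0 to 1 yields a uniform density on [0, 1].
pdf = np_windows_pdf_1d([0.0, 1.0], 4, 0.0, 1.0)
```

Note how few samples are needed: a two-sample ramp already yields an exact uniform PDF, which is the data-efficiency property the abstract emphasizes.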
Computing frequency by using generalized zero-crossing applied to intrinsic mode functions
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2006-01-01
This invention presents a method for computing Instantaneous Frequency by applying Empirical Mode Decomposition to a signal and using Generalized Zero-Crossing (GZC) and Extrema Sifting. The GZC approach is the most direct, local, and also the most accurate in the mean. Furthermore, this approach also gives a statistical measure of the scattering of the frequency value. For most practical applications, this mean frequency localized down to a quarter of a wave period is already a well-accepted result. As this method physically measures the period, or part of it, the values obtained can serve as the best local mean over the period to which it applies. Through Extrema Sifting, instead of cubic spline fitting, this invention constructs the upper and lower envelopes by connecting local maxima points and local minima points of the signal with straight lines, respectively, when extracting a collection of Intrinsic Mode Functions (IMFs) from a signal under consideration.
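The flavor of the approach can be conveyed with a minimal zero-crossing frequency estimate. This is a simplified sketch, not the patented GZC computation, which also combines extrema timings and quarter-, half- and full-period measurements into a weighted local mean:

```python
def zero_crossing_freq(signal, dt):
    """Mean frequency from zero crossings of a sampled signal.

    Successive zero crossings are half a period apart, so the local
    frequency over one half-period is 1 / (2 * interval).  Crossing
    times are refined by linear interpolation between samples.
    """
    crossings = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        if a == 0.0:
            crossings.append(i * dt)
        elif a * b < 0.0:            # sign change: interpolate the crossing
            crossings.append((i + a / (a - b)) * dt)
    freqs = [1.0 / (2.0 * (t2 - t1))
             for t1, t2 in zip(crossings, crossings[1:])]
    return sum(freqs) / len(freqs)

# Example: a 5 Hz sine sampled at 1 kHz for 1 s.
from math import sin, pi
sig = [sin(2 * pi * 5 * i * 0.001) for i in range(1000)]
f_est = zero_crossing_freq(sig, 0.001)
```

Because the method measures periods directly, the individual `freqs` values also provide the scatter statistic mentioned above, at no extra cost.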
On the Exact Evaluation of Certain Instances of the Potts Partition Function by Quantum Computers
NASA Astrophysics Data System (ADS)
Geraci, Joseph; Lidar, Daniel A.
2008-05-01
We present an efficient quantum algorithm for the exact evaluation of either the fully ferromagnetic or anti-ferromagnetic q-state Potts partition function Z for a family of graphs related to irreducible cyclic codes. This problem is related to the evaluation of the Jones and Tutte polynomials. We consider the connection between the weight enumerator polynomial from coding theory and Z and exploit the fact that there exists a quantum algorithm for efficiently estimating Gauss sums in order to obtain the weight enumerator for a certain class of linear codes. In this way we demonstrate that for a certain class of sparse graphs, which we call Irreducible Cyclic Cocycle Code (ICCCɛ) graphs, quantum computers provide a polynomial speed up in the difference between the number of edges and vertices of the graph, and an exponential speed up in q, over the best classical algorithms known to date.
Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; ...
2015-11-03
Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.
Accounting for the environment.
Lutz, E; Munasinghe, M
1991-03-01
Environmental awareness in the 1980s has led to efforts to improve the current UN System of National Accounts (SNA) for better measurement of the value of environmental resources when estimating income. National governments, the UN, the International Monetary Fund, and the World Bank are interested in solving this issue. The World Bank relies heavily on national aggregates in income accounts compiled by means of the SNA, which was published in 1968 and stressed gross domestic product (GDP). GDP measures mainly market activity, but it does not consider the consumption of natural capital, and it thereby indirectly inhibits sustained development. The deficiencies of the current method of accounting are the inconsistent treatment of manmade and natural capital, the omission of natural resources and their depletion from balance sheets, and the omission of pollution cleanup costs from national income. In the calculation of GDP, pollution is overlooked, and beneficial environmental inputs are valued at zero. The calculation of environmentally adjusted net domestic product (EDP) and environmentally adjusted net income (ENI) would lower income and growth rates, as the World Resources Institute found with respect to Indonesia for 1971-84: when depreciation for oil, timber, and topsoil was included, the net domestic product (NDP) grew at only 4%, compared with a GDP growth rate of 7.1%. The World Bank has advocated environmental accounting since 1983 in SNA revisions. The 1989 revised Blue Book of the SNA takes environmental concerns into account. Relevant research is under way in Mexico and Papua New Guinea using the UN Statistical Office framework as a system for environmentally adjusted economic accounts that computes EDP and ENI and integrates environmental data with national accounts while preserving SNA concepts.
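The EDP adjustment described above is simple arithmetic; a toy illustration (all figures invented, not the Indonesian or SNA estimates):

```python
# Environmentally adjusted net domestic product, illustrative figures
# in billions of a notional currency:
gdp = 1000.0
dep_manmade = 100.0      # consumption of fixed (man-made) capital
dep_natural = 60.0       # depletion of oil, timber, topsoil, etc.

ndp = gdp - dep_manmade  # conventional net domestic product
edp = ndp - dep_natural  # environmentally adjusted NDP

# Measured growth based on edp will be lower than growth based on gdp
# whenever natural-capital depletion grows along with output.
```

The Indonesia example in the abstract is exactly this effect applied to growth rates: subtracting natural-capital depreciation cut the measured growth rate from 7.1% to 4%.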
Hathaway, R.M.; McNellis, J.M.
1989-01-01
Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic
Zhou, Peng; Yang, Chao; Ren, Yanrong; Wang, Congcong; Tian, Feifei
2013-12-01
Peptides with antihypertensive potency have long been attractive to the medical and food communities. However, serving as food additives rather than therapeutic agents, peptides should have a good taste. In the present study, we explore the intrinsic relationship between the angiotensin I-converting enzyme (ACE) inhibition and bitterness of short peptides in the framework of computational peptidology, attempting to find the appropriate properties for functional food peptides with satisfactory bioactivities. As might be expected, quantitative structure-activity relationship modeling reveals a significant positive correlation between the ACE inhibition and bitterness of dipeptides, but this correlation is quite modest for tripeptides and, particularly, tetrapeptides. Moreover, quantum mechanics/molecular mechanics analysis of the structural basis and energetic profile involved in ACE-peptide complexes reveals that peptides of up to 4 amino acids in length are sufficient for efficient binding to ACE, and additional residues do not bring a substantial enhancement in ACE-binding affinity and, thus, antihypertensive capability. Taken together, these results suggest that tripeptides and tetrapeptides could be considered ideal candidates in the search for potential functional food additives with both high antihypertensive activity and low bitterness.
Watanabe, H; Honda, E; Kurabayashi, T
2010-01-01
Objectives The aim was to investigate the possibility of evaluating the modulation transfer function (MTF) of cone beam CT (CBCT) for dental use using the oversampling method. Methods The CBCT apparatus (3D Accuitomo) with an image intensifier was used; a 100 μm tungsten wire was placed inside the scanner at a slight angle to the plane perpendicular to the plane of interest and scanned. 200 contiguous reconstructed images were used to obtain the oversampled line-spread function (LSF). The MTF curve was obtained by computing the Fourier transformation of the oversampled LSF. Line pair tests were also performed using Catphan®. Results The oversampling method provided smooth and reproducible MTF curves. The MTF curves revealed that the spatial resolution in the z-axis direction was significantly higher than that in the axial direction. This result was also confirmed by the line pair test. Conclusions MTF analysis was performed successfully using the oversampling method. In addition, this study clarified that the 3D Accuitomo had high spatial resolution, especially in the z-axis direction. PMID:20089741
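The MTF-from-LSF step described above is essentially a normalized Fourier transform magnitude; a sketch under the assumption of a baseline-corrected, oversampled LSF (the Gaussian test LSF, pitch, and names are illustrative, not from the study):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch):
    """MTF as the normalized magnitude of the Fourier transform of an
    (oversampled) line-spread function."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()                  # crude baseline removal
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]           # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch)  # cycles per unit length
    return freqs, mtf

# Synthetic Gaussian LSF sampled at a 0.01 mm oversampled pitch
x = np.arange(-2.0, 2.0, 0.01)
lsf = np.exp(-0.5 * (x / 0.2) ** 2)
freqs, mtf = mtf_from_lsf(lsf, 0.01)
```

Oversampling (here, the fine 0.01 mm pitch obtained in the study from the angled wire across 200 slices) is what pushes the Nyquist limit high enough that the MTF is not corrupted by aliasing.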
NASA Astrophysics Data System (ADS)
Lopez-Encarnacion, Juan M.
2016-06-01
In this talk, the power and synergy of combining experimental measurements with density functional theory computations as a single tool to unambiguously characterize the molecular structure of complex atomic systems are shown. Here, we bring three beautiful cases where the interaction between the experiment and theory is in very good agreement for both finite and extended systems: 1) Characterizing Metal Coordination Environments in Porous Organic Polymers: A Joint Density Functional Theory and Experimental Infrared Spectroscopy Study; 2) Characterization of Rhenium Compounds Obtained by Electrochemical Synthesis After Aging Process; and 3) Infrared Study of H(D)2 + Co4+ Chemical Reaction: Characterizing Molecular Structures. J.M. López-Encarnación, K.K. Tanabe, M.J.A. Johnson, J. Jellinek, Chemistry-A European Journal 19 (41), 13646-13651; A. Vargas-Uscategui, E. Mosquera, J.M. López-Encarnación, B. Chornik, R. S. Katiyar, L. Cifuentes, Journal of Solid State Chemistry 220, 17-21
The Auditing of Computerized Accounting Systems.
ERIC Educational Resources Information Center
Skudrna, Vincent J.
1982-01-01
Describes an investigation undertaken to indicate the curricular content (knowledge and skills) needed to prepare the accounting student to audit computerized accounting systems. Areas studied included programing languages, data processing, desired course training, and computer audit techniques. (CT)
Brain-Computer Interface Controlled Functional Electrical Stimulation System for Ankle Movement
2011-01-01
Background Many neurological conditions, such as stroke, spinal cord injury, and traumatic brain injury, can cause chronic gait function impairment due to foot-drop. Current physiotherapy techniques provide only a limited degree of motor function recovery in these individuals, and therefore novel therapies are needed. Brain-computer interface (BCI) is a relatively novel technology with a potential to restore, substitute, or augment lost motor behaviors in patients with neurological injuries. Here, we describe the first successful integration of a noninvasive electroencephalogram (EEG)-based BCI with a noninvasive functional electrical stimulation (FES) system that enables the direct brain control of foot dorsiflexion in able-bodied individuals. Methods A noninvasive EEG-based BCI system was integrated with a noninvasive FES system for foot dorsiflexion. Subjects underwent computer-cued epochs of repetitive foot dorsiflexion and idling while their EEG signals were recorded and stored for offline analysis. The analysis generated a prediction model that allowed EEG data to be analyzed and classified in real time during online BCI operation. The real-time online performance of the integrated BCI-FES system was tested in a group of five able-bodied subjects who used repetitive foot dorsiflexion to elicit BCI-FES mediated dorsiflexion of the contralateral foot. Results Five able-bodied subjects performed 10 alternations of idling and repetitive foot dorsiflexion to trigger BCI-FES mediated dorsiflexion of the contralateral foot. The epochs of BCI-FES mediated foot dorsiflexion were highly correlated with the epochs of voluntary foot dorsiflexion (correlation coefficient ranged between 0.59 and 0.77) with latencies ranging from 1.4 sec to 3.1 sec. In addition, all subjects achieved a 100% BCI-FES response (no omissions), and one subject had a single false alarm. Conclusions This study suggests that the integration of a noninvasive BCI with a lower-extremity FES system is
Ni, Pengsheng; McDonough, Christine M.; Jette, Alan M.; Bogusz, Kara; Marfeo, Elizabeth E.; Rasch, Elizabeth K.; Brandt, Diane E.; Meterko, Mark; Chan, Leighton
2014-01-01
Objectives To develop and test an instrument to assess physical function (PF) for Social Security Administration (SSA) disability programs, the SSA-PF. Item Response Theory (IRT) analyses were used to 1) create a calibrated item bank for each of the factors identified in prior factor analyses, 2) assess the fit of the items within each scale, 3) develop separate Computer-Adaptive Test (CAT) instruments for each scale, and 4) conduct initial psychometric testing. Design Cross-sectional data collection; IRT analyses; CAT simulation. Setting Telephone and internet survey. Participants Two samples: 1,017 SSA claimants, and 999 adults from the US general population. Interventions None. Main Outcome Measures Model fit statistics, correlation and reliability coefficients. Results IRT analyses resulted in five unidimensional SSA-PF scales: Changing & Maintaining Body Position, Whole Body Mobility, Upper Body Function, Upper Extremity Fine Motor, and Wheelchair Mobility for a total of 102 items. High CAT accuracy was demonstrated by strong correlations between simulated CAT scores and those from the full item banks. Comparing the simulated CATs to the full item banks, very little loss of reliability or precision was noted, except at the lower and upper ranges of each scale. No difference in response patterns by age or sex was noted. The distributions of claimant scores were shifted to the lower end of each scale compared to those of a sample of US adults. Conclusions The SSA-PF instrument contributes important new methodology for measuring the physical function of adults applying to the SSA disability programs. Initial evaluation revealed that the SSA-PF instrument achieved considerable breadth of coverage in each content domain and demonstrated noteworthy psychometric properties. PMID:23578594
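The item-selection step at the heart of a CAT like the one simulated above can be sketched with the standard 2PL maximum-information rule (a generic textbook criterion; the item bank and names here are invented, not the SSA-PF's):

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability of a correct/positive response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def next_item(theta, items, used):
    """Pick the unused item with maximum Fisher information at the
    current ability estimate theta: I(theta) = a^2 * P * (1 - P)."""
    def info(item):
        a, b = item
        p = p_2pl(theta, a, b)
        return a * a * p * (1.0 - p)
    candidates = [i for i in range(len(items)) if i not in used]
    return max(candidates, key=lambda i: info(items[i]))

# Toy bank of (discrimination a, difficulty b) pairs
bank = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.0)]
choice = next_item(theta=0.0, items=bank, used=set())  # most informative at theta=0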
Indices of cognitive function measured in rugby union players using a computer-based test battery.
MacDonald, Luke A; Minahan, Clare L
2016-09-01
The purpose of this study was to investigate the intra- and inter-day reliability of cognitive performance using a computer-based test battery in team-sport athletes. Eighteen elite male rugby union players (age: 19 ± 0.5 years) performed three experimental trials (T1, T2 and T3) of the test battery: T1 and T2 on the same day, and T3 on the following day, 24 h later. The test battery comprised four cognitive tests assessing the cognitive domains of executive function (Groton Maze Learning Task), psychomotor function (Detection Task), vigilance (Identification Task), and visual learning and memory (One Card Learning Task). The intraclass correlation coefficients (ICCs) for the Detection Task, the Identification Task and the One Card Learning Task performance variables ranged from 0.75 to 0.92 when comparing T1 to T2 to assess intraday reliability, and 0.76 to 0.83 when comparing T1 and T3 to assess inter-day reliability. The ICCs for the Groton Maze Learning Task intra- and inter-day reliability were 0.67 and 0.57, respectively. We concluded that the Detection Task, the Identification Task and the One Card Learning Task are reliable measures of psychomotor function, vigilance, visual learning and memory in rugby union players. The reliability of the Groton Maze Learning Task is questionable (mean coefficient of variation (CV) = 19.4%) and, therefore, results should be interpreted with caution.
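The ICCs reported above come from a subjects-by-trials table; a sketch of ICC(2,1) via the standard Shrout-Fleiss mean-squares decomposition (the athlete scores below are invented, and the study may have used a different ICC form):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement, from the classic mean-squares decomposition.
    `ratings` is an (n_subjects, k_trials) array."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-trial means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # trials
    sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two trials (e.g. T1 vs T2) for six hypothetical athletes
scores = np.array([[10, 11], [12, 12], [9, 10], [15, 14], [8, 9], [13, 13]])
icc = icc_2_1(scores)   # close repeat scores -> high reliability
```

With nearly identical trial-to-trial scores and clear between-subject spread, the ICC lands near the 0.75-0.92 range the study calls reliable.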
Shalaumova, Yu V; Varaksin, A N; Panov, V G
2016-01-01
We analyzed how accounting for concomitant variables (confounders), which introduce a systematic error into the assessment of the impact of risk factors on the outcome variable, affects that assessment. The analysis showed that standardization is an effective method for reducing the bias of risk estimates. We suggest an algorithm implementing the method of standardization based on stratification, which minimizes the difference between the distributions of confounders across risk-factor groups. To automate the standardization procedure, software was developed, available on the website of the Institute of Industrial Ecology, UB RAS. Using this software, numerical modeling determined the conditions of applicability of stratification-based standardization for the case of a normally distributed response and confounder with a linear relationship between them. Comparison of the results obtained by standardization with those of statistical methods (logistic regression and analysis of covariance) in solving a problem of human ecology showed that close results are obtained only when the conditions of applicability of the statistical methods are met exactly. Standardization is less sensitive to violations of its conditions of applicability.
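The core of stratification-based standardization is a weighted average of stratum-specific outcome means under a fixed "standard" confounder distribution; a minimal sketch (function name, strata, and toy data are illustrative, not the institute's software):

```python
import numpy as np

def standardized_mean(outcome, stratum_labels, standard_weights):
    """Direct standardization: average stratum-specific outcome means
    using a fixed standard distribution of the confounder, so that
    groups with different confounder distributions become comparable."""
    outcome = np.asarray(outcome, dtype=float)
    labels = np.asarray(stratum_labels)
    total = 0.0
    for stratum, weight in standard_weights.items():
        total += weight * outcome[labels == stratum].mean()
    return total

# Hypothetical exposed group: outcome by age stratum (the confounder)
outcome = np.array([1.0, 1.2, 0.8, 2.0, 2.4, 2.2])
stratum = np.array(['young', 'young', 'young', 'old', 'old', 'old'])
# Standard population with a 50/50 age split
adj = standardized_mean(outcome, stratum, {'young': 0.5, 'old': 0.5})
```

Applying the same standard weights to both the exposed and unexposed groups removes the part of the group difference that is due only to differing age (confounder) distributions.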
da Silva, Silvia Maria Doria; Paschoal, Ilma Aparecida; De Capitani, Eduardo Mello; Moreira, Marcos Mello; Palhares, Luciana Campanatti; Pereira, Mônica Corso
2016-01-01
Background Computed tomography (CT) phenotypic characterization helps in understanding the clinical diversity of chronic obstructive pulmonary disease (COPD) patients, but its clinical relevance and its relationship with functional features are not clarified. Volumetric capnography (VC) uses the principle of gas washout and analyzes the pattern of CO2 elimination as a function of expired volume. The main variables analyzed were end-tidal concentration of carbon dioxide (ETCO2), Slope of phase 2 (Slp2), and Slope of phase 3 (Slp3) of capnogram, the curve which represents the total amount of CO2 eliminated by the lungs during each breath. Objective To investigate, in a group of patients with severe COPD, if the phenotypic analysis by CT could identify different subsets of patients, and if there was an association of CT findings and functional variables. Subjects and methods Sixty-five patients with COPD Gold III–IV were admitted for clinical evaluation, high-resolution CT, and functional evaluation (spirometry, 6-minute walk test [6MWT], and VC). The presence and profusion of tomography findings were evaluated, and later, the patients were identified as having emphysema (EMP) or airway disease (AWD) phenotype. EMP and AWD groups were compared; tomography findings scores were evaluated versus spirometric, 6MWT, and VC variables. Results Bronchiectasis was found in 33.8% and peribronchial thickening in 69.2% of the 65 patients. Structural findings of airways had no significant correlation with spirometric variables. Air trapping and EMP were strongly correlated with VC variables, but in opposite directions. There was some overlap between the EMP and AWD groups, but EMP patients had significantly lower body mass index, worse obstruction, and shorter walked distance on 6MWT. Concerning VC, EMP patients had significantly lower ETCO2, Slp2 and Slp3. Increases in Slp3 characterize heterogeneous involvement of the distal air spaces, as in AWD. Conclusion Visual assessment and
NASA Astrophysics Data System (ADS)
Sutter, Kiplangat
This thesis illustrates the utilization of density functional theory (DFT) in calculations of gas- and solution-phase nuclear magnetic resonance (NMR) properties of light and heavy nuclei. Computing NMR properties is still a challenge, and many unknown factors are still being explored: for instance, the influence of hydrogen bonding, thermal motion, vibration, rotation, and solvent effects. In the theoretical studies of the 195Pt NMR chemical shift in cisplatin and its derivatives presented in Chapters 2 and 3 of this thesis, the importance of representing solvent molecules explicitly around the Pt center in cisplatin complexes was outlined. In the same complexes, solvent effects contributed about half of the J(Pt-N) coupling constant, indicating the significance of considering the surrounding solvent molecules in elucidating NMR measurements of cisplatin binding to DNA. In Chapter 4, we explore the spin-orbit (SO) effects on the 29Si and 13C chemical shifts induced by surrounding metal and ligands. The unusual Ni, Pd, Pt trends in SO effects on the 29Si in metallasilatrane complexes X-Si-(mu-mt)4-M-Y were interpreted on the basis of electronic and relativistic effects rather than structural differences between the complexes. In addition, we develop a non-linear model for predicting NMR SO effects in a series of organics bonded to heavy-nuclei halides. In Chapter 5, we extend the idea of "Chemist's orbitals" LMO analysis to the quantum chemical proton NMR computation of systems with internal resonance-assisted hydrogen bonds. Consequently, we explicitly link the NMR parameters of H-bonded systems to the intuitive picture of a chemical bond from quantum calculations. The analysis shows how NMR signatures characteristic of an H-bond can be explained by local bonding and electron delocalization concepts. One shortcoming of some anti-cancer agents like cisplatin is that they are toxic, and researchers are looking for
NASA Astrophysics Data System (ADS)
Huang, X.; Gurrola, H.
2013-12-01
methods. All of these methods performed well in terms of stdev, but we chose ARU for its high-quality data and low signal-to-noise ratios (the average S/N ratio for these data was 4%). With real data, we tend to assume the method that has the lowest stdev is the best, but stdev does not account for a systematic bias toward incorrect values. In this case the LSD once again had the lowest stdev in computed amplitudes of Pds phases, but it also had the smallest values. The FID, FWLD and MID tended to produce the largest amplitudes, while the LSD and TID tended toward the lower amplitudes. Considering that in the synthetics all these methods showed a bias toward low amplitude, we believe that with real data the methods producing the largest amplitudes will be closest to the 'true values', and that this is a better measure of the better method than a small stdev in amplitude estimates. We will also present results for applying the TID and FID methods to the production of PP and SS precursor functions. When applied to these data, it is possible to moveout-correct the cross-correlation functions before extracting the signal from each PdP (or SdS) phase. As a result a much cleaner Earth function is produced and frequency content is significantly improved.
Response functions for computing absorbed dose to skeletal tissues from neutron irradiation.
Bahadori, Amir A; Johnson, Perry; Jokisch, Derek W; Eckerman, Keith F; Bolch, Wesley E
2011-11-07
Spongiosa in the adult human skeleton consists of three tissues-active marrow (AM), inactive marrow (IM) and trabecularized mineral bone (TB). AM is considered to be the target tissue for assessment of both long-term leukemia risk and acute marrow toxicity following radiation exposure. The total shallow marrow (TM(50)), defined as all tissues lying within the first 50 µm of the bone surfaces, is considered to be the radiation target tissue of relevance for radiogenic bone cancer induction. For irradiation by sources external to the body, kerma to homogeneous spongiosa has been used as a surrogate for absorbed dose to both of these tissues, as direct dose calculations are not possible using computational phantoms with homogenized spongiosa. Recent micro-CT imaging of a 40 year old male cadaver has allowed for the accurate modeling of the fine microscopic structure of spongiosa in many regions of the adult skeleton (Hough et al 2011 Phys. Med. Biol. 56 2309-46). This microstructure, along with associated masses and tissue compositions, was used to compute specific absorbed fraction (SAF) values for protons originating in axial and appendicular bone sites (Jokisch et al 2011 Phys. Med. Biol. 56 6857-72). These proton SAFs, bone masses, tissue compositions and proton production cross sections, were subsequently used to construct neutron dose-response functions (DRFs) for both AM and TM(50) targets in each bone of the reference adult male. Kerma conditions were assumed for other resultant charged particles. For comparison, AM, TM(50) and spongiosa kerma coefficients were also calculated. At low incident neutron energies, AM kerma coefficients for neutrons correlate well with values of the AM DRF, while total marrow (TM) kerma coefficients correlate well with values of the TM(50) DRF. At high incident neutron energies, all kerma coefficients and DRFs tend to converge as charged-particle equilibrium is established across the bone site. In the range of 10 eV to 100 Me
Ebner, Christian; Schroll, Henning; Winther, Gesche; Niedeggen, Michael; Hamker, Fred H
2015-09-01
How the brain decides which information to process 'consciously' has been debated for decades without a simple explanation at hand. While most experiments manipulate the perceptual energy of presented stimuli, the distractor-induced blindness task is a prototypical paradigm to investigate gating of information into consciousness without or with only minor visual manipulation. In this paradigm, subjects are asked to report intervals of coherent dot motion in a rapid serial visual presentation (RSVP) stream, whenever these are preceded by a particular color stimulus in a different RSVP stream. If distractors (i.e., intervals of coherent dot motion prior to the color stimulus) are shown, subjects' abilities to perceive and report intervals of target dot motion decrease, particularly with short delays between intervals of target color and target motion. We propose a biologically plausible neuro-computational model of how the brain controls access to consciousness to explain how distractor-induced blindness originates from information processing in the cortex and basal ganglia. The model suggests that conscious perception requires reverberation of activity in cortico-subcortical loops and that basal-ganglia pathways can either allow or inhibit this reverberation. In the distractor-induced blindness paradigm, inadequate distractor-induced response tendencies are suppressed by the inhibitory 'hyperdirect' pathway of the basal ganglia. If a target follows such a distractor closely, temporal aftereffects of distractor suppression prevent target identification. The model reproduces experimental data on how delays between target color and target motion affect the probability of target detection.
Ramanantoanina, Harry; Sahnoun, Mohammed; Barbiero, Andrea; Ferbinteanu, Marilena; Cimpoesu, Fanica
2015-07-28
Ligand field density functional theory (LFDFT) is a methodology consisting of non-standard handling of DFT calculations and post-computation analysis, emulating the ligand field parameters in a non-empirical way. Recently, the procedure was extended for two-open-shell systems, with relevance for inter-shell transitions in lanthanides, of utmost importance in understanding the optical and magnetic properties of rare-earth materials. Here, we expand the model to the calculation of intensities of f → d transitions, enabling the simulation of spectral profiles. We focus on Eu(2+)-based systems: this lanthanide ion undergoes many dipole-allowed transitions from the initial 4f(7)((8)S7/2) state to the final 4f(6)5d(1) ones, considering the free ion and doped materials. The relativistic calculations showed a good agreement with experimental data for a gaseous Eu(2+) ion, producing reliable Slater-Condon and spin-orbit coupling parameters. The Eu(2+) ion-doped fluorite-type lattices, CaF2:Eu(2+) and SrCl2:Eu(2+), in sites with octahedral symmetry, are studied in detail. The related Slater-Condon and spin-orbit coupling parameters from the doped materials are compared to those for the free ion, revealing small changes for the 4f shell side and relatively important shifts for those associated with the 5d shell. The ligand field scheme, in Wybourne parameterization, shows a good agreement with the phenomenological interpretation of the experiment. The non-empirical computed parameters are used to calculate the energy and intensity of the 4f(7)-4f(6)5d(1) transitions, rendering a realistic convoluted spectrum.
Evaluation of pulmonary function using single-breath-hold dual-energy computed tomography with xenon
Kyoyama, Hiroyuki; Hirata, Yusuke; Kikuchi, Satoshi; Sakai, Kosuke; Saito, Yuriko; Mikami, Shintaro; Moriyama, Gaku; Yanagita, Hisami; Watanabe, Wataru; Otani, Katharina; Honda, Norinari; Uematsu, Kazutsugu
2017-01-01
Abstract Xenon-enhanced dual-energy computed tomography (xenon-enhanced CT) can provide lung ventilation maps that may be useful for assessing structural and functional abnormalities of the lung. Xenon-enhanced CT has been performed using a multiple-breath-hold technique during xenon washout. We recently developed xenon-enhanced CT using a single-breath-hold technique to assess ventilation. We sought to evaluate whether xenon-enhanced CT using a single-breath-hold technique correlates with pulmonary function testing (PFT) results. Twenty-six patients, including 11 chronic obstructive pulmonary disease (COPD) patients, underwent xenon-enhanced CT and PFT. Three of the COPD patients underwent xenon-enhanced CT before and after bronchodilator treatment. Images from xenon-CT were obtained by dual-source CT during a breath-hold after a single vital-capacity inspiration of a xenon–oxygen gas mixture. Image postprocessing by 3-material decomposition generated conventional CT and xenon-enhanced images. Low-attenuation areas on xenon images matched low-attenuation areas on conventional CT in 21 cases but matched normal-attenuation areas in 5 cases. Volumes of Hounsfield unit (HU) histograms of xenon images correlated moderately and highly with vital capacity (VC) and total lung capacity (TLC), respectively (r = 0.68 and 0.85). Means and modes of histograms weakly correlated with VC (r = 0.39 and 0.38), moderately with forced expiratory volume in 1 second (FEV1) (r = 0.59 and 0.56), weakly with the ratio of FEV1 to FVC (r = 0.46 and 0.42), and moderately with the ratio of FEV1 to its predicted value (r = 0.64 and 0.60). Mode and volume of histograms increased in 2 COPD patients after the improvement of FEV1 with bronchodilators. Inhalation of xenon gas caused no adverse effects. Xenon-enhanced CT using a single-breath-hold technique depicted functional abnormalities not detectable on thin-slice CT. Mode, mean, and volume of HU histograms of xenon images
2007-07-01
functions for the TRIPS compiler. The experiments were executed on the Rose-Hulman Institute of Technology Beowulf cluster. The primary metric...parameter for this benchmark • Implemented a parallel version of Finch on a Beowulf cluster using the Message Passing Interface (MPI) • Completed a 17... "Beowulf" Linux cluster (brain.rose-hulman.edu). However, the Beowulf cluster does not provide NFS and PBS, so it was also necessary to modify
Takx, Richard A P; Vliegenthart, Rozemarijn; Schoepf, U Joseph; Abro, Joseph A; Nance, John W; Ebersberger, Ullrich; Bamberg, Fabian; Carr, Christine M; Apfaltrer, Paul
2016-02-01
Blacks have higher mortality and hospitalization rates because of congestive heart failure compared with white counterparts. Differences in cardiac structure and function may contribute to the racial disparity in cardiovascular outcomes. Our aim was to compare computed tomography (CT)-derived cardiac measurements between black patients with acute chest pain and age- and gender-matched white patients. We performed a retrospective analysis under an institutional review board waiver and in Health Insurance Portability and Accountability Act compliance. We investigated patients who underwent cardiac dual-source CT for acute chest pain. Myocardial mass, left ventricular (LV) ejection fraction, LV end-systolic volume, and LV end-diastolic volume were quantified using an automated analysis algorithm. Septal wall thickness and cardiac chamber diameters were manually measured. Measurements were compared by independent t test and linear regression. The study population consisted of 300 patients (150 black: mean age 54 ± 12 years, 46% men; 150 white: mean age 55 ± 11 years, 46% men). Myocardial mass was larger for blacks compared with whites (176.1 ± 58.4 vs 155.9 ± 51.7 g, p = 0.002), which remained significant after adjusting for age, gender, body mass index, and hypertension. Septal wall thickness was slightly greater (11.9 ± 2.7 vs 11.2 ± 3.1 mm, p = 0.036). The LV inner diameter was moderately larger in black patients in systole (32.3 ± 9.0 vs 30.1 ± 5.4 mm, p = 0.010) and in diastole (50.1 ± 7.8 vs 48.9 ± 5.2 mm, p = 0.137), as was LV end-diastolic volume (134.5 ± 42.7 vs 128.2 ± 30.6 ml, p = 0.143). Ejection fraction was nonsignificantly lower in blacks (67.1 ± 13.5% vs 69.0 ± 9.6%, p = 0.169). In conclusion, CT-derived myocardial mass was larger in blacks compared with whites, whereas LV functional parameters were generally not statistically different, suggesting that LV mass might be a possible contributing factor to the higher rate of cardiac events
Lee, Yun; Escamilla-Treviño, Luis; Dixon, Richard A.; Voit, Eberhard O.
2012-01-01
Lignin is a polymer in secondary cell walls of plants that is known to have negative impacts on forage digestibility, pulping efficiency, and sugar release from cellulosic biomass. While targeted modifications of different lignin biosynthetic enzymes have permitted the generation of transgenic plants with desirable traits, such as improved digestibility or reduced recalcitrance to saccharification, some of the engineered plants exhibit monomer compositions that are clearly at odds with the expected outcomes when the biosynthetic pathway is perturbed. In Medicago, such discrepancies were partly reconciled by the recent finding that certain biosynthetic enzymes may be spatially organized into two independent channels for the synthesis of guaiacyl (G) and syringyl (S) lignin monomers. Nevertheless, the mechanistic details, as well as the biological function of these interactions, remain unclear. To decipher the working principles of this and similar control mechanisms, we propose and employ here a novel computational approach that permits an expedient and exhaustive assessment of hundreds of minimal designs that could arise in vivo. Interestingly, this comparative analysis not only helps distinguish two most parsimonious mechanisms of crosstalk between the two channels by formulating a targeted and readily testable hypothesis, but also suggests that the G lignin-specific channel is more important for proper functioning than the S lignin-specific channel. While the proposed strategy of analysis in this article is tightly focused on lignin synthesis, it is likely to be of similar utility in extracting unbiased information in a variety of situations, where the spatial organization of molecular components is critical for coordinating the flow of cellular information, and where initially various control designs seem equally valid. PMID:23144605
NASA Astrophysics Data System (ADS)
Troy, R. M.
2005-12-01
and functions may be integrated into a system efficiently, with minimal effort, and with an eye toward an eventual Computational Unification of the Earth Sciences. A fundamental to such systems is meta-data which describe not only the content of data but also how intricate relationships are represented and used to good advantage. Retrieval techniques will be discussed including trade-offs in using externally managed meta-data versus embedded meta-data, how the two may be integrated, and how "simplifying assumptions" may or may not actually be helpful. The perspectives presented in this talk or poster session are based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, which sought to unify NASA's Mission To Planet Earth's EOS-DIS, and on-going experience developed by Science Tools corporation, of which the author is a principal. NOTE: These ideas are most easily shared in the form of a talk, and we suspect that this session will generate a lot of interest. We would therefore prefer to have this session accepted as a talk as opposed to a poster session.
A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function
Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.
2015-01-01
Purpose This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy adults. The model components included the levator veli palatini (LVP), the velum, and the posterior pharyngeal wall, and the simulations were based on material parameters from the literature. The outcome metrics were the VP closure force and LVP muscle activation required to achieve VP closure. Results Our average model compared favorably with experimental data from the literature. Simulations of 1,000 random anatomies reflected the large variability in closure forces observed experimentally. VP distance had the greatest effect on both outcome metrics when considering the observed anatomic variability. Other anatomical parameters were ranked by their predicted influences on the outcome metrics. Conclusions Our results support the implication that interventions for VP dysfunction that decrease anterior to posterior VP portal distance, increase velar length, and/or increase LVP cross-sectional area may be very effective. Future modeling studies will help to further our understanding of speech mechanics and optimize treatment of speech disorders. PMID:26049120
Blanchet, Marc-Frédérick; St-Onge, Karine; Lisi, Véronique; Robitaille, Julie; Hamel, Sylvie; Major, François
2014-01-01
Anti-infection drugs target vital functions of infectious agents, including their ribosome and other essential non-coding RNAs. One of the reasons infectious agents become resistant to drugs is due to mutations that eliminate drug-binding affinity while maintaining vital elements. Identifying these elements is based on the determination of viable and lethal mutants and associated structures. However, determining the structure of enough mutants at high resolution is not always possible. Here, we introduce a new computational method, MC-3DQSAR, to determine the vital elements of target RNA structure from mutagenesis and available high-resolution data. We applied the method to further characterize the structural determinants of the bacterial 23S ribosomal RNA sarcin–ricin loop (SRL), as well as those of the lead-activated and hammerhead ribozymes. The method was accurate in confirming experimentally determined essential structural elements and predicting the viability of new SRL variants, which were either observed in bacteria or validated in bacterial growth assays. Our results indicate that MC-3DQSAR could be used systematically to evaluate the drug-target potentials of any RNA sites using current high-resolution structural data. PMID:25200082
Hetzroni, Orit E; Tannous, Juman
2004-04-01
This study investigated the use of computer-based intervention for enhancing communication functions of children with autism. The software program was developed based on daily life activities in the areas of play, food, and hygiene. The following variables were investigated: delayed echolalia, immediate echolalia, irrelevant speech, relevant speech, and communicative initiations. Multiple-baseline design across settings was used to examine the effects of the exposure of five children with autism to activities in a structured and controlled simulated environment on the communication manifested in their natural environment. Results indicated that after exposure to the simulations, all children produced fewer sentences with delayed and irrelevant speech. Most of the children engaged in fewer sentences involving immediate echolalia and increased the number of communication intentions and the amount of relevant speech they produced. Results indicated that after practicing in a controlled and structured setting that provided the children with opportunities to interact in play, food, and hygiene activities, the children were able to transfer their knowledge to the natural classroom environment. Implications and future research directions are discussed.
Hu, Haixiang; Zhang, Xin; Ford, Virginia; Luo, Xiao; Qi, Erhui; Zeng, Xuefeng; Zhang, Xuejun
2016-11-14
Edge effect is regarded as one of the most difficult technical issues in a computer controlled optical surfacing (CCOS) process. Traditional opticians have to balance the trade-off between the following two cases. Operating CCOS in a large overhang condition affects the accuracy of material removal, while in a small overhang condition, it achieves a more accurate performance, but leaves a narrow rolled-up edge, which takes time and effort to remove. In order to control the edge residuals in the latter case, we present a new concept of the 'heterocercal' tool influence function (TIF). Generated from compound motion equipment, this type of TIF can 'transfer' the material removal from the inner place to the edge, meanwhile maintaining the high accuracy and efficiency of CCOS. We call it the 'heterocercal' TIF, because of the inspiration from the heterocercal tails of sharks, whose upper lobe provides most of the explosive power. The heterocercal TIF was theoretically analyzed, and physically realized in CCOS facilities. Experimental and simulation results showed good agreement. It enables significant control of the edge effect and convergence of entire surface errors in large tool-to-mirror size-ratio conditions. This improvement will largely help manufacturing efficiency in some extremely large optical system projects, like the tertiary mirror of the Thirty Meter Telescope.
Heintzen, P H; Brennecke, R; Bürsch, J H; Hahne, H J; Lange, P E; Moldenhauer, K; Onnasch, D; Radtke, W
1982-07-01
A survey is given of the evolution of roentgen-video-computer techniques, which was initiated by the development of videodensitometry by Wood and his associates. Following fundamental studies of the usefulness and limitations of x-ray equipment for quantitative measurements and the applicability of the Lambert-Beer law to x-ray absorption, videodensitometry has been used experimentally and clinically for various circulatory studies and has proved to be particularly valuable for the quantitation of aortic, pulmonic, and mitral valvular regurgitation. The second offspring of these techniques, so-called videometry, uses dimensional measurements from single and biplane angiocardiograms for the assessment of size, shape, and contraction pattern of the heart chambers. Volumes of the right and left ventricles can be determined clinically with a standard error of estimate below 10%. On the basis of these studies, normal values have been derived for all age groups, and they depict geometric changes of the growing heart. Cardiac index and ejection fractions proved to be age-independent biologic constants. Finally, methods for complete digital processing of video-image sequences in an off-line and real-time mode are described which allow digital image storage and documentation, dynamic background subtraction for contrast enhancement, and intravenous angiocardiography, in addition to functional imaging by parameter extraction from a matrix of pixel densitograms. Wall thickness and motion determinations, regional flow distribution measurements, and various image-composition techniques are also feasible.
Babkirk, Sarah; Luehring-Jones, Peter; Dennis-Tiwary, Tracy A
2016-12-01
The use of computer-mediated communication (CMC) as a form of social interaction has become increasingly prevalent, yet few studies examine individual differences that may shed light on implications of CMC for adjustment. The current study examined neurocognitive individual differences associated with preferences to use technology in relation to social-emotional outcomes. In Study 1 (N = 91), a self-report measure, the Social Media Communication Questionnaire (SMCQ), was evaluated as an assessment of preferences for communicating positive and negative emotions on a scale ranging from purely via CMC to purely face-to-face. In Study 2, SMCQ preferences were examined in relation to event-related potentials (ERPs) associated with early emotional attention capture and reactivity (the frontal N1) and later sustained emotional processing and regulation (the late positive potential (LPP)). Electroencephalography (EEG) was recorded while 22 participants passively viewed emotional and neutral pictures and completed an emotion regulation task with instructions to increase, decrease, or maintain their emotional responses. A greater preference for CMC was associated with reduced size of and satisfaction with social support, greater early (N1) attention capture by emotional stimuli, and reduced LPP amplitudes to unpleasant stimuli in the increase emotion regulatory task. These findings are discussed in the context of possible emotion- and social-regulatory functions of CMC.
Development of the Computer-Adaptive Version of the Late-Life Function and Disability Instrument
Tian, Feng; Kopits, Ilona M.; Moed, Richard; Pardasaney, Poonam K.; Jette, Alan M.
2012-01-01
Background. Having psychometrically strong disability measures that minimize response burden is important in the assessment of older adults. Methods. Using the original 48 items from the Late-Life Function and Disability Instrument and newly developed items, a 158-item Activity Limitation and a 62-item Participation Restriction item pool were developed. The item pools were administered to a convenience sample of 520 community-dwelling adults 60 years or older. Confirmatory factor analysis and item response theory were employed to identify content structure, calibrate items, and build the computer-adaptive tests (CATs). We evaluated real-data simulations of 10-item CAT subscales. We collected data from 102 older adults to validate the 10-item CATs against the Veteran's Short Form-36 and assessed test–retest reliability in a subsample of 57 subjects. Results. Confirmatory factor analysis revealed a bifactor structure, and multi-dimensional item response theory was used to calibrate an overall Activity Limitation Scale (141 items) and an overall Participation Restriction Scale (55 items). Fit statistics were acceptable (Activity Limitation: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error of approximation = 0.03; Participation Restriction: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error of approximation = 0.05). Correlations of the 10-item CATs with the full item banks were substantial (Activity Limitation: r = .90; Participation Restriction: r = .95). Test–retest reliability estimates were high (Activity Limitation: r = .85; Participation Restriction: r = .80). Strength and pattern of correlations with Veteran's Short Form-36 subscales were as hypothesized. Each CAT, on average, took 3.56 minutes to administer. Conclusions. The Late-Life Function and Disability Instrument CATs demonstrated strong reliability, validity, accuracy, and precision. The Late-Life Function and Disability Instrument CAT can achieve
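The core of any CAT of this kind is an item-selection loop driven by the item response theory calibration: at each step, administer the not-yet-used item that is most informative at the current ability estimate. The sketch below illustrates this under a generic two-parameter logistic (2PL) model; the function names and item parameters are illustrative assumptions, not the instrument's published algorithm:

```python
import math

def p_2pl(theta, a, b):
    """2PL IRT model: probability of endorsing an item with
    discrimination a and difficulty b at ability level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information the item carries at theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """Pick the unadministered item (index) with maximum information
    at the current theta estimate; items is a list of (a, b) pairs."""
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))
```

Selecting the maximally informative item at every step is what allows a 10-item adaptive subscale to track a 141-item bank as closely as the reported r = .90.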
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Truhlar, Donald G.
1990-01-01
The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.
MATERIAL CONTROL ACCOUNTING INMM
Hasty, T.
2009-06-14
Since 1996, the Mining and Chemical Combine (MCC - formerly known as K-26), and the United States Department of Energy (DOE) have been cooperating under the cooperative Nuclear Material Protection, Control and Accounting (MPC&A) Program between the Russian Federation and the U.S. Governments. Since MCC continues to operate a reactor for steam and electricity production for the site and city of Zheleznogorsk which results in production of the weapons grade plutonium, one of the goals of the MPC&A program is to support implementation of an expanded comprehensive nuclear material control and accounting (MC&A) program. To date MCC has completed upgrades identified in the initial gap analysis and documented in the site MC&A Plan and is implementing additional upgrades identified during an update to the gap analysis. The scope of these upgrades includes implementation of MCC organization structure relating to MC&A, establishing material balance area structure for special nuclear materials (SNM) storage and bulk processing areas, and material control functions including SNM portal monitors at target locations. Material accounting function upgrades include enhancements in the conduct of physical inventories, limit of error inventory difference procedure enhancements, implementation of basic computerized accounting system for four SNM storage areas, implementation of measurement equipment for improved accountability reporting, and both new and revised site-level MC&A procedures. This paper will discuss the implementation of MC&A upgrades at MCC based on the requirements established in the comprehensive MC&A plan developed by the Mining and Chemical Combine as part of the MPC&A Program.
ERIC Educational Resources Information Center
Lagrange, Jean-Baptiste; Psycharis, Giorgos
2014-01-01
The general goal of this paper is to explore the potential of computer environments for the teaching and learning of functions. To address this, different theoretical frameworks and corresponding research traditions are available. In this study, we aim to network different frameworks by following a "double analysis" method to analyse two…
Technology Transfer Automated Retrieval System (TEKTRAN)
High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D). HRCT imaging is based on the same principles as medi...
Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.; Mileson, Nicholas D.
2003-01-20
This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm 1 (SHA-1), the Message Digest 5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate into the software confirmation tool we developed. This paper gives a brief overview of SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES used to reduce a large amount of generic data (the plain text, such as is present in memory and other data-storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation or signature of the former and could be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
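In outline, the confirmation step reduces to hashing the resident software image and comparing the digest against a stored reference. The sketch below conveys that flow using SHA-256 from Python's standard library purely for illustration; the tool described above built its hash from the AES, and the function and variable names here are our own assumptions:

```python
import hashlib

def software_fingerprint(blobs):
    """Reduce an arbitrary sequence of memory/storage regions (bytes)
    to one short, display-friendly hex digest."""
    h = hashlib.sha256()
    for blob in blobs:
        h.update(blob)
    return h.hexdigest()

def confirm_identity(blobs, expected_digest):
    """Confirm software identity: recompute the digest over the
    resident image and compare it with the trusted reference value."""
    return software_fingerprint(blobs) == expected_digest
```

Because the digest is short (64 hex characters here), it can be read off a monitor and checked by an inspector even when system access is limited to keyboard or thumbwheel input.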
ERIC Educational Resources Information Center
Meunier, Lydie E.
1994-01-01
Computer adaptive language testing (CALT) offers a variety of advantages; however, since CALT cannot test the multidimensional nature of language, it does not assess communicative/functional language. This article proposes to replace multiple choice and cloze formats and to apply CALT to live-action simulations. (18 references) (LB)
1977-02-01
both academic and industrial environments. However, many problems still require efficient solutions. One of these problem areas that can have a...distribution of a data base system over several processors increases the complexity of the recovery problem. Just the interprocessor communications ...DBMS over a computer network enormously complicates the data base administration function. If a recovery scheme similar to that proposed in this
ERIC Educational Resources Information Center
Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George, III; Mulcahey, M. J.
2011-01-01
This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized…
Computational and Functional Analyses of a Small-Molecule Binding Site in ROMK
Swale, Daniel R.; Sheehan, Jonathan H.; Banerjee, Sreedatta; Husni, Afeef S.; Nguyen, Thuy T.; Meiler, Jens; Denton, Jerod S.
2015-01-01
The renal outer medullary potassium channel (ROMK, or Kir1.1, encoded by KCNJ1) critically regulates renal tubule electrolyte and water transport and hence blood volume and pressure. The discovery of loss-of-function mutations in KCNJ1 underlying renal salt and water wasting and lower blood pressure has sparked interest in developing new classes of antihypertensive diuretics targeting ROMK. The recent development of nanomolar-affinity small-molecule inhibitors of ROMK creates opportunities for exploring the chemical and physical basis of ligand-channel interactions required for selective ROMK inhibition. We previously reported that the bis-nitro-phenyl ROMK inhibitor VU591 exhibits voltage-dependent knock-off at hyperpolarizing potentials, suggesting that the binding site is located within the ion-conduction pore. In this study, comparative molecular modeling and in silico ligand docking were used to interrogate the full-length ROMK pore for energetically favorable VU591 binding sites. Cluster analysis of 2498 low-energy poses resulting from 9900 Monte Carlo docking trajectories on each of 10 conformationally distinct ROMK comparative homology models identified two putative binding sites in the transmembrane pore that were subsequently tested for a role in VU591-dependent inhibition using site-directed mutagenesis and patch-clamp electrophysiology. Introduction of mutations into the lower site had no effect on the sensitivity of the channel to VU591. In contrast, mutations of Val168 or Asn171 in the upper site, which are unique to ROMK within the Kir channel family, led to a dramatic reduction in VU591 sensitivity. This study highlights the utility of computational modeling for defining ligand-ROMK interactions and proposes a mechanism for inhibition of ROMK. PMID:25762321
Op’t Holt, Bryan T.; Vance, Michael A.; Mirica, Liviu M.; Stack, T. Daniel P.; Solomon, Edward I.
2009-01-01
The μ-η2:η2-peroxodicopper(II) complex synthesized by reacting the Cu(I) complex of the bis-diamine ligand N,N′-di-tert-butyl-ethylenediamine (DBED) with O2 is a functional and spectroscopic model of the coupled binuclear copper protein tyrosinase. This complex reacts with 2,4-di-tert-butylphenolate at low temperature to produce a mixture of the catechol and quinone products, which proceeds through three intermediates (A – C) that have been characterized. A, stabilized at 153K, is characterized as a phenolate-bonded bis-μ-oxo dicopper(III) species, which proceeds at 193K to B, presumably a catecholate-bridged coupled bis-copper(II) species via an electrophilic aromatic substitution mechanism wherein aromatic ring distortion is the rate-limiting step. Isotopic labeling shows that the oxygen inserted into the aromatic substrate during hydroxylation derives from dioxygen, and a late-stage ortho-H+ transfer to an exogenous base is associated with C-O bond formation. Addition of a proton to B produces C, determined from resonance Raman spectra to be a Cu(II)-semiquinone complex. The formation of C (the oxidation of catecholate and reduction to Cu(I)) is governed by the protonation state of the distal bridging oxygen ligand of B. Parallels and contrasts are drawn between the spectroscopically and computationally supported mechanism of the DBED system, presented here, and the experimentally-derived mechanism of the coupled binuclear copper protein tyrosinase. PMID:19368383
NASA Astrophysics Data System (ADS)
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh
2015-12-01
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
NASA Astrophysics Data System (ADS)
Koeppe, Robert Allen
Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations
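For concreteness, a two-compartment (blood and tissue) tracer kinetic model of the kind described has the standard one-tissue form; the notation below is the conventional one in PET kinetics and is our assumption, not quoted from this work:

```latex
\frac{dC_T(t)}{dt} = K_1\, C_a(t) - k_2\, C_T(t)
\qquad\Longrightarrow\qquad
C_T(t) = K_1 \int_0^t C_a(\tau)\, e^{-k_2\,(t-\tau)}\, d\tau ,
```

where C_a(t) is the arterial input curve (here inferred from end-tidal breath radioactivity and scaled by venous samples), K_1 reflects perfusion for a freely diffusible tracer such as (18)F-fluoromethane, and the tissue:blood partition coefficient is λ = K_1/k_2. Fitting K_1 and k_2 for each pixel sequence is what yields the functional images of the model parameters described above.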
Sims, James S; George, William L; Griffin, Terence J; Hagedorn, John G; Hung, Howard K; Kelso, John T; Olano, Marc; Peskin, Adele P; Satterfield, Steven G; Terrill, Judith Devaney; Bryant, Garnett W; Diaz, Jose G
2008-01-01
This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology.
NASA Astrophysics Data System (ADS)
Scheu, Norbert
1998-11-01
A non-perturbative computation of hadronic structure functions for deep inelastic lepton-hadron scattering has not been achieved yet. In this thesis we investigate the viability of the Hamiltonian approach in order to compute hadronic structure functions. In the literature, the so-called front form (FF) approach is favoured over the instant form (IF), the conventional Hamiltonian approach, due to claims (a) that structure functions are related to light-like correlation functions and (b) that the front form is much simpler for numerical computations. We dispel both claims using general arguments as well as practical computations (in the case of the scalar model and two-dimensional QED), demonstrating (a) that structure functions are related to space-like correlations and (b) that the IF is better suited for practical computations if appropriate approximations are introduced. Moreover, we show that the FF is unphysical in general, for the following reasons: (1) the FF constitutes an incomplete quantisation of field theories; (2) the FF 'predicts' an infinite speed of light in one space dimension, a complete breakdown of microcausality, and the ubiquity of time travel. Additionally, we demonstrate that the FF cannot be approached by so-called ɛ co-ordinates; these co-ordinates are but the instant form in disguise. The FF cannot be legitimated as an effective theory. Finally, we demonstrate that the so-called infinite momentum frame is neither physical nor equivalent to the FF.
Evaluating Accounting Software in Secondary Schools.
ERIC Educational Resources Information Center
Chalupa, Marilyn
1988-01-01
The secondary accounting curriculum must be modified to include computers and software. Educators must be aware of the computer skills needed on the job and of the accounting software that is available. Software selection must be tailored to fit the curriculum and the time available. (JOW)
NASA Astrophysics Data System (ADS)
Lambin, Ph.; Vigneron, J. P.
1984-03-01
The analytical tetrahedron method (ATM) for evaluating perfect-crystal Green's functions is reviewed. It is shown that the ATM allows for computing matrix elements of the resolvent operator in the entire complex-energy plane. These elements are written as a scalar product involving weighting functions of the complex energy, which are computed on a mesh of k points in the Brillouin zone. When the usual approximations are made within each tetrahedron, namely linear interpolations for the dispersion relations as well as for the numerator matrix elements, the weighting functions only depend on the perfect-crystal dispersion relations. In addition, the analytical expression obtained for a tetrahedral contribution to the weighting functions is simpler than what is usually expected. Analytical properties of our expressions are discussed and all the limiting forms are worked out. Special attention is paid to the numerical stability of the algorithm producing the Green's-function imaginary part on the real energy axis. Expressions which have been published earlier are subject to computational problems, which are solved in the new formulas reported here.
NASA Astrophysics Data System (ADS)
Turner, David M.; Niezgoda, Stephen R.; Kalidindi, Surya R.
2016-10-01
Chord length distributions (CLDs) and lineal path functions (LPFs) have been successfully utilized in prior literature as measures of the size and shape distributions of the important microscale constituents in the material system. Typically, these functions are parameterized only by line lengths, and thus calculated and derived independent of the angular orientation of the chord or line segment. We describe in this paper computationally efficient methods for estimating chord length distributions and lineal path functions for 2D (two-dimensional) and 3D microstructure images defined on any number of arbitrary chord orientations. These so-called fully angularly resolved distributions can be computed for over 1000 orientations on large microstructure images (500³ voxels) in minutes on modest hardware. We present these methods as new tools for characterizing microstructures in a statistically meaningful way.
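The core measurement behind an angularly resolved CLD can be sketched in a few lines: foreground runs are collected along rays cast across the image at a chosen orientation. This is a hypothetical toy implementation for illustration, not the authors' optimized method; the function name, the single-edge ray launch, and the restriction to angles in [0, π/2] are all simplifying assumptions.

```python
import numpy as np

def chord_lengths(image, angle, step=1.0):
    """Collect chord lengths (runs of foreground pixels) along one orientation.

    Toy sketch: rays are marched across a 2D boolean image at the given angle
    (assumed in [0, pi/2]), launched from the left or top edge only, so the
    angular coverage is crude compared with a production implementation.
    """
    h, w = image.shape
    dy, dx = np.sin(angle), np.cos(angle)
    lengths = []
    # Launch one ray per boundary pixel on the left edge (near-horizontal
    # directions) or the top edge (near-vertical directions).
    starts = [(y, 0.0) for y in range(h)] if abs(dx) >= abs(dy) else [(0.0, x) for x in range(w)]
    for y0, x0 in starts:
        run = 0
        y, x = float(y0), float(x0)
        while 0 <= y < h and 0 <= x < w:
            if image[int(y), int(x)]:
                run += 1          # still inside a foreground chord
            elif run:
                lengths.append(run * step)  # chord just ended
                run = 0
            y += dy * step
            x += dx * step
        if run:
            lengths.append(run * step)      # chord touching the far boundary
    return np.array(lengths)

# Angularly resolved CLD: one chord-length sample set per orientation.
rng = np.random.default_rng(0)
img = rng.random((64, 64)) > 0.5
for theta in (0.0, np.pi / 4, np.pi / 2):
    lengths = chord_lengths(img, theta)
```

Histogramming `lengths` per orientation yields the angularly resolved distribution the abstract describes.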
Innovations in an Accounting Information Systems Course.
ERIC Educational Resources Information Center
Shaoul, Jean
A new approach to teaching an introductory accounting information systems course is outlined and the potential of this approach for integrating computers into the accounting curriculum at Manchester University (England) is demonstrated. Specifically, the use of a small inventory recording system and database in an accounting information course is…
Chinellato, Eris; Del Pobil, Angel P
2009-06-01
The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
Accounting Fundamentals for Non-Accountants
The purpose of this module is to provide an introduction and overview of accounting fundamentals for non-accountants. The module also covers important topics such as communication, internal controls, documentation and recordkeeping.
Panda, Ananya; Bhalla, Ashu Seith; Sharma, Raju; Mohan, Anant; Sreenivas, Vishnu; Kalaimannan, Umasankar; Upadhyay, Ashish Dutt
2016-01-01
Aims: To study the correlation between dyspnea, radiological findings, and pulmonary function tests (PFTs) in patients with sequelae of pulmonary tuberculosis (TB). Materials and Methods: Clinical history, chest computed tomography (CT), and PFT of patients with post-TB sequelae were recorded. Dyspnea was graded according to the Modified Medical Research Council (mMRC) scale. CT scans were analyzed for fibrosis, cavitation, bronchiectasis, consolidation, nodules, and aspergilloma. Semi-quantitative analysis was done for these abnormalities. Scores were added to obtain a total morphological score (TMS). The lungs were also divided into three zones and scores added to obtain the total lung score (TLS). Spirometry was done for forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and FEV1/FVC. Results: Dyspnea was present in 58/101 patients. A total of 22/58 patients had mMRC Grade 1, and 17/58 patients had Grades 2 and 3 dyspnea each. There was a significant difference in median fibrosis, bronchiectasis, nodules (P < 0.01) scores, TMS, and TLS (P < 0.0001) between dyspnea and nondyspnea groups. Significant correlations were obtained between grades of dyspnea and fibrosis (r = 0.34, P = 0.006), bronchiectasis (r = 0.35, P = 0.004), nodule (r = 0.24, P = 0.016) scores, TMS (r = 0.398, P = 0.000), and TLS (r = 0.35, P = 0.0003). PFTs were impaired in 78/101 (77.2%) patients. Restrictive defect was most common in 39.6% followed by mixed in 34.7%. There was a negative but statistically insignificant trend between PFT and fibrosis, bronchiectasis, nodule scores, TMS, and TLS. However, there were significant differences in median fibrosis, cavitation, and bronchiectasis scores in patients with normal, mild to moderate, and severe respiratory defects. No difference was seen in TMS and TLS according to the severity of the respiratory defect. Conclusion: Both fibrosis and bronchiectasis correlated with dyspnea and with PFT. However, this correlation was not
NASA Astrophysics Data System (ADS)
Mugunthan, Pradeep; Shoemaker, Christine A.; Regis, Rommel G.
2005-11-01
The performance of function approximation (FA) methods is compared to heuristic and derivative-based nonlinear optimization methods for automatic calibration of biokinetic parameters of a groundwater bioremediation model of chlorinated ethenes on a hypothetical and a real field case. For the hypothetical case, on the basis of 10 trials on two different objective functions, the FA methods had the lowest mean and smaller deviation of the objective function among all algorithms for a combined Nash-Sutcliffe objective and among all but the derivative-based algorithm for a total squared error objective. The best algorithms in the hypothetical case were applied to calibrate eight parameters to data obtained from a site in California. In three trials the FA methods outperformed heuristic and derivative-based methods for both objective functions. This study indicates that function approximation methods could be a more efficient alternative to heuristic and derivative-based methods for automatic calibration of computationally expensive bioremediation models.
Yamamoto, Tokihiro; Kabus, Sven; Berg, Jens von; Lorenz, Cristian; Keall, Paul J.
2011-01-01
Purpose: To quantify the dosimetric impact of four-dimensional computed tomography (4D-CT) pulmonary ventilation imaging-based functional treatment planning that avoids high-functional lung regions. Methods and Materials: 4D-CT ventilation images were created from 15 non-small-cell lung cancer patients using deformable image registration and quantitative analysis of the resultant displacement vector field. For each patient, anatomic and functional plans were created for intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT). Consistent beam angles and dose-volume constraints were used for all cases. The plans with Radiation Therapy Oncology Group (RTOG) 0617-defined major deviations were modified until clinically acceptable. Functional planning spared the high-functional lung, and anatomic planning treated the lungs as uniformly functional. We quantified the impact of functional planning compared with anatomic planning using the two- or one-tailed t test. Results: Functional planning led to significant reductions in the high-functional lung dose, without significantly increasing other critical organ doses, but at the expense of significantly degrading planning target volume (PTV) conformity and homogeneity. The average reduction in the high-functional lung mean dose was 1.8 Gy for IMRT (p < .001) and 2.0 Gy for VMAT (p < .001). Significantly larger changes occurred in the metrics for patients with a larger amount of high-functional lung adjacent to the PTV. Conclusion: The results of the present study have demonstrated the impact of 4D-CT ventilation imaging-based functional planning for IMRT and VMAT for the first time. Our findings indicate the potential of functional planning in lung functional avoidance for both IMRT and VMAT, particularly for patients who have high-functional lung adjacent to the PTV.
Accounting: Accountants Need Verbal Skill Training
ERIC Educational Resources Information Center
Whitaker, Bruce L.
1978-01-01
Verbal skills training is one aspect of accounting education not usually included in secondary and postsecondary accounting courses. The author discusses the need for verbal competency and methods of incorporating it into accounting courses, particularly a variation of the Keller plan of individualized instruction. (MF)
Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck
2010-01-01
Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make it difficult to handle them for computation. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to calculate on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of systems of simulation. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138
Gillespie-Lynch, Kristen; Kapp, Steven K; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted
2014-12-01
An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to similar others, and the opportunity to express their true selves. They enjoyed using the Internet to meet others more, and to maintain connections with friends and family less, than did participants without ASD. People with ASD enjoyed aspects of computer-mediated communication that may be associated with special interests or advocacy, such as blogging, more than did participants without ASD. This study suggests that people with ASD may use the Internet in qualitatively different ways from those without ASD. Suggestions for interventions are discussed.
Computer Equipment Repair Curriculum Guide.
ERIC Educational Resources Information Center
Reneau, Fred; And Others
This guide is intended for use in a course to train students to repair computer equipment and perform related administrative and customer service tasks. Addressed in the individual units are the following topics (with selected subtopics in brackets): performing administrative functions (preparing service bills, maintaining accounts and labor…
ERIC Educational Resources Information Center
Price, Kathleen J.
2011-01-01
The use of information technology is a vital part of everyday life, but for a person with functional impairments, technology interaction may be difficult at best. Information technology is commonly designed to meet the needs of a theoretical "normal" user. However, there is no such thing as a "normal" user. A user's capabilities will vary over…
ERIC Educational Resources Information Center
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng
2016-01-01
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
NASA Technical Reports Server (NTRS)
Vinokur, M.
1983-01-01
The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055
NASA Technical Reports Server (NTRS)
Vinokur, M.
1979-01-01
The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
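The interior-clustering idea described in these two abstracts can be illustrated with the classic asinh-type (Roberts) stretching transformation, shown here as a stand-in for the paper's own asinh-based function; `interior_cluster`, the strength parameter `delta`, and the specific formula are illustrative assumptions, not Vinokur's exact derivation.

```python
import numpy as np

def interior_cluster(n, xc, delta):
    """Map n uniformly spaced points xi in [0, 1] onto [0, 1] with grid
    points clustered near the interior location xc.

    Classic Roberts/asinh-type interior stretching (an illustrative stand-in
    for the paper's function); delta > 0 controls the clustering strength.
    """
    xi = np.linspace(0.0, 1.0, n)
    # A places the sinh inflection so that x(0) = 0 and x(1) = 1 exactly.
    A = (1.0 / (2.0 * delta)) * np.log(
        (1.0 + (np.exp(delta) - 1.0) * xc) /
        (1.0 + (np.exp(-delta) - 1.0) * xc))
    return xc * (1.0 + np.sinh(delta * (xi - A)) / np.sinh(delta * A))

x = interior_cluster(11, 0.5, 4.0)
# grid spacing is smallest near x = 0.5 and grows toward both ends
```

Larger `delta` concentrates more points in the rapid-variation region, which is exactly the trade-off the truncation error criteria in the abstract are meant to guide.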
Zhou, Jiajian; Zhang, Suyang; Wang, Huating; Sun, Hao
2017-04-04
Long noncoding RNAs (lncRNAs) are key regulators of diverse cellular processes. Recent advances in high-throughput sequencing have allowed for an unprecedented discovery of novel lncRNAs. Identifying functional lncRNAs among thousands of candidates for further functional validation is still a challenging task. Here, we present a novel computational framework, lncFunNet (lncRNA Functional inference through integrated Network), that integrates ChIP-seq, CLIP-seq and RNA-seq data to predict, prioritize and annotate lncRNA functions. In mouse embryonic stem cells (mESCs), using lncFunNet we not only recovered most of the functional lncRNAs known to maintain mESC pluripotency but also predicted a plethora of novel functional lncRNAs. Similarly, in mouse myoblast C2C12 cells, applying lncFunNet led to prediction of reservoirs of functional lncRNAs in both proliferating myoblasts (MBs) and differentiating myotubes (MTs). Further analyses demonstrated that these lncRNAs are frequently bound by key transcription factors, interact with miRNAs and constitute key nodes in biological network motifs. Further experiments validated their dynamic expression profiles and functionality during myoblast differentiation. Collectively, our studies demonstrate the use of lncFunNet to annotate and identify functional lncRNAs in a given biological system.
NASA Technical Reports Server (NTRS)
Gu, Chong; Bates, Douglas M.; Chen, Zehua; Wahba, Grace
1989-01-01
An efficient algorithm for computing the generalized cross-validation function for the general cross-validated regularization/smoothing problem is provided. This algorithm is appropriate for problems where no natural structure is available, and the regularization/smoothing problem is solved (exactly) in a reproducing kernel Hilbert space. It is particularly appropriate for certain multivariate smoothing problems with irregularly spaced data, and certain remote sensing problems, such as those that occur in meteorology, where the sensors are arranged irregularly. The algorithm is applied to the fitting of interaction spline models with irregularly spaced data and two smoothing parameters; favorable timing results are presented. The algorithm may be extended to the computation of certain generalized maximum likelihood (GML) functions. Application of the GML algorithm to a problem in numerical weather forecasting, and to a broad class of hypothesis testing problems, is noted.
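The generalized cross-validation function being computed can be made concrete in its simplest setting, ridge-type smoothing, where a single SVD lets V(λ) be evaluated cheaply for many trial λ. This is a hedged sketch of the textbook GCV formula, not the paper's reproducing-kernel algorithm; the function name and test data are illustrative.

```python
import numpy as np

def gcv(X, y, lams):
    """Generalized cross-validation scores V(lam) for ridge smoothing,
    V(lam) = n * ||(I - A(lam)) y||^2 / tr(I - A(lam))^2,
    with influence matrix A(lam) = X (X^T X + n lam I)^{-1} X^T.

    Illustrative sketch (not the paper's RKHS algorithm): one SVD of X is
    reused for every trial lam.
    """
    n = len(y)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    out_of_span = y @ y - Uty @ Uty   # residual component outside col(X)
    scores = []
    for lam in lams:
        h = s**2 / (s**2 + n * lam)   # eigenvalues of A(lam)
        resid2 = np.sum(((1.0 - h) * Uty) ** 2) + out_of_span
        scores.append(n * resid2 / (n - np.sum(h)) ** 2)
    return np.array(scores)
```

Minimizing the returned scores over a grid of λ values selects the smoothing parameter, which is the role GCV plays in the abstract's regularization problem.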
NASA Astrophysics Data System (ADS)
Gil, Amparo; Segura, Javier; Temme, Nico M.
2003-04-01
The use of a uniform Airy-type asymptotic expansion for the computation of the modified Bessel functions of the third kind of imaginary orders (Kia(x)) near the transition point x=a, is discussed. In A. Gil et al., Evaluation of the modified Bessel functions of the third kind of imaginary orders, J. Comput. Phys. 17 (2002) 398-411, an algorithm for the evaluation of Kia(x) was presented, which made use of series, a continued fraction method and nonoscillating integral representations. The range of validity of the algorithm was limited by the singularity of the steepest descent paths near the transition point. We show how uniform Airy-type asymptotic expansions fill the gap left by the steepest descent method.
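As a baseline for the expansions discussed above, K_ia(x) can be evaluated directly from its real integral representation. This brute-force quadrature is an illustrative comparison point only, not the authors' algorithm; it is adequate for moderate a and x > 0 but, as the abstract notes, becomes delicate near the transition point x ≈ a.

```python
import numpy as np

def K_ia(a, x, tmax=12.0, n=20001):
    """Modified Bessel function of the third kind with purely imaginary
    order, via K_{ia}(x) = integral_0^inf exp(-x cosh t) cos(a t) dt, x > 0.

    Plain trapezoid quadrature on [0, tmax]; the tail beyond tmax is
    negligible because exp(-x cosh t) decays double-exponentially.
    """
    t, dt = np.linspace(0.0, tmax, n, retstep=True)
    f = np.exp(-x * np.cosh(t)) * np.cos(a * t)
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
```

For a = 0 this reduces to the ordinary K_0(x), which provides an easy sanity check against tabulated values.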
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Van Leer, Bram; Roe, Philip L.
1991-01-01
A limiting method has been devised for a grid-independent flux function for use with the two-dimensional Euler and Navier-Stokes equations. This limiting is derived from a monotonicity analysis of the model and allows for solutions with reduced oscillatory behavior while still maintaining sharper resolution than a grid-aligned method. In addition to capturing oblique waves sharply, the grid-independent flux function also reduces the entropy generated over an airfoil in an Euler computation and reduces pressure distortions in the separated boundary layer of a viscous-flow airfoil computation. The model has also been extended to three dimensions, although no angle-limiting procedure for improving monotonicity characteristics has been incorporated.
The use of computer graphic techniques for the determination of ventricular function.
NASA Technical Reports Server (NTRS)
Sandler, H.; Rasmussen, D.
1972-01-01
Description of computer techniques employed to increase the speed, accuracy, reliability, and scope of angiocardiographic analyses determining human heart dimensions. Chamber margins are traced with a Calma 303 digitizer from projections of the angiographic films. The digitized margins of the ventricular images are filed in a computer for subsequent analysis. The margins can be displayed on the television screen of a graphics unit for individual study or they can be viewed in real time (or at any selected speed) to study dynamic changes in the chamber outline. The construction of three dimensional images of the ventricle is described.
Pierri, Ciro Leonardo; Parisi, Giovanni; Porcelli, Vito
2010-09-01
The functional characterization of proteins represents a daily challenge for the biochemical, medical and computational sciences. Although it must ultimately be proved at the bench, the function of a protein can be successfully predicted by computational approaches that guide further experimental assays. Current methods for comparative modeling allow the construction of accurate 3D models for proteins of unknown structure, provided that a crystal structure of a homologous protein is available. Binding regions can be proposed by using binding site predictors, data inferred from homologous crystal structures, and data provided by a careful interpretation of the multiple sequence alignment of the investigated protein and its homologs. Once the location of a binding site has been proposed, chemical ligands that have a high likelihood of binding can be identified by using ligand docking and structure-based virtual screening of chemical libraries. Most docking algorithms allow building a list of ligands sorted by the energy of the lowest-energy docking configuration found for each ligand of the library. In this review the state of the art of computational approaches to 3D protein comparative modeling and to the study of protein-ligand interactions is provided. Furthermore, a possible combined/concerted multistep strategy for protein function prediction, based on multiple sequence alignment, comparative modeling, binding region prediction, and structure-based virtual screening of chemical libraries, is described using suitable examples. As practical examples, Abl-kinase molecular modeling studies, HPV-E6 protein multiple sequence alignment analysis, and some other model docking-based characterization reports are briefly described to highlight the importance of computational approaches in protein function prediction.
Shao, Nan; Sun, Xiao-Guang; Dai, Sheng; Jiang, Deen
2012-01-01
New electrolytes with large electrochemical windows are needed to meet the challenge of high-voltage Li-ion batteries. Sulfone as an electrolyte solvent boasts high oxidation potentials. Here we examine the effect of multiple functionalization on sulfone's oxidation potential. We compute oxidation potentials for a series of sulfone-based molecules functionalized with fluorine, cyano, ester, and carbonate groups by using a quantum chemistry method within a continuum solvation model. We find that multifunctionalization is key to achieving high oxidation potentials. This can be realized through either a fluorether group on a sulfone molecule or sulfonyl fluoride with a cyano or ester group.
Gonis, Antonios; Daene, Markus W; Nicholson, Don M; Stocks, George Malcolm
2012-01-01
We have developed and tested in terms of atomic calculations an exact, analytic and computationally simple procedure for determining the functional derivative of the exchange energy with respect to the density in the implementation of the Kohn Sham formulation of density functional theory (KS-DFT), providing an analytic, closed-form solution of the self-interaction problem in KS-DFT. We demonstrate the efficacy of our method through ground-state calculations of the exchange potential and energy for atomic He and Be atoms, and comparisons with experiment and the results obtained within the optimized effective potential (OEP) method.
Mackie, W.A.; Hinrichs, C.H.; Cohen, I.M.; Alin, J.S.; Schnitzler, D.T.; Carleson, P.; Ginn, R.; Krueger, P.; Vetter, C.G.; Davis, P.R.
1990-05-01
We report on a unique experimental method to determine thermionic work functions of major crystal planes of single crystal zirconium carbide. Applications for transition metal carbides could include cathodes for advanced thermionic energy conversion, radiation immune microcircuitry, β-SiC substrates or high current density field emission cathodes. The primary emphasis of this paper is the analytical method used, that of computer processing a digitized image. ZrC single crystal specimens were prepared by floating zone arc refinement from sintered stock, yielding an average bulk stoichiometry of C/Zr = 0.92. A 0.075 cm hemispherical cathode was prepared and mounted in a thermionic projection microscope (TPM) tube. The imaged patterns of thermally emitted electrons taken at various extraction voltages were digitized and computer analyzed to yield currents and corresponding emitting areas for major crystallographic planes. These data were taken at pyrometrically measured temperatures in the range 1700 < T < 2200 K. Schottky plots were then used to determine effective thermionic work functions as a function of crystallographic direction and temperature. Work function ordering for various crystal planes is reported through the TPM image processing method. Comparisons are made with effective thermionic and absolute (FERP) work function methods. To support the TPM image processing method, clean tungsten surfaces were examined and results are listed with accepted values.
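The final step of such a Schottky-plot analysis, converting a zero-field-extrapolated current density and a pyrometric temperature into an effective work function via the Richardson-Dushman law, can be sketched as follows. The function name and the use of the free-electron Richardson constant are illustrative assumptions, not details from the paper.

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant, eV/K
A_R = 120.173          # free-electron Richardson constant, A cm^-2 K^-2

def effective_work_function(J, T):
    """Effective thermionic work function phi (eV) from the
    Richardson-Dushman law J = A T^2 exp(-phi / (k_B T)), solved for phi.

    J: current density extrapolated to zero applied field (A/cm^2);
    T: emitter temperature (K).
    """
    return K_B * T * math.log(A_R * T * T / J)
```

Applying this per crystallographic plane, with J taken from the digitized emitting-area analysis, gives the plane-by-plane work function ordering the abstract reports.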
Arensman, F W; Radley-Smith, R; Grieve, L; Gibson, D G; Yacoub, M H
1986-01-01
Left ventricular function before and after anatomical correction of transposition of the great arteries was assessed by computer assisted analysis of 78 echocardiographs from 27 patients obtained one year before to five years after operation. Sixteen patients had simple transposition, and 11 had complex transposition with additional large ventricular septal defect. Immediately after correction mean shortening fraction fell from 46(9)% to 33(8)%. There was a corresponding drop in normalised peak shortening rate from 5.4(3.7) to 3.3(1.1) s⁻¹ and normal septal motion was usually absent. Systolic shortening fraction increased with time after correction and left ventricular end diastolic diameter increased appropriately for age. The preoperative rate of free wall thickening was significantly higher in simple (5.6(2.8) s⁻¹) and complex transposition (4.5(1.8) s⁻¹) than in controls (2.9(0.8) s⁻¹). After operation these values remained high in both the short and long term. Thus, computer assisted analysis of left ventricular dimensions and their rates of change before and after anatomical correction showed only slight postoperative changes which tended to become normal with time. Septal motion was commonly absent after operation. This was associated with an increase in the rate of posterior wall thickening that suggested normal ventricular function associated with an altered contraction pattern. Computer assisted echocardiographic analysis may be helpful in the long term assessment of ventricular function after operation for various heart abnormalities. PMID:3942650
sTeam--Providing Primary Media Functions for Web-Based Computer-Supported Cooperative Learning.
ERIC Educational Resources Information Center
Hampel, Thorsten
The World Wide Web has developed as the de facto standard for computer based learning. However, as a server-centered approach, it confines readers and learners to passive nonsequential reading. Authoring and Web-publishing systems aim at supporting the authors' design process. Consequently, learners' activities are confined to selecting and…
ERIC Educational Resources Information Center
Rabab'ah, Ghaleb
2013-01-01
This study explores the discourse generated by English as a foreign language (EFL) learners using synchronous computer-mediated communication (CMC) as an approach to help English language learners to create social interaction in the classroom. It investigates the impact of synchronous CMC mode on the quantity of total words, lexical range and…
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Perry, Justin; Shamir, Haya
2010-01-01
This study examines the effects on early reading skills of three different methods of presenting material with computer-assisted instruction (CAI): (1) learner-controlled picture menu, which allows the student to choose activities, (2) linear sequencer, which progresses the students through lessons at a pre-specified pace, and (3) mastery-based…
González-Díaz, Humberto; Agüero-Chapin, Guillermín; Varona, Javier; Molina, Reinaldo; Delogu, Giovanna; Santana, Lourdes; Uriarte, Eugenio; Podda, Gianni
2007-04-30
Methods for predicting protein, DNA, or RNA function and mapping it onto sequence often rely on the bioinformatics alignment approach instead of chemical structure. Consequently, it is interesting to develop computational chemistry approaches based on molecular descriptors. In this sense, many researchers used sequence-coupling numbers and our group extended them to 2D protein representations. However, no coupling numbers have been reported for 2D-RNA topology graphs, which are highly branched and contain useful information. Here, we use a computational chemistry scheme: (a) transforming sequences into RNA secondary structures, (b) defining and calculating new 2D-RNA-coupling numbers, (c) seeking a structure-function model, and (d) mapping biological function onto the folded RNA. As an example, we studied 1-aminocyclopropane-1-carboxylic acid (ACC) oxidases, known as ACO, which control fruit ripening and are important for the biotechnology industry. First, we calculated tau(k)(2D-RNA) values for a set of 90 folded RNAs, including 28 transcripts of ACO and control sequences. Afterwards, we compared the classification performance of 10 different classifiers implemented in the software WEKA. In particular, the logistic equation ACO = 23.8 . tau(1)(2D-RNA) + 41.4 predicts ACOs with 98.9%, 98.0%, and 97.8% accuracy in training, leave-one-out and 10-fold cross-validation, respectively. Afterwards, with this equation we predicted ACO function for a sequence isolated in this work from Coffea arabica (GenBank accession DQ218452). The tau(1)(2D-RNA) descriptor also compares favorably with other descriptors. This equation allows us to map the codification of ACO activity on different mRNA topology features. The present computational-chemistry approach is general and could be extended to connect RNA secondary structure topology to other functions.
MAC, material accounting database user guide
Russell, V.K.
1994-09-22
The K Basins Material Accounting (MAC) database system user guide describes the user features and functions; the document is structured like the database menus. It presents the MAC database system user instructions, which explain how to record the movements and configuration of canisters and materials within the K Basins on the computer, the mechanics of handling encapsulation tracking, and the administrative functions associated with the system. The user instructions also serve as the software requirements specification for the system implemented on the microcomputer, including suggested user keystrokes, examples of screens displayed by the system, and reports generated by the system. The guide shows how the system is organized, via menus and screens; it does not explain the system design or provide programmer instructions.
Chen, Minxin; Li, Xiantao; Liu, Chun
2014-08-14
We present a numerical method to approximate the memory functions in the generalized Langevin models for the collective dynamics of macromolecules. We first derive the exact expressions of the memory functions, obtained from projection to subspaces that correspond to the selection of coarse-grain variables. In particular, the memory functions are expressed in the forms of matrix functions, which will then be approximated by Krylov-subspace methods. It will also be demonstrated that the random noise can be approximated under the same framework, and the second fluctuation-dissipation theorem is automatically satisfied. The accuracy of the method is examined through several numerical examples.
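The core numerical idea above, evaluating a matrix function acting on a vector via a Krylov subspace, can be sketched with a plain Lanczos recursion. This is an illustrative stand-in for the authors' method, with no reorthogonalization or breakdown handling; the function name and test matrix are assumptions.

```python
import numpy as np

def lanczos_fAv(A, v, f, m=20):
    """Approximate f(A) @ v for symmetric A using an m-step Lanczos (Krylov
    subspace) recursion: f(A) v ~ ||v|| Q f(T) e1, where T is the m x m
    tridiagonal Lanczos matrix and Q holds the Lanczos basis.

    Sketch only: no reorthogonalization, breakdown not handled; f must map
    an array of eigenvalues elementwise (e.g. np.exp).
    """
    n = len(v)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    nv = np.linalg.norm(v)
    Q[:, 0] = v / nv
    for j in range(m):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]   # three-term recurrence
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, S = np.linalg.eigh(T)
    # f(T) e1 = S f(Lambda) S^T e1, and S^T e1 is the first row of S.
    return nv * Q @ (S @ (f(evals) * S[0]))
```

Because only matrix-vector products with A are needed, the same recursion applies when A is large and sparse, which is what makes the Krylov approach attractive for the memory kernels discussed above.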
Accounting and Accountability for Distributed and Grid Systems
NASA Technical Reports Server (NTRS)
Thigpen, William; McGinnis, Laura F.; Hacker, Thomas J.
2001-01-01
While the advent of distributed and grid computing systems will open new opportunities for scientific exploration, the reality of such implementations could prove to be a system administrator's nightmare. Considerable effort is being spent on identifying and resolving the obvious problems of security, scheduling, authentication, and authorization. Lurking in the background, though, are the largely unaddressed issues of accountability and usage accounting: (1) mapping resource usage to resource users; (2) defining usage economies or methods for resource exchange; and (3) describing implementation standards that minimize and compartmentalize the tasks required for a site to participate in a grid.
Henyey-Greenstein and Mie phase functions in Monte Carlo radiative transfer computations.
Toublanc, D
1996-06-20
Monte Carlo radiative transfer simulation of light scattering in planetary atmospheres is not a simple problem, especially the study of angular distribution of light intensity. Approximate phase functions such as Henyey-Greenstein, modified Henyey-Greenstein, or Legendre polynomial decomposition are often used to simulate the Mie phase function. An alternative solution using an exact calculation alleviates these approximations.
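The Henyey-Greenstein phase function is popular in Monte Carlo codes precisely because its cumulative distribution inverts in closed form. A minimal sketch of the standard inverse-CDF sampling (the generic textbook formula, not the paper's code):

```python
import numpy as np

def sample_hg_costheta(g: float, xi: np.ndarray) -> np.ndarray:
    """Draw cos(theta) from the Henyey-Greenstein phase function with
    asymmetry parameter g, given uniform deviates xi in [0, 1)."""
    if abs(g) < 1e-8:
        return 1.0 - 2.0 * xi  # isotropic scattering limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

# For HG the mean of cos(theta) equals g, which gives a quick sanity check:
rng = np.random.default_rng(1)
mu = sample_hg_costheta(0.6, rng.random(200_000))
```

The formula maps xi = 0 to cos(theta) = -1 and xi = 1 to cos(theta) = +1; the Mie phase function admits no such closed-form inverse, which is what motivates either the approximations discussed above or the exact tabulated-CDF approach of the paper.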
A Computation of the Frequency Dependent Dielectric Function for Energetic Materials
NASA Astrophysics Data System (ADS)
Zwitter, D. E.; Kuklja, M. M.; Kunz, A. B.
1999-06-01
The imaginary part of the dielectric function as a function of frequency is calculated for the solids RDX, TATB, ADN, and PETN. Calculations have been performed including the effects of isotropic and uniaxial pressure. Simple lattice defects are included in some of the calculations.
Substrate tunnels in enzymes: structure-function relationships and computational methodology.
Kingsley, Laura J; Lill, Markus A
2015-04-01
In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein, and, in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years because of implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with buried active sites and to provide a brief summary of the computational tools used to identify and evaluate these tunnels.
Computational Methods for Structural and Functional Studies of Alzheimer's Amyloid Ion Channels.
Jang, Hyunbum; Arce, Fernando Teran; Lee, Joon; Gillman, Alan L; Ramachandran, Srinivasan; Kagan, Bruce L; Lal, Ratnesh; Nussinov, Ruth
2016-01-01
Aggregation can be studied by a range of methods, experimental and computational. Aggregates form in solution, across solid surfaces, and on and in the membrane, where they may assemble into unregulated leaking ion channels. Experimental probes of ion channel conformations and dynamics are challenging. Atomistic molecular dynamics (MD) simulations are capable of providing insight into structural details of amyloid ion channels in the membrane at a resolution not achievable experimentally. Since data suggest that late-stage Alzheimer's disease involves the formation of toxic ion channels, MD simulations have been used to gain insight into channel shapes, morphologies, pore dimensions, conformational heterogeneity, and activity. These can be exploited for drug discovery. Here we describe computational methods to model amyloid ion channels containing the β-sheet motif at atomic scale and to calculate toxic pore activity in the membrane.
Cosmic reionization on computers: The faint end of the galaxy luminosity function
Gnedin, Nickolay Y.
2016-07-01
Using numerical cosmological simulations completed under the “Cosmic Reionization On Computers” project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at $z \gtrsim 6$. A commonly used Schechter function approximation with the magnitude cut at $M_{\rm cut} \sim -13$ provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut $M_{\rm cut}$ is found to vary between -12 and -14 with a mild redshift dependence. An analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.
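The Schechter form in magnitudes, with a hard faint-end cut at M_cut, can be written down and integrated numerically as follows. The parameter values below are placeholders for illustration, not the fitted high-redshift values from the simulations:

```python
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function per unit absolute magnitude:
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))  # luminosity in units of L*
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def number_density(M_cut, phi_star, M_star, alpha, M_bright=-24.0, n=4000):
    """Trapezoid-rule integral of phi(M) from the bright end down to the
    faint-end cut M_cut (e.g. M_cut ~ -13)."""
    M = np.linspace(M_bright, M_cut, n)
    phi = schechter_mag(M, phi_star, M_star, alpha)
    return float(np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(M)))

# Placeholder parameters, illustrative only:
n_tot = number_density(M_cut=-13.0, phi_star=1e-3, M_star=-21.0, alpha=-1.9)
```

Because the faint-end slope is steep (alpha close to -2), the integrated number density keeps growing as the cut is pushed fainter, which is why the location of $M_{\rm cut}$ matters for reionization budgets.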
Computer Support to Navy Public Works Departments for Their Utilities Function.
1980-12-01
"menus". It can be self-instructing, in the English language and without abbreviations. Even some of the programming can now be done in user technology … At the same time, however, computer technology has advanced tremendously and has become available at dramatically reduced costs. …
MASS: An automated accountability system
Erkkila, B.H.; Kelso, F.
1994-08-01
All Department of Energy contractors who manage accountable quantities of nuclear materials are required to implement an accountability system that tracks and records the activities associated with those materials. At Los Alamos, the automated accountability system allows data entry on computer terminals, with the database updated as soon as an entry is made; it can also generate all required reports in a timely fashion. Over the last several years, the hardware and software have been upgraded to give users the capability to manage a large variety of operations with a wide variety of nuclear materials. Enhancements are implemented as users' needs are identified, and the system has grown with the expanded needs of the user community, surviving several years of changing operations and activity. The user community served by this system includes processing, materials control and accountability, and nuclear material management personnel. In addition to serving local users, the accountability system supports the national database (NMMSS). This paper discusses several details of the system's design and operation. After several years of successful operation, the system provides a working example of how computer systems can be used to manage a very dynamic data management problem.
Vinogradskiy, Yevgeniy; Castillo, Richard; Castillo, Edward; Tucker, Susan L.; Liao, Zhongxing; Guerrero, Thomas; Martel, Mary K.
2013-06-01
Purpose: Four-dimensional computed tomography (4DCT)-based ventilation is an emerging imaging modality that can be used in the thoracic treatment planning process. The clinical benefit of using ventilation images in radiation treatment plans remains to be tested. The purpose of the current work was to test the potential benefit of using ventilation in treatment planning by evaluating whether dose to highly ventilated regions of the lung resulted in an increased incidence of clinical toxicity. Methods and Materials: Pretreatment 4DCT data were used to compute pretreatment ventilation images for 96 lung cancer patients. Ventilation images were calculated using 4DCT data, deformable image registration, and a density-change-based algorithm. Dose–volume and ventilation-based dose–function metrics were computed for each patient. The ability of the dose–volume and dose–function metrics to predict severe (grade 3+) radiation pneumonitis was assessed using logistic regression analysis, area-under-the-curve (AUC) metrics, and bootstrap methods. Results: A specific patient example is presented that demonstrates how incorporating ventilation-based functional information can help separate patients with and without toxicity. The logistic regression significance values were all lower for the dose–function metrics (range, P=.093-.250) than for their dose–volume equivalents (range, P=.331-.580). The AUC values were all greater for the dose–function metrics (range, 0.569-0.620) than for their dose–volume equivalents (range, 0.500-0.544). Bootstrap results revealed an improvement in model fit using dose–function metrics compared to dose–volume metrics that approached significance (range, P=.118-.155). Conclusions: To our knowledge, this is the first study that attempts to correlate lung dose and 4DCT ventilation-based function to thoracic toxicity after radiation therapy. Although the results were not significant at the .05 level, our data suggests
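The comparison machinery described above — an AUC per metric and a bootstrap over patients for the difference between metrics — can be sketched generically. This is a pure-NumPy illustration on synthetic data, not the paper's analysis of the clinical cohort:

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney AUC: probability that a positive case outscores a negative
    one, counting ties as one half."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    comp = pos[:, None] - neg[None, :]  # all positive-vs-negative pairs
    return (comp > 0).mean() + 0.5 * (comp == 0).mean()

def bootstrap_auc_diff(metric_a, metric_b, labels, n_boot=2000, seed=0):
    """Bootstrap distribution of AUC(metric_a) - AUC(metric_b),
    resampling patients with replacement."""
    rng = np.random.default_rng(seed)
    n = labels.size
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # one bootstrap resample of patients
        diffs[i] = auc(metric_a[idx], labels[idx]) - auc(metric_b[idx], labels[idx])
    return diffs
```

Resampling whole patients (rather than metrics independently) preserves the pairing between the two metrics and the toxicity label, which is what makes the bootstrap comparison of the two AUCs valid.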
Goings, Joshua J; Li, Xiaosong
2016-06-21
One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have rotatory strengths of different sign, determined by their absolute configuration. If states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra, where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly within the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital-based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for the computation of chiroptical properties well into the X-ray regime.
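The broadband character of a real-time approach comes from a single time propagation followed by a Fourier transform of the induced response. A generic post-processing sketch with a synthetic one-mode signal (not the authors' TDDFT code; in the ECD case the signal would be the induced magnetic-dipole response to an electric-field kick, and the damping is an assumed artificial broadening):

```python
import numpy as np

def spectrum_from_signal(t, signal, gamma=0.01):
    """Damped Fourier transform of a real-time response signal.
    Every excitation present in the signal appears at once as a peak in omega."""
    dt = t[1] - t[0]
    damped = signal * np.exp(-gamma * t)            # artificial lifetime broadening
    omega = 2.0 * np.pi * np.fft.rfftfreq(t.size, dt)  # angular frequency grid
    spec = np.fft.rfft(damped) * dt                 # discrete approximation of the FT
    return omega, spec

# Synthetic signal with a single mode at omega0 = 2.0 (illustrative units):
t = np.arange(0.0, 400.0, 0.05)
omega, spec = spectrum_from_signal(t, np.sin(2.0 * t))
```

The frequency resolution is set by the total propagation time (here 2*pi/400 ≈ 0.016), which is why resolving a dense manifold of closely spaced, sign-alternating states requires long propagations rather than more matrix diagonalizations.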