Reduction of accounts receivable through total quality management.
LaFleur, N
1994-01-01
On October 1, 1990, The Miriam Hospital in Providence, R.I., converted to a new computer system for patient accounting applications and on-line registration functions. The new system automated the hospital's patient accounting, registration, and medical records functions and interfaced registration with patient accounts for billing purposes.
1988-11-01
…Manufacturing System; Similar Parts Based on Shape or Manufacturing Process; Projected Annual Unit Robot Sales and Installed Base Through 1992; U.S… effort needed to perform personnel, product design, marketing, advertising, and finance tasks of the firm. Level III controls the resource… planning and accounting functions of the firm. Systems at this level support purchasing, accounts payable, accounts receivable, master scheduling, and sales…
Innovations in Mass Instruction
ERIC Educational Resources Information Center
Perritt, Roscoe D.
1974-01-01
This article deals with teaching accountancy but has wide applications. It describes the graduating teaching seminar, functions of the graduate assistant, computer accounting, honors sections, remedial sessions, report writing, and evaluation procedures. (Editor)
Management Needs for Computer Support.
ERIC Educational Resources Information Center
Irby, Alice J.
University management has many and varied needs for effective computer services in support of their processing and information functions. The challenge for the computer center managers is to better understand these needs and assist in the development of effective and timely solutions. Management needs can range from accounting and payroll to…
Lizunov, A Y; Gonchar, A L; Zaitseva, N I; Zosimov, V V
2015-10-26
We analyzed the frequency with which intraligand contacts occurred in a set of 1300 protein-ligand complexes [Plewczynski et al., J. Comput. Chem. 2011, 32, 742-755]. Our analysis showed that flexible ligands often form intraligand hydrophobic contacts, while intraligand hydrogen bonds are rare. The test set was also thoroughly investigated and classified. We suggest a universal method for enhancement of a scoring function based on a potential of mean force (PMF-based score) by adding a term accounting for intraligand interactions. The method was implemented via an in-house-developed program utilizing the Algo_score scoring function [Ramensky et al., Proteins: Struct., Funct., Genet. 2007, 69, 349-357] based on the Tarasov-Muryshev PMF [Muryshev et al., J. Comput.-Aided Mol. Des. 2003, 17, 597-605]. The enhancement of the scoring function was shown to significantly improve the docking and scoring quality for flexible ligands in the test set of 1300 protein-ligand complexes [Plewczynski et al., J. Comput. Chem. 2011, 32, 742-755]. We then investigated the correlation of the docking results with two parameters of the intraligand interaction estimation. These parameters are the weight of the intraligand interactions and the minimum number of bonds between the ligand atoms required to take their interaction into account.
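As an illustration of the general idea only (a PMF-type score augmented by a weighted intraligand term with a minimum bond-separation filter), here is a hedged Python sketch; the contact criterion, distance cutoff, default weight, and the toy ligand are our assumptions and are unrelated to the actual Algo_score or Tarasov-Muryshev PMF terms.

```python
# Illustrative sketch: a generic PMF-style score augmented with an intraligand
# term, echoing the two tuning parameters mentioned in the abstract
# (interaction weight and minimum bond separation). All numbers are placeholders.
from collections import deque
import math

def bond_separation(bonds, i, j):
    """Breadth-first search over the ligand bond graph: number of bonds on the
    shortest path between atoms i and j (None if disconnected)."""
    if i == j:
        return 0
    adj = {}
    for a, b in bonds:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {i}, deque([(i, 0)])
    while queue:
        atom, d = queue.popleft()
        for nb in adj.get(atom, []):
            if nb == j:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))
    return None

def intraligand_term(coords, is_hydrophobic, bonds, cutoff=4.5, min_bonds=4):
    """Count hydrophobic atom pairs closer than `cutoff` (angstrom) that are at
    least `min_bonds` bonds apart along the ligand graph."""
    n, term = len(coords), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if not (is_hydrophobic[i] and is_hydrophobic[j]):
                continue
            sep = bond_separation(bonds, i, j)
            if sep is None or sep < min_bonds:
                continue
            if math.dist(coords[i], coords[j]) < cutoff:
                term += 1.0
    return term

def enhanced_score(pmf_score, coords, is_hydrophobic, bonds, weight=0.5):
    # Lower (more negative) is better in many PMF conventions; the sign and
    # magnitude of `weight` would have to be calibrated on a training set.
    return pmf_score - weight * intraligand_term(coords, is_hydrophobic, bonds)

# toy 5-atom "ligand" with two hydrophobic atoms far apart in the bond graph
coords = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0),
          (3.0, 1.5, 0.0), (1.5, 2.5, 0.0)]
bonds = [(0, 1), (1, 2), (2, 3), (3, 4)]
is_hydrophobic = [True, False, False, False, True]
print(enhanced_score(-42.0, coords, is_hydrophobic, bonds))
```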
Computational Modeling Basis in the Photostress Recovery Model (PREMO)
2014-09-01
…classes of filters, for radial frequency selectivity and for orientation selectivity. Our current implementation accounts for the radial frequency… glare function and its attribution to the components of ocular scatter (Chairman's Report, CIE TC 1-18, Commission de l'Eclairage; Watson, A.)… radiometric to photometric units to account for the differential spectral sensitivity of the eye. The spectral luminosity function for photopic vision is…
Effects of Attitudes and Behaviours on Learning Mathematics with Computer Tools
ERIC Educational Resources Information Center
Reed, Helen C.; Drijvers, Paul; Kirschner, Paul A.
2010-01-01
This mixed-methods study investigates the effects of student attitudes and behaviours on the outcomes of learning mathematics with computer tools. A computer tool was used to help students develop the mathematical concept of function. In the whole sample (N = 521), student attitudes could account for a 3.4 point difference in test scores between…
Automated Computer Access Request System
NASA Technical Reports Server (NTRS)
Snook, Bryan E.
2010-01-01
The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).
Green's function calculations for semi-infinite carbon nanotubes
NASA Astrophysics Data System (ADS)
John, D. L.; Pulfrey, D. L.
2006-02-01
In the modeling of nanoscale electronic devices, the non-equilibrium Green's function technique is gaining increasing popularity. One complication in this method is the need for computation of the self-energy functions that account for the interactions between the active portion of a device and its leads. In the one-dimensional case, these functions may be computed analytically. In higher dimensions, a numerical approach is required. In this work, we generalize earlier methods that were developed for tight-binding Hamiltonians, and present results for the case of a carbon nanotube.
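The abstract notes that lead self-energies are analytic in one dimension but must be computed numerically otherwise. As background only (not the authors' generalized method), here is a minimal numerical sketch for a tight-binding lead using a plain fixed-point iteration on the surface Green's function; the matrices, broadening, iteration scheme, and the convention for which side the lead sits on are illustrative assumptions. Decimation schemes converge much faster but are longer.

```python
# Minimal sketch: self-energy of a semi-infinite tight-binding lead from its
# surface Green's function. H00 is the principal-layer Hamiltonian, H01 the
# inter-layer coupling, eta a small positive broadening (retarded convention).
import numpy as np

def lead_self_energy(E, H00, H01, eta=1e-2, tol=1e-7, max_iter=20000):
    n = H00.shape[0]
    z = (E + 1j * eta) * np.eye(n)
    g = np.linalg.inv(z - H00)          # initial guess: isolated surface layer
    for _ in range(max_iter):
        g_new = np.linalg.inv(z - H00 - H01 @ g @ H01.conj().T)
        if np.max(np.abs(g_new - g)) < tol:
            g = g_new
            break
        g = 0.5 * g + 0.5 * g_new       # simple mixing for robustness
    # Self-energy felt by the device edge; the exact H01 vs H01^dagger ordering
    # depends on which side the lead is attached (identical for this 1D check).
    return H01.conj().T @ g @ H01

# 1D check against the analytic single-orbital chain result
t = 1.0
H00 = np.array([[0.0]])
H01 = np.array([[t]])
sigma = lead_self_energy(0.5, H00, H01)
print(sigma)   # imaginary part should be negative inside the band
```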
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
A computer system for processing data from routine pulmonary function tests.
Pack, A I; McCusker, R; Moran, F
1977-01-01
In larger pulmonary function laboratories there is a need for computerised techniques of data processing. A flexible computer system, which is used routinely, is described. The system processes data from a relatively large range of tests. Two types of output are produced--one for laboratory purposes, and one for return to the referring physician. The system adds an automatic interpretative report for each set of results. In developing the interpretative system it has been necessary to utilise a number of arbitrary definitions. The present terminology for reporting pulmonary function tests has limitations. The computer interpretation system affords the opportunity to take account of known interaction between measurements of function and different pathological states. PMID:329462
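As a loose illustration of what an automatic interpretative report can look like, here is a toy rule-based sketch; the thresholds are common textbook cut-offs chosen purely for illustration and are not the arbitrary definitions adopted by the authors' system.

```python
# Toy rule-based spirometry interpretation. The 0.70 ratio and 80%-predicted
# cut-offs are illustrative textbook values, not the paper's definitions.
def interpret_spirometry(fev1_pct_pred, fvc_pct_pred, fev1_fvc_ratio):
    if fev1_fvc_ratio < 0.70 and fvc_pct_pred >= 80:
        return "Obstructive ventilatory defect"
    if fev1_fvc_ratio >= 0.70 and fvc_pct_pred < 80:
        return "Restrictive pattern (confirm with lung volumes)"
    if fev1_fvc_ratio < 0.70 and fvc_pct_pred < 80:
        return "Mixed obstructive/restrictive pattern"
    return "Within normal limits"

print(interpret_spirometry(fev1_pct_pred=55, fvc_pct_pred=90,
                           fev1_fvc_ratio=0.58))
```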
Methods for evaluating and ranking transportation energy conservation programs
NASA Astrophysics Data System (ADS)
Santone, L. C.
1981-04-01
The energy conservation programs are assessed in terms of petroleum savings, incremental costs to consumers, probability of technical and market success, and external impacts due to environmental, economic, and social factors. Three ranking functions and a policy matrix are used to evaluate the programs. The net present value measure, which computes the present worth of petroleum savings less the present worth of costs, is modified by dividing by the present value of DOE funding to obtain a net present value per program dollar. The comprehensive ranking function takes external impacts into account. Procedures are described for making computations of the ranking functions and the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide program.
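A minimal sketch of that arithmetic follows, under assumed discounting conventions (annual cash flows, a constant discount rate) and with made-up numbers; it is not the report's actual procedure or data.

```python
# Sketch of the net-present-value-per-program-dollar ranking quantity.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def npv_per_program_dollar(savings, costs, doe_funding, rate=0.10):
    """NPV of petroleum savings minus consumer costs, normalized by the
    present value of DOE program funding."""
    npv = present_value(savings, rate) - present_value(costs, rate)
    return npv / present_value(doe_funding, rate)

savings = [0, 5e6, 12e6, 20e6]    # $ value of petroleum saved per year (assumed)
costs   = [2e6, 3e6, 3e6, 3e6]    # incremental consumer costs per year (assumed)
funding = [4e6, 2e6, 0, 0]        # DOE program outlays per year (assumed)
print(npv_per_program_dollar(savings, costs, funding))
```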
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Equating functions are supposed to be population invariant, meaning that the choice of subpopulation used to compute the equating function should not matter. The extent to which equating functions are population invariant is typically assessed in terms of practical difference criteria that do not account for equating functions' sampling…
Gauge coupling beta functions in the standard model to three loops.
Mihaila, Luminita N; Salomon, Jens; Steinhauser, Matthias
2012-04-13
In this Letter, we compute the three-loop corrections to the beta functions of the three gauge couplings in the standard model of particle physics using the minimal subtraction scheme and taking into account Yukawa and Higgs self-couplings.
Space shuttle configuration accounting functional design specification
NASA Technical Reports Server (NTRS)
1974-01-01
An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.
Multiscale modeling of metabolism, flows, and exchanges in heterogeneous organs
Bassingthwaighte, James B.; Raymond, Gary M.; Butterworth, Erik; Alessio, Adam; Caldwell, James H.
2010-01-01
Large-scale models accounting for the processes supporting metabolism and function in an organ or tissue with a marked heterogeneity of flows and metabolic rates are computationally complex and tedious to compute. Their use in the analysis of data from positron emission tomography (PET) and magnetic resonance imaging (MRI) requires model reduction since the data are composed of concentration–time curves from hundreds of regions of interest (ROI) within the organ. Within each ROI, one must account for blood flow, intracapillary gradients in concentrations, transmembrane transport, and intracellular reactions. Using modular design, we configured a whole organ model, GENTEX, to allow adaptive usage for multiple reacting molecular species while omitting computation of unused components. The temporal and spatial resolution and the number of species are adaptable and the numerical accuracy and computational speed is adjustable during optimization runs, which increases accuracy and spatial resolution as convergence approaches. An application to the interpretation of PET image sequences after intravenous injection of 13NH3 provides functional image maps of regional myocardial blood flows. PMID:20201893
Electronic spreadsheet vs. manual payroll.
Kiley, M M
1991-01-01
Medical groups with direct employees must employ someone or contract with a company to compute payroll, writes Michael Kiley, Ph.D., M.P.H. However, many medical groups, including small ones, own a personal or minicomputer to handle accounts receivable. Kiley explains, in detail, how this same computer and a spreadsheet program also can be used to perform payroll functions.
Information Infrastructures for Integrated Enterprises
1993-05-01
PROCESSING… demographic studies; preliminary CAFE and… CAM realization; schedule leveling; rapid tooling; continuous cost accounting/administrative reports… companies might consider franchising some facets of indirect labor, such as selected functions of administration, finance, and human resources. Incorporate as…vices. CAFE: Corporate Average Fuel Economy; CAD: Computer-Aided Design; CAE: Computer-Aided Engineering; CAIS: Common Ada Programming Support Environment
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1977-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell.
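A hedged sketch of the general approach (derivative-free optimization of a static output-feedback gain) follows, using SciPy's Powell method as a modern stand-in for the Zangwill-Powell procedure; the system matrices, weights, initial-state covariance, and Lyapunov-equation cost are illustrative assumptions, not the paper's model.

```python
# Derivative-free computation of a static output-feedback gain K minimizing an
# infinite-horizon quadratic cost, evaluated via a continuous Lyapunov equation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])          # only one measurable output
Q = np.eye(2); R = np.eye(1)
X0 = np.eye(2)                      # covariance of the uncertain initial state

def cost(k_flat):
    K = k_flat.reshape(1, 1)
    Acl = A - B @ K @ C
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return 1e6                  # penalize unstable closed loops
    # Solve Acl' P + P Acl = -(Q + C'K'RKC); cost J = trace(P X0)
    P = solve_continuous_lyapunov(Acl.T, -(Q + C.T @ K.T @ R @ K @ C))
    return float(np.trace(P @ X0))

res = minimize(cost, x0=np.array([1.0]), method="Powell")
print("output feedback gain:", res.x, "cost:", res.fun)
```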
Automated attendance accounting system
NASA Technical Reports Server (NTRS)
Chapman, C. P. (Inventor)
1973-01-01
An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressable buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as a temporary storage. Data from the terminals is applied to the data buffers on a digit by digit basis for transfer via a multiplexer to the computer.
NASA Astrophysics Data System (ADS)
Ramakrishnan, N.; Tourdot, Richard W.; Eckmann, David M.; Ayyaswamy, Portonovo S.; Muzykantov, Vladimir R.; Radhakrishnan, Ravi
2016-06-01
In order to achieve selective targeting of affinity-ligand coated nanoparticles to the target tissue, it is essential to understand the key mechanisms that govern their capture by the target cell. Next-generation pharmacokinetic (PK) models that systematically account for proteomic and mechanical factors can accelerate the design, validation and translation of targeted nanocarriers (NCs) in the clinic. Towards this objective, we have developed a computational model to delineate the roles played by target protein expression and mechanical factors of the target cell membrane in determining the avidity of functionalized NCs to live cells. Model results show quantitative agreement with in vivo experiments when specific and non-specific contributions to NC binding are taken into account. The specific contributions are accounted for through extensive simulations of multivalent receptor-ligand interactions, membrane mechanics and entropic factors such as membrane undulations and receptor translation. The computed NC avidity is strongly dependent on ligand density, receptor expression, bending mechanics of the target cell membrane, as well as entropic factors associated with the membrane and the receptor motion. Our computational model can predict the in vivo targeting levels of the intercellular adhesion molecule-1 (ICAM1)-coated NCs targeted to the lung, heart, kidney, liver and spleen of mouse, when the contributions due to endothelial capture are accounted for. The effect of other cells (such as monocytes) does not improve the model predictions at steady state. We demonstrate the predictive utility of our model by predicting partitioning coefficients of functionalized NCs in mice and human tissues and report the statistical accuracy of our model predictions under different scenarios.
Functional CAR models for large spatially correlated functional datasets.
Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S
2016-01-01
We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.
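As background for readers unfamiliar with CAR priors, the sketch below builds the standard (non-functional) conditional autoregressive precision matrix for areal data on a lattice; the functional CAR model described above lets such spatial parameters vary and borrow strength across the functional domain. This is not the paper's model, only the familiar building block, with an invented toy lattice.

```python
# Standard proper CAR precision: Q = tau * (D - rho * W), with W the lattice
# adjacency matrix and D the diagonal matrix of neighbour counts.
import numpy as np

def car_precision(W, rho=0.9, tau=1.0):
    """W: symmetric 0/1 adjacency matrix of the areal units."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# 4 areal units on a line: 1-2-3-4
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = car_precision(W)
cov = np.linalg.inv(Q)      # implied joint covariance of the spatial effects
print(cov.round(3))
```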
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to take account of the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
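For context, the usual way the Pauli exclusion principle enters ensemble Monte Carlo transport codes is as a rejection step on the proposed final state; the sketch below illustrates only that generic step, with a toy occupancy grid, and is not the authors' graphene e-e algorithm.

```python
# Generic Pauli-blocking rejection: a proposed final state k' is accepted with
# probability 1 - f(k'), where f is the current estimate of the distribution
# function (occupancy <= 1). Grid and proposal are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def accept_final_state(f_grid, k_index):
    """Reject the scattering event if the target cell is (nearly) full."""
    return rng.random() < 1.0 - f_grid[k_index]

# toy occupancy histogram over a 1D k-grid, nearly degenerate near k = 0
f_grid = np.clip(np.exp(-np.linspace(0, 3, 50)), 0, 1)
proposed = 3                       # index of the proposed final cell
if accept_final_state(f_grid, proposed):
    print("scattering event accepted")
else:
    print("scattering event Pauli-blocked (treated as self-scattering)")
```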
29 CFR 541.201 - Directly related to management or general business operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Administrative Employees § 541.201 Directly related to... operations includes, but is not limited to, work in functional areas such as tax; finance; accounting...
29 CFR 541.201 - Directly related to management or general business operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Administrative Employees § 541.201 Directly related to... operations includes, but is not limited to, work in functional areas such as tax; finance; accounting...
29 CFR 541.201 - Directly related to management or general business operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Administrative Employees § 541.201 Directly related to... operations includes, but is not limited to, work in functional areas such as tax; finance; accounting...
29 CFR 541.201 - Directly related to management or general business operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Administrative Employees § 541.201 Directly related to... operations includes, but is not limited to, work in functional areas such as tax; finance; accounting...
Localized overlap algorithm for unexpanded dispersion energies
NASA Astrophysics Data System (ADS)
Rob, Fazle; Misquitta, Alston J.; Podeszwa, Rafał; Szalewicz, Krzysztof
2014-03-01
A first-principles-based, linearly scaling algorithm has been developed for calculations of dispersion energies from frequency-dependent density susceptibility (FDDS) functions, taking account of charge-overlap effects. The transition densities in FDDSs are fitted by a set of auxiliary atom-centered functions. The terms in the dispersion energy expression involving products of such functions are computed either from the unexpanded (exact) formula or from inexpensive asymptotic expansions, depending on the location of these functions relative to the dimer configuration. This approach leads to significant savings of computational resources. In particular, for a dimer consisting of two elongated monomers with 81 atoms each in a head-to-head configuration, the most favorable case for our algorithm, a 43-fold speedup has been achieved while the approximate dispersion energy differs by less than 1% from that computed using the standard unexpanded approach. In contrast, the dispersion energy computed from the distributed asymptotic expansion differs by dozens of percent in the van der Waals minimum region. A further increase of the size of each monomer would result in only a small increase in cost since all the additional terms would be computed from the asymptotic expansion.
Computational Prediction of Kinetic Rate Constants
2006-11-30
…without requiring additional data. Zero-point energy (ZPE) anharmonicity has a large effect on the accuracy of approximate partition function estimates. If… the accurate ZPE is taken into account, separable-approximation partition functions using the most accurate torsion treatment and harmonic treatments… for the remaining degrees of freedom agree with accurate QM partition functions to within a mean accuracy of 9%. If no ZPE anharmonicity correction…
Concave utility, transaction costs, and risk in measuring discounting of delayed rewards.
Kirby, Kris N; Santiesteban, Mariana
2003-01-01
Research has consistently found that the decline in the present values of delayed rewards as delay increases is better fit by hyperbolic than by exponential delay-discounting functions. However, concave utility, transaction costs, and risk each could produce hyperbolic-looking data, even when the underlying discounting function is exponential. In Experiments 1 (N = 45) and 2 (N = 103), participants placed bids indicating their present values of real future monetary rewards in computer-based 2nd-price auctions. Both experiments suggest that utility is not sufficiently concave to account for the superior fit of hyperbolic functions. Experiment 2 provided no evidence that the effects of transaction costs and risk are large enough to account for the superior fit of hyperbolic functions.
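Since the contrast between hyperbolic and exponential discounting is central here, the sketch below shows the two candidate value functions and a simple least-squares fit to invented bid data; the functional forms are the standard ones from the discounting literature, while the data, starting values, and reward amount are illustrative assumptions, not the experiments' data.

```python
# Hyperbolic vs. exponential discounting fit to made-up present-value bids.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(delay, k, amount=100.0):
    return amount / (1.0 + k * delay)          # Mazur-style hyperbola

def exponential(delay, k, amount=100.0):
    return amount * np.exp(-k * delay)

delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)    # days
bids   = np.array([96, 88, 72, 55, 45, 33], dtype=float)    # present values ($)

(k_hyp,), _ = curve_fit(hyperbolic, delays, bids, p0=[0.01])
(k_exp,), _ = curve_fit(exponential, delays, bids, p0=[0.01])

sse = lambda model, k: np.sum((bids - model(delays, k)) ** 2)
print(f"hyperbolic  k={k_hyp:.4f}, SSE={sse(hyperbolic, k_hyp):.1f}")
print(f"exponential k={k_exp:.4f}, SSE={sse(exponential, k_exp):.1f}")
```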
Tawhai, M. H.; Clark, A. R.; Donovan, G. M.; Burrowes, K. S.
2011-01-01
Computational models of lung structure and function necessarily span multiple spatial and temporal scales, i.e., dynamic molecular interactions give rise to whole organ function, and the link between these scales cannot be fully understood if only molecular or organ-level function is considered. Here, we review progress in constructing multiscale finite element models of lung structure and function that are aimed at providing a computational framework for bridging the spatial scales from molecular to whole organ. These include structural models of the intact lung, embedded models of the pulmonary airways that couple to model lung tissue, and models of the pulmonary vasculature that account for distinct structural differences at the extra- and intra-acinar levels. Biophysically based functional models for tissue deformation, pulmonary blood flow, and airway bronchoconstriction are also described. The development of these advanced multiscale models has led to a better understanding of complex physiological mechanisms that govern regional lung perfusion and emergent heterogeneity during bronchoconstriction. PMID:22011236
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to take account of the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
A computational substrate for incentive salience.
McClure, Samuel M; Daw, Nathaniel D; Montague, P Read
2003-08-01
Theories of dopamine function are at a crossroads. Computational models derived from single-unit recordings capture changes in dopaminergic neuron firing rate as a prediction error signal. These models employ the prediction error signal in two roles: learning to predict future rewarding events and biasing action choice. Conversely, pharmacological inhibition or lesion of dopaminergic neuron function diminishes the ability of an animal to motivate behaviors directed at acquiring rewards. These lesion experiments have raised the possibility that dopamine release encodes a measure of the incentive value of a contemplated behavioral act. The most complete psychological idea that captures this notion frames the dopamine signal as carrying 'incentive salience'. On the surface, these two competing accounts of dopamine function seem incommensurate. To the contrary, we demonstrate that both of these functions can be captured in a single computational model of the involvement of dopamine in reward prediction for the purpose of reward seeking.
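To make the "single computational model" claim concrete, here is a toy sketch of a temporal-difference prediction error used both to learn state values and to bias choice; the environment, parameters, and softmax choice rule are our illustrative assumptions, not the authors' published model.

```python
# TD(0) prediction error playing two roles: learning predictions and biasing
# action selection. Toy deterministic chain with a terminal reward.
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)                       # learned state values

def td_error(s, s_next, reward):
    return reward + gamma * V[s_next] - V[s]

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        reward = 1.0 if s_next == n_states - 1 else 0.0
        delta = td_error(s, s_next, reward)  # dopamine-like error signal
        V[s] += alpha * delta                # role 1: learning predictions
        s = s_next

# Role 2: biasing action choice by the value each candidate action predicts,
# here a softmax over two hypothetical options (approach cue vs. do nothing).
candidate_values = np.array([V[1], 0.2])
p_choice = np.exp(candidate_values) / np.exp(candidate_values).sum()
print("learned values:", V.round(2), "choice probabilities:", p_choice.round(2))
```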
Computational Thermodynamics Characterization of 7075, 7039, and 7020 Aluminum Alloys Using JMatPro
2011-09-01
…parameters of temperature and time may be selected to simulate effects on microstructure during annealing, solution treating, quenching, and tempering… nucleation may be taken into account by use of a wetting angle function. Activation energy may be taken into account for rapidly quenched alloys… the stable forms of precipitates that result from solutionizing, annealing or intermediate heat treatment, and phase formation during nonequilibrium…
Realistic simulated MRI and SPECT databases. Application to SPECT/MRI registration evaluation.
Aubert-Broche, Berengere; Grova, Christophe; Reilhac, Anthonin; Evans, Alan C; Collins, D Louis
2006-01-01
This paper describes the construction of simulated SPECT and MRI databases that account for realistic anatomical and functional variability. The data is used as a gold standard to evaluate four SPECT/MRI similarity-based registration methods. Simulation realism was accounted for using accurate physical models of data generation and acquisition. MRI and SPECT simulations were generated from three subjects to take into account inter-subject anatomical variability. Functional SPECT data were computed from six functional models of brain perfusion. Previous models of normal perfusion and ictal perfusion observed in Mesial Temporal Lobe Epilepsy (MTLE) were considered to generate functional variability. We studied the impact that noise and intensity non-uniformity in MRI simulations, as well as SPECT scatter correction, may have on registration accuracy. We quantified the amount of registration error caused by anatomical and functional variability. Registration involving ictal data was less accurate than registration involving normal data. MR intensity non-uniformity was the main factor decreasing registration accuracy. The proposed simulated database is promising for evaluating many functional neuroimaging methods involving MRI and SPECT data.
Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.
2011-01-01
Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
Computational complexity of Boolean functions
NASA Astrophysics Data System (ADS)
Korshunov, Aleksei D.
2012-02-01
Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.
PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS
Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.
2013-01-01
Background: Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. Design: The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full-scale multicenter clinical trial (N=1635). We describe relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits and describe performance measures (completeness, intra-class correlations). Results: Computer-based assessments of cognitive function had consistent relationships across the pilot and full-scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures; however, rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion: Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390
A frequentist approach to computer model calibration
Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.
2016-05-05
The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
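For readers unfamiliar with this setup, the semiparametric calibration data model described above is commonly written as follows (the notation is ours, not necessarily the paper's):

\[ y_i = f(x_i, \theta) + \delta(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2), \]

where \(f\) is the computer model run at inputs \(x_i\) with calibration parameters \(\theta\), \(\delta(\cdot)\) is the nonparametric discrepancy between model and physical reality, and the identifiability problem arises because changes in \(\theta\) can be compensated by changes in \(\delta\).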
Reduced state feedback gain computation. [optimization and control theory for aircraft control
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model was developed that accounts for aircraft parameter and initial uncertainty, measurement noise, turbulence, pilot command, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell Method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1975-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.
East-West paths to unconventional computing.
Adamatzky, Andrew; Akl, Selim; Burgin, Mark; Calude, Cristian S; Costa, José Félix; Dehshibi, Mohammad Mahdi; Gunji, Yukio-Pegio; Konkoli, Zoran; MacLennan, Bruce; Marchal, Bruno; Margenstern, Maurice; Martínez, Genaro J; Mayne, Richard; Morita, Kenichi; Schumann, Andrew; Sergeyev, Yaroslav D; Sirakoulis, Georgios Ch; Stepney, Susan; Svozil, Karl; Zenil, Hector
2017-12-01
Unconventional computing is about breaking boundaries in thinking, acting and computing. Typical topics of this non-typical field include, but are not limited to, physics of computation, non-classical logics, new complexity measures, novel hardware, mechanical, chemical and quantum computing. Unconventional computing encourages a new style of thinking while practical applications are obtained from uncovering and exploiting principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems; in particular, efficient algorithms are developed, (almost) optimal architectures are designed and working prototypes of future computing devices are manufactured. This article includes idiosyncratic accounts of 'unconventional computing' scientists reflecting on their personal experiences, what attracted them to the field, their inspirations and discoveries. Copyright © 2017 Elsevier Ltd. All rights reserved.
Colour computer-generated holography for point clouds utilizing the Phong illumination model.
Symeonidou, Athanasia; Blinder, David; Schelkens, Peter
2018-04-16
A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
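For reference, the classical Phong illumination model that the method integrates can be written per colour channel as follows; the exact variant and parameterization used in the CGH pipeline may differ:

\[ I = k_a\, i_a + \sum_{m \in \text{lights}} \left[ k_d\, (\hat{L}_m \cdot \hat{N})\, i_{d,m} + k_s\, (\hat{R}_m \cdot \hat{V})^{\alpha}\, i_{s,m} \right], \]

where \(k_a\), \(k_d\), \(k_s\) are the ambient, diffuse, and specular reflection constants of the surface, \(\hat{N}\) is the surface normal, \(\hat{L}_m\) and \(\hat{R}_m\) are the direction to light \(m\) and the direction of its mirror reflection, \(\hat{V}\) is the direction to the viewer, and \(\alpha\) is the shininess exponent.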
Towards a neuro-computational account of prism adaptation.
Petitet, Pierre; O'Reilly, Jill X; O'Shea, Jacinta
2017-12-14
Prism adaptation has a long history as an experimental paradigm used to investigate the functional and neural processes that underlie sensorimotor control. In the neuropsychology literature, prism adaptation behaviour is typically explained by reference to a traditional cognitive psychology framework that distinguishes putative functions, such as 'strategic control' versus 'spatial realignment'. This theoretical framework lacks conceptual clarity, quantitative precision and explanatory power. Here, we advocate for an alternative computational framework that offers several advantages: 1) an algorithmic explanatory account of the computations and operations that drive behaviour; 2) expressed in quantitative mathematical terms; 3) embedded within a principled theoretical framework (Bayesian decision theory, state-space modelling); 4) that offers a means to generate and test quantitative behavioural predictions. This computational framework offers a route towards mechanistic neurocognitive explanations of prism adaptation behaviour. Thus it constitutes a conceptual advance compared to the traditional theoretical framework. In this paper, we illustrate how Bayesian decision theory and state-space models offer principled explanations for a range of behavioural phenomena in the field of prism adaptation (e.g. visual capture, magnitude of visual versus proprioceptive realignment, spontaneous recovery and dynamics of adaptation memory). We argue that this explanatory framework can advance understanding of the functional and neural mechanisms that implement prism adaptation behaviour, by enabling quantitative tests of hypotheses that go beyond merely descriptive mapping claims that 'brain area X is (somehow) involved in psychological process Y'. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
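As one concrete example of the state-space modelling framework advocated here, the sketch below implements a standard two-rate (fast/slow) adaptation model of the kind used to explain aftereffects and spontaneous recovery; the parameter values and the prism-exposure schedule are invented for illustration and are not fitted to prism-adaptation data.

```python
# Two-rate state-space model of adaptation: two internal states, updated by
# the same sensory prediction error but with different learning/retention rates.
import numpy as np

A_fast, B_fast = 0.60, 0.40       # fast process: learns and forgets quickly
A_slow, B_slow = 0.99, 0.02       # slow process: learns and forgets slowly

n_trials = 300
perturbation = np.zeros(n_trials)
perturbation[:150] = 10.0         # e.g. a 10-degree displacement, then removal

x_fast = x_slow = 0.0
output = np.zeros(n_trials)
for t in range(n_trials):
    output[t] = x_fast + x_slow               # total adaptive compensation
    error = perturbation[t] - output[t]       # sensory prediction error
    x_fast = A_fast * x_fast + B_fast * error
    x_slow = A_slow * x_slow + B_slow * error

# After the perturbation is removed, the fast process unwinds quickly while
# the slow process persists, producing aftereffects and, with appropriate
# schedules, spontaneous recovery.
print(output[[0, 75, 149, 160, 299]].round(2))
```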
1986 Petroleum Software Directory. [800 mini, micro and mainframe computer software packages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-01-01
Pennwell's 1986 Petroleum Software Directory is a complete listing of software created specifically for the petroleum industry. Details are provided on over 800 mini, micro and mainframe computer software packages from more than 250 different companies. An accountant can locate programs to automate bookkeeping functions in large oil and gas production firms. A pipeline engineer will find programs designed to calculate line flow and wellbore pressure drop.
Design for cyclic loading endurance of composites
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.
1993-01-01
The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to the random pressure. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.
Solid harmonic wavelet scattering for predictions of molecule properties
NASA Astrophysics Data System (ADS)
Eickenberg, Michael; Exarchakis, Georgios; Hirn, Matthew; Mallat, Stéphane; Thiry, Louis
2018-06-01
We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory (DFT). Using Gaussian-type orbital functions, we create surrogate electronic densities of the molecule from which we compute invariant "solid harmonic scattering coefficients" that account for different types of interactions at different scales. Multilinear regressions of various physical properties of molecules are computed from these invariant coefficients. Numerical experiments show that these regressions have near state-of-the-art performance, even with relatively few training examples. Predictions over small sets of scattering coefficients can reach a DFT precision while being interpretable.
Autism as a neural systems disorder: a theory of frontal-posterior underconnectivity.
Just, Marcel Adam; Keller, Timothy A; Malave, Vicente L; Kana, Rajesh K; Varma, Sashank
2012-04-01
The underconnectivity theory of autism attributes the disorder to lower anatomical and functional systems connectivity between frontal and more posterior cortical processing. Here we review evidence for the theory and present a computational model of an executive functioning task (Tower of London) implementing the assumptions of underconnectivity. We make two modifications to a previous computational account of performance and brain activity in typical individuals in the Tower of London task (Newman et al., 2003): (1) the communication bandwidth between frontal and parietal areas was decreased and (2) the posterior centers were endowed with more executive capability (i.e., more autonomy, an adaptation is proposed to arise in response to the lowered frontal-posterior bandwidth). The autism model succeeds in matching the lower frontal-posterior functional connectivity (lower synchronization of activation) seen in fMRI data, as well as providing insight into behavioral response time results. The theory provides a unified account of how a neural dysfunction can produce a neural systems disorder and a psychological disorder with the widespread and diverse symptoms of autism. Copyright © 2012 Elsevier Ltd. All rights reserved.
Autism as a neural systems disorder: A theory of frontal-posterior underconnectivity
Just, Marcel Adam; Keller, Timothy A.; Malave, Vicente L.; Kana, Rajesh K.; Varma, Sashank
2012-01-01
The underconnectivity theory of autism attributes the disorder to lower anatomical and functional systems connectivity between frontal and more posterior cortical processing. Here we review evidence for the theory and present a computational model of an executive functioning task (Tower of London) implementing the assumptions of underconnectivity. We make two modifications to a previous computational account of performance and brain activity in typical individuals in the Tower of London task (Newman et al., 2003): (1) the communication bandwidth between frontal and parietal areas was decreased and (2) the posterior centers were endowed with more executive capability (i.e., more autonomy, an adaptation is proposed to arise in response to the lowered frontal-posterior bandwidth). The autism model succeeds in matching the lower frontal-posterior functional connectivity (lower synchronization of activation) seen in fMRI data, as well as providing insight into behavioral response time results. The theory provides a unified account of how a neural dysfunction can produce a neural systems disorder and a psychological disorder with the widespread and diverse symptoms of autism. PMID:22353426
1993-08-01
…pricing and sales, order processing, and purchasing. The class of manufacturing planning functions includes aggregate production planning, materials… level. Depending on the application, each control level will have a number of functions associated with it. For instance, order processing, purchasing… include accounting, sales forecasting, product costing, pricing and sales, order processing, and purchasing. The class of manufacturing planning functions…
Estimating Effects of Multipath Propagation on GPS Signals
NASA Technical Reports Server (NTRS)
Byun, Sung; Hajj, George; Young, Lawrence
2005-01-01
Multipath Simulator Taking into Account Reflection and Diffraction (MUSTARD) is a computer program that simulates effects of multipath propagation on received Global Positioning System (GPS) signals. MUSTARD is a very efficient means of estimating multipath-induced position and phase errors as functions of time, given the positions and orientations of GPS satellites, the GPS receiver, and any structures near the receiver as functions of time. MUSTARD traces each signal from a GPS satellite to the receiver, accounting for all possible paths the signal can take, including all paths that include reflection and/or diffraction from surfaces of structures near the receiver and on the satellite. Reflection and diffraction are modeled by use of the geometrical theory of diffraction. The multipath signals are added to the direct signal after accounting for the gain of the receiving antenna. Then, in a simulation of a delay-lock tracking loop in the receiver, the multipath-induced range and phase errors as measured by the receiver are estimated. All of these computations are performed for both right circular polarization and left circular polarization of both the L1 (1.57542-GHz) and L2 (1.2276-GHz) GPS signals.
The free-energy self: a predictive coding account of self-recognition.
Apps, Matthew A J; Tsakiris, Manos
2014-04-01
Recognising and representing one's self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. In this account one's body is processed in a Bayesian manner as the most likely to be "me". Such probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up "surprise" signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition and particularly evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest. Copyright © 2013 Elsevier Ltd. All rights reserved.
The free-energy self: A predictive coding account of self-recognition
Apps, Matthew A.J.; Tsakiris, Manos
2013-01-01
Recognising and representing one’s self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. In this account one’s body is processed in a Bayesian manner as the most likely to be “me”. Such probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up “surprise” signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition and particularly evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest. PMID:23416066
Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L.
2013-01-01
The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. A future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties. PMID:24385957
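To make Hypothesis I concrete, the toy sketch below computes one plausible reading of the Degree of Merger, namely the maximum depth of nested Merge applications in a binary-branching tree; the tree encoding and the example parse are our assumptions, not the authors' stimuli or exact definition.

```python
# Toy Degree of Merger (DoM): maximum nesting depth of Merge in a binary tree.
def degree_of_merger(node):
    """A node is either a word (str) or a Merge of two subtrees (2-tuple)."""
    if isinstance(node, str):
        return 0
    left, right = node
    return 1 + max(degree_of_merger(left), degree_of_merger(right))

# "the cat chased the dog" parsed as [[the cat] [chased [the dog]]]
tree = (("the", "cat"), ("chased", ("the", "dog")))
print(degree_of_merger(tree))   # -> 3
```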
Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L
2013-01-01
The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. A future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties.
Analysis and calculation of lightning-induced voltages in aircraft electrical circuits
NASA Technical Reports Server (NTRS)
Plumer, J. A.
1974-01-01
Techniques to calculate the transfer functions relating lightning-induced voltages in aircraft electrical circuits to aircraft physical characteristics and lightning current parameters are discussed. The analytical work was carried out concurrently with an experimental program of measurements of lightning-induced voltages in the electrical circuits of an F89-J aircraft. A computer program, ETCAL, developed earlier to calculate resistive and inductive transfer functions is refined to account for skin effect, providing results more valid over a wider range of lightning waveshapes than formerly possible. A computer program, WING, is derived to calculate the resistive and inductive transfer functions between a basic aircraft wing and a circuit conductor inside it. Good agreement is obtained between transfer inductances calculated by WING and those reduced from measured data by ETCAL. This computer program shows promise of expansion to permit eventual calculation of potential lightning-induced voltages in electrical circuits of complete aircraft in the design stage.
Lehar, Steven
2003-01-01
Visual illusions and perceptual grouping phenomena offer an invaluable tool for probing the computational mechanism of low-level visual processing. Some illusions, like the Kanizsa figure, reveal illusory contours that form edges collinear with the inducing stimulus. This kind of illusory contour has been modeled by neural network models by way of cells equipped with elongated spatial receptive fields designed to detect and complete the collinear alignment. There are, however, other illusory groupings which are not so easy to account for in neural network terms. The Ehrenstein illusion exhibits an illusory contour that forms a contour orthogonal to the stimulus instead of collinear with it. Other perceptual grouping effects reveal illusory contours that exhibit a sharp corner or vertex, and still others take the form of vertices defined by the intersection of three, four, or more illusory contours that meet at a point. A direct extension of the collinear completion models to account for these phenomena tends towards a combinatorial explosion, because it would suggest cells with specialized receptive fields configured to perform each of those completion types, each of which would have to be replicated at every location and every orientation across the visual field. These phenomena therefore challenge the adequacy of the neural network approach to account for these diverse perceptual phenomena. I have proposed elsewhere an alternative paradigm of neurocomputation in the harmonic resonance theory (Lehar 1999, see website), whereby pattern recognition and completion are performed by spatial standing waves across the neural substrate. The standing waves perform a computational function analogous to that of the spatial receptive fields of the neural network approach, except that, unlike that paradigm, a single resonance mechanism performs a function equivalent to a whole array of spatial receptive fields of different spatial configurations and of different orientations, and thereby avoids the combinatorial explosion inherent in the older paradigm. The present paper presents the directional harmonic model, a more specific development of the harmonic resonance theory, designed to account for specific perceptual grouping phenomena. Computer simulations of the directional harmonic model show that it can account for collinear contours as observed in the Kanizsa figure, orthogonal contours as seen in the Ehrenstein illusion, and a number of illusory vertex percepts composed of two, three, or more illusory contours that meet in a variety of configurations.
Introduction to Software Packages. [Final Report].
ERIC Educational Resources Information Center
Frankel, Sheila, Ed.; And Others
This document provides an introduction to applications computer software packages that support functional managers in government and encourages the use of such packages as an alternative to in-house development. A review of current application areas includes budget/project management, financial management/accounting, payroll, personnel,…
Relativistic effects in ab initio electron-nucleus scattering
NASA Astrophysics Data System (ADS)
Rocco, Noemi; Leidemann, Winfried; Lovato, Alessandro; Orlandini, Giuseppina
2018-05-01
The electromagnetic responses obtained from Green's function Monte Carlo (GFMC) calculations are based on realistic treatments of nuclear interactions and currents. The main limitations of this method come from its nonrelativistic nature and its computational cost, the latter hampering the direct evaluation of the inclusive cross sections as measured by experiments. We extend the applicability of GFMC in the quasielastic region to intermediate momentum transfers by performing the calculations in a reference frame that minimizes nucleon momenta. Additional relativistic effects in the kinematics are accounted for employing the two-fragment model. In addition, we developed a novel algorithm, based on the concept of first-kind scaling, to compute the inclusive electromagnetic cross section of 4He through an accurate and reliable interpolation of the response functions. A very good agreement is obtained between theoretical and experimental cross sections for a variety of kinematical setups. This offers a promising prospect for the data analysis of neutrino-oscillation experiments that requires an accurate description of nuclear dynamics in which relativistic effects are fully accounted for.
Otto, A Ross; Gershman, Samuel J; Markman, Arthur B; Daw, Nathaniel D
2013-05-01
A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. In these accounts, a flexible but computationally expensive model-based reinforcement-learning system has been contrasted with a less flexible but more efficient model-free reinforcement-learning system. The factors governing which system controls behavior-and under what circumstances-are still unclear. Following the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrated that having human decision makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement-learning strategy. Further, we showed that, across trials, people negotiate the trade-off between the two systems dynamically as a function of concurrent executive-function demands, and people's choice latencies reflect the computational expenses of the strategy they employ. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources.
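For readers unfamiliar with the two classes of algorithm being contrasted, the sketch below shows generic textbook forms of a model-free temporal-difference update and a model-based value computed by expanding an explicit transition model; the task, state space, and parameters are placeholders, not the authors' two-stage paradigm or fitted model.

```python
import numpy as np

# Generic textbook forms of the two systems contrasted above (placeholders only;
# not the authors' task or fitted model).
alpha, gamma = 0.1, 0.95
Q_mf = np.zeros((5, 2))                      # model-free cached values: 5 states x 2 actions

def model_free_update(s, a, r, s_next):
    """Cheap update: only the experienced state-action pair changes."""
    td_error = r + gamma * Q_mf[s_next].max() - Q_mf[s, a]
    Q_mf[s, a] += alpha * td_error

def model_based_value(s, a, T, R, depth=3):
    """Expensive, flexible evaluation: expand the transition model T and reward R."""
    if depth == 0:
        return R[s, a]
    future = sum(T[s, a, s2] * max(model_based_value(s2, a2, T, R, depth - 1)
                                   for a2 in range(2)) for s2 in range(5))
    return R[s, a] + gamma * future

rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(5), size=(5, 2))   # T[s, a] is a distribution over next states
R = rng.normal(size=(5, 2))
model_free_update(0, 1, R[0, 1], 2)
print(Q_mf[0, 1], model_based_value(0, 1, T, R))
```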
NASA Astrophysics Data System (ADS)
Andersson, Robin; Torstensson, Peter T.; Kabo, Elena; Larsson, Fredrik
2015-11-01
A two-dimensional computational model for assessment of rolling contact fatigue induced by discrete rail surface irregularities, especially in the context of so-called squats, is presented. Dynamic excitation in a wide frequency range is considered in computationally efficient time-domain simulations of high-frequency dynamic vehicle-track interaction accounting for transient non-Hertzian wheel-rail contact. Results from dynamic simulations are mapped onto a finite element model to resolve the cyclic, elastoplastic stress response in the rail. Ratcheting under multiple wheel passages is quantified. In addition, low cycle fatigue impact is quantified using the Jiang-Sehitoglu fatigue parameter. The functionality of the model is demonstrated by numerical examples.
NASA Astrophysics Data System (ADS)
John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.
2016-04-01
We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.
Versino, Daniele; Bronkhorst, Curt Allan
2018-01-31
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
NASA Astrophysics Data System (ADS)
Czerepicki, A.; Koniak, M.
2017-06-01
The paper presents a method of modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, which was built from battery operating characteristics obtained experimentally. This model was implemented as a computer program that uses a database to store battery characteristics. The battery aging process is a new, extended functionality of the model. The simulation algorithm uses real measurements of battery capacity as a function of the number of charge and discharge cycles. The simulation can take into account incomplete charge or discharge cycles, which are characteristic of electrically powered transport. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of the selected means of transport.
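A minimal sketch of the idea described above, assuming that partial charge/discharge cycles are accumulated into equivalent full cycles and that the remaining capacity is interpolated from a measured fade curve; the curve values and load profile below are made-up placeholders, not the authors' measurements.

```python
import numpy as np

# Minimal sketch (placeholder data, not the authors' measurements): partial cycles
# are accumulated into equivalent full cycles and the remaining capacity is
# interpolated from a measured capacity-fade curve.
cycles_meas   = np.array([0, 200, 400, 600, 800, 1000])          # equivalent full cycles
capacity_meas = np.array([1.00, 0.97, 0.94, 0.90, 0.86, 0.80])   # normalized capacity

def capacity_after(partial_cycle_depths):
    """Each entry is the depth of one (possibly incomplete) cycle, e.g. 0.3 = 30% swing."""
    equivalent_full_cycles = float(np.sum(partial_cycle_depths))
    return float(np.interp(equivalent_full_cycles, cycles_meas, capacity_meas))

# A transport-like profile: many shallow cycles rather than full ones
profile = [0.25] * 2000                       # 2000 quarter-depth cycles = 500 full cycles
print(capacity_after(profile))                # ~0.92 with the placeholder curve
```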
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.
The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
Wang, Menghua
2016-05-30
To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude lakes, and the same Rayleigh LUTs are applicable for all satellite sensors over the global ocean and inland waters. The new Rayleigh LUTs have been implemented in the VIIRS-SNPP ocean color data processing for routine production of global ocean color and inland water products.
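The core of the SRF correction is band averaging: weighting the hyperspectral Rayleigh radiance by the sensor response rather than evaluating it at the band center. The sketch below illustrates that step only, with made-up stand-ins for the LUT output and the SRF; it is not the operational VIIRS algorithm.

```python
import numpy as np

# Illustrative band-averaging step only (made-up inputs; not the operational code):
# SRF-weighted TOA Rayleigh radiance versus the band-center value.
wavelength = np.linspace(380.0, 440.0, 121)                # nm, fine spectral grid
L_rayleigh = 1.0e-3 * (wavelength / 410.0) ** -4           # toy ~lambda^-4 radiance shape
srf = np.exp(-0.5 * ((wavelength - 410.0) / 8.0) ** 2)     # toy Gaussian SRF for a 410 nm band

band_averaged = np.sum(L_rayleigh * srf) / np.sum(srf)     # SRF-weighted radiance
band_center   = 1.0e-3                                     # radiance at 410 nm only
print(100.0 * (band_averaged / band_center - 1.0), "% difference")
```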
The Development of a Dynamic Geomagnetic Cutoff Rigidity Model for the International Space Station
NASA Technical Reports Server (NTRS)
Smart, D. F.; Shea, M. A.
1999-01-01
We have developed a computer model of geomagnetic vertical cutoffs applicable to the orbit of the International Space Station. This model accounts for the change in geomagnetic cutoff rigidity as a function of geomagnetic activity level. This model was delivered to NASA Johnson Space Center in July 1999 and tested on the Space Radiation Analysis Group DEC-Alpha computer system to ensure that it will properly interface with other software currently used at NASA JSC. The software was designed for ease of being upgraded as other improved models of geomagnetic cutoff as a function of magnetic activity are developed.
Automatic temperature computation for realistic IR simulation
NASA Astrophysics Data System (ADS)
Le Goff, Alain; Kersaudy, Philippe; Latger, Jean; Cathala, Thierry; Stolte, Nilo; Barillot, Philippe
2000-07-01
Polygon temperature computation in 3D virtual scenes is fundamental for IR image simulation. This article describes in detail the temperature calculation software and its current extensions, briefly presented in [1]. This software, called MURET, is used by the simulation workshop CHORALE of the French DGA. MURET is a one-dimensional thermal software package that accurately takes into account the material thermal attributes of the three-dimensional scene and the variation of the environment characteristics (atmosphere) as a function of time. Concerning the environment, absorbed incident fluxes are computed wavelength by wavelength, for each half hour, during the 24 hours before the time of the simulation. For each polygon, incident fluxes are composed of direct solar fluxes and sky illumination (including diffuse solar fluxes). Concerning the materials, classical thermal attributes such as conductivity, absorption, spectral emissivity, density, specific heat, thickness, and convection coefficients are associated with several layers and taken into account. In the future, MURET will be able to simulate permeable natural materials (water influence) and vegetation materials (woods). This model of thermal attributes yields a very accurate polygon temperature computation for the complex 3D databases often found in CHORALE simulations. The kernel of MURET consists of an efficient ray tracer, which computes the history (over 24 hours) of the shadowed parts of the 3D scene, and a library responsible for the thermal computations. The main originality concerns the way the heating fluxes are computed. Using ray tracing, the flux received at each 3D point of the scene accurately takes into account the masking (hidden surfaces) between objects. In addition, this library supplies other thermal modules, such as a thermal shows computation tool.
Student Financial Aid Delivery System.
ERIC Educational Resources Information Center
O'Neal, John R.; Carpenter, Catharine A.
1983-01-01
Ohio University's use of computer programing for the need analysis and internal accounting functions in financial aid is described. A substantial improvement of services resulted, with 6,000-10,000 students and the offices of financial aid, bursar, registration, student records, housing, admissions, and controller assisted in the process. Costs…
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1973-01-01
The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions are developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2 power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.
On computing special functions in marine engineering
NASA Astrophysics Data System (ADS)
Constantinescu, E.; Bogdan, M.
2015-11-01
Important modeling applications in marine engineering lead us to a special class of solutions of difficult differential equations with variable coefficients. In order to solve and implement such models (in wave theory, acoustics, hydrodynamics, electromagnetic waves, and many other engineering fields), it is necessary to compute the so-called special functions: Bessel functions, modified Bessel functions, spherical Bessel functions, and Hankel functions. The aim of this paper is to develop numerical solutions in Matlab for the above-mentioned special functions. Taking into account the main properties of Bessel and modified Bessel functions, we briefly present analytical solutions (where possible) in the form of series. In particular, the behavior of these special functions is studied using Matlab facilities: numerical solutions and plotting. Finally, the behavior of the special functions is compared, and other directions for investigating properties of Bessel and spherical Bessel functions are pointed out. The asymptotic forms of Bessel and modified Bessel functions allow determination of important properties of these functions; the modified Bessel functions tend to look like decaying and growing exponentials.
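The paper develops its routines in Matlab; as a rough equivalent, the Python/SciPy calls below evaluate the same families of special functions (a tooling substitution for illustration, not the authors' code).

```python
import numpy as np
from scipy.special import jv, yv, iv, kv, spherical_jn, hankel1

# Python/SciPy equivalents of the special functions discussed above (the paper
# itself uses Matlab; this is only a tooling substitution).
x = np.linspace(0.1, 20.0, 200)
order = 2

J = jv(order, x)                 # Bessel function of the first kind
Y = yv(order, x)                 # Bessel function of the second kind
I = iv(order, x)                 # modified Bessel function, grows roughly like exp(x)
K = kv(order, x)                 # modified Bessel function, decays roughly like exp(-x)
j_sph = spherical_jn(order, x)   # spherical Bessel function
H1 = hankel1(order, x)           # Hankel function of the first kind (complex-valued)

# Large-argument behavior noted in the abstract: I_n grows while K_n decays
print(I[-1], K[-1])
```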
NASA Astrophysics Data System (ADS)
Welty, N.; Rudolph, M.; Schäfer, F.; Apeldoorn, J.; Janovsky, R.
2013-07-01
This paper presents a computational methodology to predict the satellite system-level effects resulting from impacts of untrackable space debris particles. This approach seeks to improve on traditional risk assessment practices by looking beyond the structural penetration of the satellite and predicting the physical damage to internal components and the associated functional impairment caused by untrackable debris impacts. The proposed method combines a debris flux model with the Schäfer-Ryan-Lambert ballistic limit equation (BLE), which accounts for the inherent shielding of components positioned behind the spacecraft structure wall. Individual debris particle impact trajectories and component shadowing effects are considered and the failure probabilities of individual satellite components as a function of mission time are calculated. These results are correlated to expected functional impairment using a Boolean logic model of the system functional architecture considering the functional dependencies and redundancies within the system.
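A highly simplified sketch of how per-component failure probabilities and a Boolean functional model might be combined, assuming Poisson-distributed penetrating impacts; the expected impact rates are placeholders, and the debris flux model and Schäfer-Ryan-Lambert BLE that would supply them are not reproduced here.

```python
import numpy as np

# Highly simplified sketch (placeholder rates): per-component failure probabilities
# from Poisson-distributed penetrating impacts, combined through a simple
# series/parallel functional model. The debris flux model and the
# Schaefer-Ryan-Lambert BLE that would supply these rates are not reproduced here.
mission_years = 7.0
rate = {"obc": 1e-3, "battery_a": 4e-4, "battery_b": 4e-4, "transponder": 6e-4}  # failures/year

def p_fail(name, t=mission_years):
    return 1.0 - np.exp(-rate[name] * t)       # P(at least one penetrating impact)

# System fails if the OBC fails, OR both redundant batteries fail, OR the transponder fails.
p_batteries = p_fail("battery_a") * p_fail("battery_b")   # parallel redundancy
p_system = 1.0 - (1.0 - p_fail("obc")) * (1.0 - p_batteries) * (1.0 - p_fail("transponder"))
print(p_system)
```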
A machine learning approach for efficient uncertainty quantification using multiscale methods
NASA Astrophysics Data System (ADS)
Chan, Shing; Elsheikh, Ahmed H.
2018-02-01
Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
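A small sketch of the data-driven idea, assuming a regressor is fitted from local permeability patches to the corresponding coarse basis function values and then used in place of the local solves; the network architecture and the synthetic training pairs below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the data-driven surrogate idea (synthetic placeholders; not the paper's
# architecture or data): learn a map from a local permeability patch to the
# coarse-scale basis function values, then predict instead of solving local problems.
rng = np.random.default_rng(1)
n_samples, patch = 500, 8 * 8
log_perm = rng.normal(size=(n_samples, patch))                       # input patches
basis = np.tanh(0.1 * log_perm @ rng.normal(size=(patch, patch)))    # stand-in "solutions"

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
model.fit(log_perm[:400], basis[:400])

# For a new realization, basis functions come from the network rather than a local
# solve, which is where the speed-up for uncertainty quantification arises.
predicted_basis = model.predict(log_perm[400:])
print(predicted_basis.shape)    # (100, 64)
```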
Computational Modeling in Liver Surgery
Christ, Bruno; Dahmen, Uta; Herrmann, Karl-Heinz; König, Matthias; Reichenbach, Jürgen R.; Ricken, Tim; Schleicher, Jana; Ole Schwen, Lars; Vlaic, Sebastian; Waschinsky, Navina
2017-01-01
The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery. PMID:29249974
DOE Office of Scientific and Technical Information (OSTI.GOV)
A.A. Bingham; R.M. Ferrer; A.M. Ougouag
2009-09-01
An accurate and computationally efficient two or three-dimensional neutron diffusion model will be necessary for the development, safety parameters computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green’s function solution for the transverse integrated neutron diffusion equation is developed in two and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects non-physical discontinuity terms that arise in the transverse leakage due to the transverse integration procedure application to hexagonal geometry and cannot account for the effects of burnable poisons across nodal boundaries. The test code being developed for this document accounts for these terms by maintaining an inventory of neutrons by using the nodal balance equation as a constraint of the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green’s function solution to the resulting equation to derive a semi-analytical solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poutanen, Juri, E-mail: juri.poutanen@utu.fi
Rosseland mean opacity plays an important role in theories of stellar evolution and X-ray burst models. In the high-temperature regime, when most of the gas is completely ionized, the opacity is dominated by Compton scattering. Our aim here is to critically evaluate previous works on this subject and to compute the exact Rosseland mean opacity for Compton scattering over a broad range of temperature and electron degeneracy parameter. We use relativistic kinetic equations for Compton scattering and compute the photon mean free path as a function of photon energy by solving the corresponding integral equation in the diffusion limit. As a byproduct we also demonstrate the way to compute photon redistribution functions in the case of degenerate electrons. We then compute the Rosseland mean opacity as a function of temperature and electron degeneracy and present useful approximate expressions. We compare our results to previous calculations and find a significant difference in the low-temperature regime and strong degeneracy. We then proceed to compute the flux mean opacity in both free-streaming and diffusion approximations, and show that the latter is nearly identical to the Rosseland mean opacity. We also provide a simple way to account for the true absorption in evaluating the Rosseland and flux mean opacities.
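For reference, the standard definition of the Rosseland mean evaluated in this work is reproduced below (here κ_ν is the monochromatic opacity and B_ν(T) the Planck function); the relativistic Compton-scattering treatment itself is not reproduced.

```latex
% Standard definition of the Rosseland mean opacity; \kappa_\nu is the monochromatic
% opacity and B_\nu(T) the Planck function.
\frac{1}{\kappa_R} \;=\;
\frac{\displaystyle\int_0^\infty \frac{1}{\kappa_\nu}\,\frac{\partial B_\nu(T)}{\partial T}\,d\nu}
     {\displaystyle\int_0^\infty \frac{\partial B_\nu(T)}{\partial T}\,d\nu}
```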
SARS: Safeguards Accounting and Reporting Software
NASA Astrophysics Data System (ADS)
Mohammedi, B.; Saadi, S.; Ait-Mohamed, S.
In order to satisfy the recording and reporting requirements of the SSAC (State System for Accounting and Control of nuclear materials), this computer program bridges the gap between nuclear facility operators and the national inspectorate that verifies records and delivers reports. SARS maintains and generates at-facility safeguards accounting records and generates International Atomic Energy Agency (IAEA) safeguards reports based on accounting data input by the user at any nuclear facility. A database structure is built, and the BORLAND DELPHI programming language is used. The software is designed to be user-friendly, with extensive and flexible management of menus and graphs. SARS functions include basic physical inventory tracking, transaction histories, and reporting. Access control is provided by different passwords.
Statistical Approach To Extraction Of Texture In SAR
NASA Technical Reports Server (NTRS)
Rignot, Eric J.; Kwok, Ronald
1992-01-01
An improved statistical method for extracting textural features from synthetic-aperture-radar (SAR) images takes into account the effects of the scheme used to sample raw SAR data, system noise, the resolution of the radar equipment, and speckle. The treatment of speckle is incorporated into an overall statistical treatment of speckle, system noise, and natural variations in texture. The speckle autocorrelation function is computed from the system transfer function, which expresses the effect of the radar aperture and incorporates the range and azimuth resolutions.
Bannwarth, Christoph; Seibert, Jakob; Grimme, Stefan
2016-05-01
The electronic circular dichroism (ECD) spectrum of the recently synthesized [16]helicene and a derivative comprising two triisopropylsilyloxy protection groups was computed by means of the very efficient simplified time-dependent density functional theory (sTD-DFT) approach. Different from many previous ECD studies of helicenes, nonequilibrium structure effects were accounted for by computing ECD spectra on "snapshots" obtained from a molecular dynamics (MD) simulation including solvent molecules. The trajectories are based on a molecule specific classical potential as obtained from the recently developed quantum chemically derived force field (QMDFF) scheme. The reduced computational cost in the MD simulation due to the use of the QMDFF (compared to ab-initio MD) as well as the sTD-DFT approach make realistic spectral simulations feasible for these compounds that comprise more than 100 atoms. While the ECD spectra of [16]helicene and its derivative computed vertically on the respective gas phase, equilibrium geometries show noticeable differences, these are "washed" out when nonequilibrium structures are taken into account. The computed spectra with two recommended density functionals (ωB97X and BHLYP) and extended basis sets compare very well with the experimental one. In addition we provide an estimate for the missing absolute intensities of the latter. The approach presented here could also be used in future studies to capture nonequilibrium effects, but also to systematically average ECD spectra over different conformations in more flexible molecules. Chirality 28:365-369, 2016. © 2016 Wiley Periodicals, Inc.
Toward a Neurobiology of Delusions
Corlett, P.R.; Taylor, J.R.; Wang, X.-J.; Fletcher, P.C.; Krystal, J.H.
2013-01-01
Delusions are the false and often incorrigible beliefs that can cause severe suffering in mental illness. We cannot yet explain them in terms of underlying neurobiological abnormalities. However, by drawing on recent advances in the biological, computational and psychological processes of reinforcement learning, memory, and perception it may be feasible to account for delusions in terms of cognition and brain function. The account focuses on a particular parameter, prediction error – the mismatch between expectation and experience – that provides a computational mechanism common to cortical hierarchies, frontostriatal circuits and the amygdala as well as parietal cortices. We suggest that delusions result from aberrations in how brain circuits specify hierarchical predictions, and how they compute and respond to prediction errors. Defects in these fundamental brain mechanisms can vitiate perception, memory, bodily agency and social learning such that individuals with delusions experience an internal and external world that healthy individuals would find difficult to comprehend. The present model attempts to provide a framework through which we can build a mechanistic and translational understanding of these puzzling symptoms. PMID:20558235
Quantum Gauss-Jordan Elimination and Simulation of Accounting Principles on Quantum Computers
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Van Minh, Nguyen
2017-06-01
The paper is devoted to a version of Quantum Gauss-Jordan Elimination and its applications. In the first part, we construct the Quantum Gauss-Jordan Elimination (QGJE) algorithm and estimate the complexity of computing the Reduced Row Echelon Form (RREF) of N × N matrices. The main result asserts that the QGJE computation time is of order 2^(N/2). The second part is devoted to a new idea for simulating accounting by quantum computing. We first express the actual accounting principles in purely mathematical language. Then, we simulate the accounting principles on quantum computers. We show that all accounting actions are exhausted by the described basic actions. The main problems of accounting are reduced to a system of linear equations in the Leontief economic model. In this simulation, we use our Quantum Gauss-Jordan Elimination to solve the problems, and the quantum computation is a square-root factor faster than the corresponding classical computation.
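As a classical point of reference for the quantum algorithm's complexity claim, the sketch below performs ordinary Gauss-Jordan elimination to RREF, which costs on the order of N^3 arithmetic operations for an N × N matrix; it is not the quantum algorithm itself.

```python
import numpy as np

# Classical reference point (not the quantum algorithm): Gauss-Jordan elimination
# to reduced row echelon form (RREF), costing O(N^3) arithmetic operations.
def rref(a, tol=1e-12):
    a = a.astype(float).copy()
    rows, cols = a.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        pivot = pivot_row + np.argmax(np.abs(a[pivot_row:, col]))   # partial pivoting
        if abs(a[pivot, col]) < tol:
            continue
        a[[pivot_row, pivot]] = a[[pivot, pivot_row]]
        a[pivot_row] /= a[pivot_row, col]
        for r in range(rows):
            if r != pivot_row:
                a[r] -= a[r, col] * a[pivot_row]
        pivot_row += 1
    return a

print(rref(np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 0.0], [1.0, 1.0, 1.0]])))
```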
Advancing Creative Visual Thinking with Constructive Function-Based Modelling
ERIC Educational Resources Information Center
Pasko, Alexander; Adzhiev, Valery; Malikova, Evgeniya; Pilyugin, Victor
2013-01-01
Modern education technologies are destined to reflect the realities of a modern digital age. The juxtaposition of real and synthetic (computer-generated) worlds as well as a greater emphasis on visual dimension are especially important characteristics that have to be taken into account in learning and teaching. We describe the ways in which an…
GABE: A Cloud Brokerage System for Service Selection, Accountability and Enforcement
ERIC Educational Resources Information Center
Sundareswaran, Smitha
2014-01-01
Much like its meteorological counterpart, "Cloud Computing" is an amorphous agglomeration of entities. It is amorphous in that the exact layout of the servers, the load balancers and their functions are neither known nor fixed. It's an agglomerate in that multiple service providers and vendors often coordinate to form a multitenant system…
Functional analysis from visual and compositional data. An artificial intelligence approach.
NASA Astrophysics Data System (ADS)
Barceló, J. A.; Moitinho de Almeida, V.
Why are archaeological artefacts the way they are? In this paper we try to answer this question by investigating the relationship between form and function. We propose new ways of studying how behaviour in the past can be inferred from the examination of archaeological observables in the present. In doing so, we take into account that there are also non-visual features characterizing ancient objects and materials (e.g., compositional information based on mass spectrometry data, chronological information based on radioactive decay measurements, etc.). Information that should make us aware of many functional properties of objects is multidimensional in nature: size, which refers to height, length, depth, weight and mass; shape and form, which refer to the geometry of contours and volumes; texture, which refers to the microtopography (roughness, waviness, and lay) and visual appearance (colour variations, brightness, reflectivity and transparency) of surfaces; and finally material, meaning the combination of distinct compositional elements and properties to form a whole. With the exception of material data, the other aspects relevant for functional reasoning have traditionally been described in rather ambiguous terms, without taking into account the advantages of quantitative measurements of shape/form and texture. Reasoning about the functionality of archaeological objects recovered at the archaeological site requires a cross-disciplinary investigation, which may range from recognition techniques used in computer vision and robotics to reasoning, representation, and learning methods in artificial intelligence. The approach we adopt here is to follow current computational theories of object perception to improve the way archaeology can explain human behaviour in the past (function) from the analysis of visual and non-visual data, taking into account that visual appearances and even compositional characteristics only constrain the way an object may be used, but never fully determine it.
Software for Probabilistic Risk Reduction
NASA Technical Reports Server (NTRS)
Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto
2004-01-01
A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.
A distributed, hierarchical and recurrent framework for reward-based choice
Hunt, Laurence T.; Hayden, Benjamin Y.
2017-01-01
Many accounts of reward-based choice argue for distinct component processes that are serial and functionally localized. In this article, we argue for an alternative viewpoint, in which choices emerge from repeated computations that are distributed across many brain regions. We emphasize how several features of neuroanatomy may support the implementation of choice, including mutual inhibition in recurrent neural networks and the hierarchical organisation of timescales for information processing across the cortex. This account also suggests that certain correlates of value may be emergent rather than represented explicitly in the brain. PMID:28209978
Computerizing the Accounting Curriculum.
ERIC Educational Resources Information Center
Nash, John F.; England, Thomas G.
1986-01-01
Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)
Solanka, Lukas; van Rossum, Mark CW; Nolan, Matthew F
2015-01-01
Neural computations underlying cognitive functions require calibration of the strength of excitatory and inhibitory synaptic connections and are associated with modulation of gamma frequency oscillations in network activity. However, principles relating gamma oscillations, synaptic strength and circuit computations are unclear. We address this in attractor network models that account for grid firing and theta-nested gamma oscillations in the medial entorhinal cortex. We show that moderate intrinsic noise massively increases the range of synaptic strengths supporting gamma oscillations and grid computation. With moderate noise, variation in excitatory or inhibitory synaptic strength tunes the amplitude and frequency of gamma activity without disrupting grid firing. This beneficial role for noise results from disruption of epileptic-like network states. Thus, moderate noise promotes independent control of multiplexed firing rate- and gamma-based computational mechanisms. Our results have implications for tuning of normal circuit function and for disorders associated with changes in gamma oscillations and synaptic strength. DOI: http://dx.doi.org/10.7554/eLife.06444.001 PMID:26146940
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
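The Gauss-Hermite step mentioned above amounts to computing moments of a model observable under a Gaussian proposal from quadrature nodes rather than samples. The sketch below shows that step for a placeholder observable; it is not the paper's OH-concentration surrogate.

```python
import numpy as np

# Sketch of the Gauss-Hermite moment computation described above, for a placeholder
# observable g (not the paper's OH-concentration surrogate).
def gauss_hermite_moments(g, mu, sigma, order=20):
    x, w = np.polynomial.hermite.hermgauss(order)    # nodes/weights for weight exp(-x^2)
    samples = mu + np.sqrt(2.0) * sigma * x          # change of variables to N(mu, sigma^2)
    vals = g(samples)
    mean = np.sum(w * vals) / np.sqrt(np.pi)
    second = np.sum(w * vals ** 2) / np.sqrt(np.pi)
    return mean, second - mean ** 2                  # E[g(X)], Var[g(X)]

# Check against the lognormal case: E[exp(X)] = exp(0.125) for X ~ N(0, 0.5^2)
print(gauss_hermite_moments(np.exp, mu=0.0, sigma=0.5))
```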
Photoelectron wave function in photoionization: plane wave or Coulomb wave?
Gozem, Samer; Gunina, Anastasia O; Ichino, Takatoshi; Osborn, David L; Stanton, John F; Krylov, Anna I
2015-11-19
The calculation of absolute total cross sections requires accurate wave functions of the photoelectron and of the initial and final states of the system. The essential information contained in the latter two can be condensed into a Dyson orbital. We employ correlated Dyson orbitals and test approximate treatments of the photoelectron wave function, that is, plane and Coulomb waves, by comparing computed and experimental photoionization and photodetachment spectra. We find that in anions, a plane wave treatment of the photoelectron provides a good description of photodetachment spectra. For photoionization of neutral atoms or molecules with one heavy atom, the photoelectron wave function must be treated as a Coulomb wave to account for the interaction of the photoelectron with the +1 charge of the ionized core. For larger molecules, the best agreement with experiment is often achieved by using a Coulomb wave with a partial (effective) charge smaller than unity. This likely derives from the fact that the effective charge at the centroid of the Dyson orbital, which serves as the origin of the spherical wave expansion, is smaller than the total charge of a polyatomic cation. The results suggest that accurate molecular photoionization cross sections can be computed with a modified central potential model that accounts for the nonspherical charge distribution of the core by adjusting the charge in the center of the expansion.
ERIC Educational Resources Information Center
Laing, Gregory Kenneth; Perrin, Ronald William
2012-01-01
This paper presents the findings of a field study conducted to ascertain the perceptions of first year accounting students concerning the integration of computer applications in the accounting curriculum. The results indicate that both student cohorts perceived the computer as a valuable educational tool. The use of computers to enhance the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.
2015-11-14
Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.
A computational and neural model of momentary subjective well-being
Rutledge, Robb B.; Skandali, Nikolina; Dayan, Peter; Dolan, Raymond J.
2014-01-01
The subjective well-being or happiness of individuals is an important metric for societies. Although happiness is influenced by life circumstances and population demographics such as wealth, we know little about how the cumulative influence of daily life events is aggregated into subjective feelings. Using computational modeling, we show that emotional reactivity in the form of momentary happiness in response to outcomes of a probabilistic reward task is explained not by current task earnings, but by the combined influence of recent reward expectations and prediction errors arising from those expectations. The robustness of this account was evident in a large-scale replication involving 18,420 participants. Using functional MRI, we show that the very same influences account for task-dependent striatal activity in a manner akin to the influences underpinning changes in happiness. PMID:25092308
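A minimal sketch of the kind of model described, assuming momentary happiness is a baseline plus weighted, exponentially decaying influences of recent certain rewards, gamble expectations, and reward prediction errors; the weights, decay factor, and trial data are placeholders, and the exact form fitted in the paper may differ.

```python
import numpy as np

# Minimal sketch of the kind of model described (placeholder weights and data; the
# exact form fitted in the paper may differ): happiness as a baseline plus decaying
# influences of certain rewards (CR), chosen-gamble expected values (EV), and
# reward prediction errors (RPE).
def momentary_happiness(cr, ev, rpe, w0=0.0, w=(0.5, 0.4, 0.6), gamma=0.6):
    t = len(cr)
    decay = gamma ** np.arange(t - 1, -1, -1)        # most recent trial weighted most
    return (w0 + w[0] * np.sum(decay * cr)
               + w[1] * np.sum(decay * ev)
               + w[2] * np.sum(decay * rpe))

cr  = np.array([0.0, 0.2, 0.0])      # certain amounts chosen
ev  = np.array([0.5, 0.0, 0.3])      # expected value when a gamble was chosen
rpe = np.array([0.4, 0.0, -0.3])     # outcome minus expectation for gambles
print(momentary_happiness(cr, ev, rpe))
```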
Atmospheric solar heating rate in the water vapor bands
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah
1986-01-01
The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and follows essentially the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account in order to enhance the accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.
[HYGIENIC REGULATION OF THE USE OF ELECTRONIC EDUCATIONAL RESOURCES IN THE MODERN SCHOOL].
Stepanova, M I; Aleksandrova, I E; Sazanyuk, Z I; Voronova, B Z; Lashneva, L P; Shumkova, T V; Berezina, N O
2015-01-01
We studied the effect of academic studies using a notebook computer and an interactive whiteboard on the functional state of schoolchildren's organisms. Using a complex of hygienic and physiological methods, we established that the regulation of students' computer activity must take into account not only its duration but also its intensity. Design features of the notebook computer were shown both to impede keeping the optimal working posture in primary school children and to increase the risk of developing disorders of vision and the musculoskeletal system. The interactive whiteboard was shown to have an activating influence on performance and a favorable effect on the indices of the functional state of students' organisms, provided that the optimal density of the academic study and the duration of its use were maintained. Safety regulations for schoolchildren's work with electronic resources in the educational process were determined.
47 CFR 32.2000 - Instructions for telecommunications plant accounts.
Code of Federal Regulations, 2011 CFR
2011-10-01
... equipment; 2122, Furniture; 2123, Office equipment; 2124, General purpose computers, costing $2,000 or less... for personal computers falling within Account 2124. Personal computers classifiable to Account 2124..., power, construction quarters, office space and equipment directly related to the construction project...
47 CFR 32.2000 - Instructions for telecommunications plant accounts.
Code of Federal Regulations, 2014 CFR
2014-10-01
... equipment; 2122, Furniture; 2123, Office equipment; 2124, General purpose computers, costing $2,000 or less... for personal computers falling within Account 2124. Personal computers classifiable to Account 2124..., power, construction quarters, office space and equipment directly related to the construction project...
47 CFR 32.2000 - Instructions for telecommunications plant accounts.
Code of Federal Regulations, 2012 CFR
2012-10-01
... equipment; 2122, Furniture; 2123, Office equipment; 2124, General purpose computers, costing $2,000 or less... for personal computers falling within Account 2124. Personal computers classifiable to Account 2124..., power, construction quarters, office space and equipment directly related to the construction project...
47 CFR 32.2000 - Instructions for telecommunications plant accounts.
Code of Federal Regulations, 2013 CFR
2013-10-01
... equipment; 2122, Furniture; 2123, Office equipment; 2124, General purpose computers, costing $2,000 or less... for personal computers falling within Account 2124. Personal computers classifiable to Account 2124..., power, construction quarters, office space and equipment directly related to the construction project...
Polytopol computing for multi-core and distributed systems
NASA Astrophysics Data System (ADS)
Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan
2009-05-01
Multi-core computing presents new challenges to software engineering. The paper addresses these issues in the general setting of polytopol computing, which takes into account multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing. It argues that the essence lies in a suitable allocation of freely moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected onto that hardware so that a system function appears as one again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as knowledge integration, awareness collection, situation display/reporting, communication of clues, and an inquiry interface. Sensors provide functions such as anomaly detection (communicating only singularities, not continuous observation); they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, which gives the network the ability to organize itself into some of many topologies. Finally, we discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Guthrie, Michael A.
2013-01-01
A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
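A generic sketch of the Hasofer-Lind index as the shortest distance from the origin to the limit-state surface g(u) = 0 in standard normal space, found here by constrained optimization; the linear limit-state function below is a stand-in, not the energy-based function developed in the report.

```python
import numpy as np
from scipy.optimize import minimize

# Generic sketch of the Hasofer-Lind reliability index (stand-in limit state, not the
# energy-based function of the report): the shortest distance from the origin to the
# surface g(u) = 0 in standard normal space.
def g(u):
    capacity = 3.0 + 0.5 * u[0]       # u are standard normal variables
    demand = 1.0 + 0.8 * u[1]
    return capacity - demand          # g <= 0 defines failure

res = minimize(lambda u: np.dot(u, u), x0=np.zeros(2),
               constraints={"type": "eq", "fun": g}, method="SLSQP")
beta = np.sqrt(res.fun)               # Hasofer-Lind reliability index
print(beta)                           # ~2.12 for this linear stand-in
```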
Assessment of the Accounting and Joint Accounting/Computer Information Systems Programs.
ERIC Educational Resources Information Center
Appiah, John; Cernigliaro, James; Davis, Jeffrey; Gordon, Millicent; Richards, Yves; Santamaria, Fernando; Siegel, Annette; Lytle, Namy; Wharton, Patrick
This document presents City University of New York LaGuardia Community College's Department of Accounting and Managerial Studies assessment of its accounting and joint accounting/computer information systems programs report, and includes the following items: (1) description of the mission and goals of the Department of Accounting and Managerial…
A projection method for coupling two-phase VOF and fluid structure interaction simulations
NASA Astrophysics Data System (ADS)
Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro
2018-02-01
The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling an FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field onto the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back onto the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.
Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.
2017-12-01
To explain earthquake generation processes, simulation methods of earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost associated with obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response function as in the previous approach. In stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results in a normative three-dimensional problem, where a circular-shaped velocity-weakening area is set in a square-shaped fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number hp160221).
Fluid Structure Interaction Techniques For Extrusion And Mixing Processes
NASA Astrophysics Data System (ADS)
Valette, Rudy; Vergnes, Bruno; Coupez, Thierry
2007-05-01
This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by using a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on nodes located inside the so-called immersed domain, each sub-domain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing the computation of a fill factor usable as in the VOF methodology. This technique, combined with the use of parallel computing, allows the computation of the time-dependent flow of generalized Newtonian fluids, including yield stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a "level set" and Hamilton-Jacobi method.
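As an illustration of the distance-function step described above, the following simplified sketch (Python; an analytic circle stands in for a rotor surface mesh, and all values are assumed) computes a signed distance on a background Cartesian grid, a VOF-like fill factor, and a rigid rotation imposed on nodes inside the immersed domain. It is not the authors' implementation.

    import numpy as np

    nx = 101
    x = np.linspace(-1.0, 1.0, nx)
    X, Y = np.meshgrid(x, x, indexing="ij")      # background computational grid

    xc, yc, r = 0.2, 0.0, 0.4                    # hypothetical rotor centre and radius
    dist = np.hypot(X - xc, Y - yc) - r          # signed distance: negative inside

    eps = 2.0 * (x[1] - x[0])                    # smoothing width ~ two cells
    fill = 0.5 * (1.0 - np.clip(dist / eps, -1.0, 1.0))   # 1 inside, 0 outside

    omega = 5.0                                  # rigid rotation rate of the immersed domain
    u = np.where(dist < 0.0, -omega * (Y - yc), 0.0)
    v = np.where(dist < 0.0,  omega * (X - xc), 0.0)
    print(fill.mean(), u.max(), v.max())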
ERIC Educational Resources Information Center
Owhoso, Vincent; Malgwi, Charles A.; Akpomi, Margaret
2014-01-01
The authors examine whether students who completed a computer-based intervention program, designed to help them develop abilities and skills in introductory accounting, later declared accounting as a major. A sample of 1,341 students participated in the study, of which 74 completed the intervention program (computer-based assisted learning [CBAL])…
Coordination dynamics in a socially situated nervous system
Coey, Charles A.; Varlet, Manuel; Richardson, Michael J.
2012-01-01
Traditional theories of cognitive science have typically accounted for the organization of human behavior by detailing requisite computational/representational functions and identifying neurological mechanisms that might perform these functions. Put simply, such approaches hold that neural activity causes behavior. This same general framework has been extended to accounts of human social behavior via concepts such as “common-coding” and “co-representation” and much recent neurological research has been devoted to brain structures that might execute these social-cognitive functions. Although these neural processes are unquestionably involved in the organization and control of human social interactions, there is good reason to question whether they should be accorded explanatory primacy. Alternatively, we propose that a full appreciation of the role of neural processes in social interactions requires appropriately situating them in their context of embodied-embedded constraints. To this end, we introduce concepts from dynamical systems theory and review research demonstrating that the organization of human behavior, including social behavior, can be accounted for in terms of self-organizing processes and lawful dynamics of animal-environment systems. Ultimately, we hope that these alternative concepts can complement the recent advances in cognitive neuroscience and thereby provide opportunities to develop a complete and coherent account of human social interaction. PMID:22701413
Hierarchical Ensemble Methods for Protein Function Prediction
2014-01-01
Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high dimensional biomolecular data, the unbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods that showed significantly better performances than hierarchical-unaware “flat” prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term and then the resulting predictions are assembled in a “consensus” ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954
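A toy sketch of the hierarchy-aware idea (Python, with an invented ontology and invented scores; not any specific published ensemble method): per-term "flat" scores are made consistent by a top-down pass in which a child term's score may not exceed its parent's.

    # Enforce hierarchical consistency on per-term scores from independent classifiers.
    parents = {"GO:B": "GO:A", "GO:C": "GO:A", "GO:D": "GO:B"}   # toy ontology
    flat = {"GO:A": 0.60, "GO:B": 0.80, "GO:C": 0.20, "GO:D": 0.90}

    def consistent(term, scores, parents):
        # A term's consensus score is capped by every ancestor's score.
        p = parents.get(term)
        if p is None:
            return scores[term]
        return min(scores[term], consistent(p, scores, parents))

    ensemble = {t: consistent(t, flat, parents) for t in flat}
    print(ensemble)   # e.g. GO:D is capped by GO:B, which is capped by GO:A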
An objective function exploiting suboptimal solutions in metabolic networks
2013-01-01
Background: Flux Balance Analysis is a theoretically elegant, computationally efficient, genome-scale approach to predicting biochemical reaction fluxes. Yet FBA models exhibit persistent mathematical degeneracy that generally limits their predictive power. Results: We propose a novel objective function for cellular metabolism that accounts for and exploits degeneracy in the metabolic network to improve flux predictions. In our model, regulation drives metabolism toward a region of flux space that allows nearly optimal growth. Metabolic mutants deviate minimally from this region, a function represented mathematically as a convex cone. Near-optimal flux configurations within this region are considered equally plausible and not subject to further optimizing regulation. Consistent with relaxed regulation near optimality, we find that the size of the near-optimal region predicts flux variability under experimental perturbation. Conclusion: Accounting for suboptimal solutions can improve the predictive power of metabolic FBA models. Because fluctuations of enzyme and metabolite levels are inevitable, tolerance for suboptimality may support a functionally robust metabolic network. PMID:24088221
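For context, a minimal flux balance analysis on a toy network can be written with a linear-programming solver. The sketch below (Python/scipy, with invented stoichiometry; not the paper's model or its modified objective) maximizes a growth flux subject to the steady-state constraint.

    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix: rows = metabolites, columns = reactions
    # (uptake, conversion, growth).
    S = np.array([[ 1, -1,  0],
                  [ 0,  1, -1]])
    bounds = [(0, 10), (0, 10), (0, 10)]     # flux bounds for each reaction
    c = np.array([0.0, 0.0, -1.0])           # maximize growth flux = minimize its negative

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x)          # degenerate alternative optima are common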
Domain Immersion Technique And Free Surface Computations Applied To Extrusion And Mixing Processes
NASA Astrophysics Data System (ADS)
Valette, Rudy; Vergnes, Bruno; Basset, Olivier; Coupez, Thierry
2007-04-01
This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by using a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on nodes located inside the so-called immersed domain, each subdomain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing the computation of a fill factor usable as in the VOF methodology. This technique, combined with the use of parallel computing, allows the computation of the time-dependent flow of generalized Newtonian fluids, including yield stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a "level set" and Hamilton-Jacobi method.
Incorporating engine health monitoring capability into the SSME Block II controller
NASA Astrophysics Data System (ADS)
Clarke, James W.; Copa, Roderick J.
An account is given of the architecture of the SSME Block II controller, its incorporation of smart input electronics (SIE), and the potential benefits of this technology for SSME health-monitoring capabilities. SIE allows the Block II controller to conduct its control functions while simultaneously furnishing the computational capabilities and sensor input interface for any newly defined health-monitoring functions. It is expected that the SIE technology may be directly transferred to any follow-on engine design.
Ab-initio study of electronic structure and elastic properties of ZrC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mund, H. S., E-mail: hmoond@gmail.com; Ahuja, B. L.
2016-05-23
The electronic and elastic properties of ZrC have been investigated using the linear combination of atomic orbitals method within the framework of density functional theory. Different exchange-correlation functionals are taken into account within the generalized gradient approximation. We have computed energy bands, density of states, elastic constants, bulk modulus, shear modulus, Young’s modulus, Poisson’s ratio, lattice parameters and the pressure derivative of the bulk modulus by calculating the ground state energy of the rock-salt structure type ZrC.
26 CFR 1.9002-1 - Purpose, applicability, and definitions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... compute, taxable income under an accrual method of accounting, and (2) treated dealer reserve income (or portions thereof) which should have been taken into account (under the accrual method of accounting) for... accounting or who was not required to compute taxable income under the accrual method of accounting. An...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2012 CFR
2012-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2013 CFR
2013-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2011 CFR
2011-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2014 CFR
2014-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2010 CFR
2010-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...
Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data
NASA Technical Reports Server (NTRS)
Kanekal, S. G.; Li, X.; Baker, D. N.; Selesnick, R. S.; Hoxie, V. C.
2018-01-01
An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 megaelectronvolts, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.
Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data
NASA Astrophysics Data System (ADS)
Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.; Hoxie, V. C.; Li, X.
2018-01-01
An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 MeV, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.
On the equilibrium charge density at tilt grain boundaries
NASA Astrophysics Data System (ADS)
Srikant, V.; Clarke, D. R.
1998-05-01
The equilibrium charge density and free energy of tilt grain boundaries as a function of their misorientation is computed using a Monte Carlo simulation that takes into account both the electrostatic and configurational energies associated with charges at the grain boundary. The computed equilibrium charge density increases with the grain-boundary angle and approaches a saturation value. The equilibrium charge density at large-angle grain boundaries compares well with experimental values for large-angle tilt boundaries in GaAs. The computed grain-boundary electrostatic energy is in agreement with the analytical solution to a one-dimensional Poisson equation at high donor densities but indicates that the analytical solution overestimates the electrostatic energy at lower donor densities.
Computer imaging and workflow systems in the business office.
Adams, W T; Veale, F H; Helmick, P M
1999-05-01
Computer imaging and workflow technology automates many business processes that currently are performed using paper processes. Documents are scanned into the imaging system and placed in electronic patient account folders. Authorized users throughout the organization, including preadmission, verification, admission, billing, cash posting, customer service, and financial counseling staff, have online access to the information they need when they need it. Such streamlining of business functions can increase collections and customer satisfaction while reducing labor, supply, and storage costs. Because the costs of a comprehensive computer imaging and workflow system can be considerable, healthcare organizations should consider implementing parts of such systems that can be cost-justified or include implementation as part of a larger strategic technology initiative.
A useful approximation for the flat surface impulse response
NASA Technical Reports Server (NTRS)
Brown, Gary S.
1989-01-01
The flat surface impulse response (FSIR) is a very useful quantity in computing the mean return power for near-nadir-oriented short-pulse radar altimeters. However, for very small antenna beamwidths and relatively large pointing angles, previous analytical descriptions become very difficult to compute accurately. An asymptotic approximation is developed to overcome these computational problems. Since accuracy is of key importance, a condition is developed under which this solution is within 2 percent of the exact answer. The asymptotic solution is shown to be in functional agreement with a conventional clutter power result and gives a 1.25-dB correction to this formula to account properly for the antenna-pattern variation over the illuminated area.
The development of acoustic experiments for off-campus teaching and learning
NASA Astrophysics Data System (ADS)
Wild, Graham; Swan, Geoff
2011-05-01
In this article, we show the implementation of a computer-based digital storage oscilloscope (DSO) and function generator (FG) using the computer's soundcard for off-campus acoustic experiments. The microphone input is used for the DSO, and a speaker jack is used as the FG. In an effort to reduce the cost of implementing the experiment, we examine software available for free online. A small number of applications were compared in terms of their interface and functionality, for both the DSO and the FG. The software was then used to investigate standing waves in pipes using the computer-based DSO. Standing wave theory taught in high school and in first-year physics is based on a one-dimensional model. With the use of the DSO's fast Fourier transform function, the experimental uncertainty alone was not sufficient to account for the difference observed between the measured and the calculated frequencies. Hence the original experiment was expanded to include the end correction effect. The DSO was also used for other simple acoustics experiments, in areas such as the physics of music.
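The end-correction calculation mentioned above reduces to a one-line formula. The sketch below (Python, with hypothetical pipe dimensions and the commonly used 0.6-radius correction; not the article's code) compares ideal and corrected resonant frequencies for a pipe closed at one end.

    # Resonant frequencies of a closed pipe, with and without the end correction.
    v = 343.0            # speed of sound in air, m/s (room temperature, assumed)
    L = 0.30             # hypothetical pipe length, m
    r = 0.015            # hypothetical pipe radius, m

    for n in (1, 3, 5):  # a closed pipe supports odd harmonics only
        f_ideal = n * v / (4.0 * L)
        f_corr = n * v / (4.0 * (L + 0.6 * r))
        print(f"n={n}: ideal {f_ideal:6.1f} Hz, with end correction {f_corr:6.1f} Hz")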
Computer work duration and its dependence on the used pause definition.
Richter, Janneke M; Slijper, Harm P; Over, Eelco A B; Frens, Maarten A
2008-11-01
Several ergonomic studies have estimated computer work duration using registration software. In these studies, an arbitrary pause definition (Pd; the minimal time between two computer events to constitute a pause) is chosen and the resulting duration of computer work is estimated. In order to uncover the relationship between the used pause definition and the computer work duration (PWT), we used registration software to record usage patterns of 571 computer users across almost 60,000 working days. For a large range of Pds (1-120 s), we found a shallow, log-linear relationship between PWT and Pds. For keyboard and mouse use, a second-order function fitted the data best. We found that these relationships were dependent on the amount of computer work and subject characteristics. Comparison of exposure duration from studies using different pause definitions should take this into account, since it could lead to misclassification. Software manufacturers and ergonomists assessing computer work duration could use the found relationships for software design and study comparison.
User Account Passwords | High-Performance Computing | NREL
For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in, and change passwords. Logging in for the first time: after you request an HPC user account, you'll receive a temporary password.
The minimal work cost of information processing
NASA Astrophysics Data System (ADS)
Faist, Philippe; Dupuis, Frédéric; Oppenheim, Jonathan; Renner, Renato
2015-07-01
Irreversible information processing cannot be carried out without some inevitable thermodynamical work cost. This fundamental restriction, known as Landauer's principle, is increasingly relevant today, as the energy dissipation of computing devices impedes the development of their performance. Here we determine the minimal work required to carry out any logical process, for instance a computation. It is given by the entropy of the discarded information conditional to the output of the computation. Our formula takes precisely into account the statistically fluctuating work requirement of the logical process. It enables the explicit calculation of practical scenarios, such as computational circuits or quantum measurements. On the conceptual level, our result gives a precise and operational connection between thermodynamic and information entropy, and explains the emergence of the entropy state function in macroscopic thermodynamics.
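As a numerical point of reference (assumed room-temperature values, not taken from the paper), the sketch below evaluates the familiar Landauer limit, the special case of the work-cost formula in which one full bit of information is discarded.

    # Landauer's bound for erasing one bit at room temperature (assumed values).
    import math

    k_B = 1.380649e-23       # Boltzmann constant, J/K
    T = 300.0                # temperature, K
    H_discarded = 1.0        # one bit of discarded information

    W_min = k_B * T * math.log(2) * H_discarded
    print(f"minimal work ~ {W_min:.2e} J per erased bit")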
Dynamic Optical Networks for Future Internet Environments
NASA Astrophysics Data System (ADS)
Matera, Francesco
2014-05-01
This article reports an overview on the evolution of the optical network scenario taking into account the exponential growth of connected devices, big data, and cloud computing that is driving a concrete transformation impacting the information and communication technology world. This hyper-connected scenario is deeply affecting relationships between individuals, enterprises, citizens, and public administrations, fostering innovative use cases in practically any environment and market, and introducing new opportunities and new challenges. The successful realization of this hyper-connected scenario depends on different elements of the ecosystem. In particular, it builds on connectivity and functionalities allowed by converged next-generation networks and their capacity to support and integrate with the Internet of Things, machine-to-machine, and cloud computing. This article aims at providing some hints of this scenario to contribute to analyze impacts on optical system and network issues and requirements. In particular, the role of the software-defined network is investigated by taking into account all scenarios regarding data centers, cloud computing, and machine-to-machine and trying to illustrate all the advantages that could be introduced by advanced optical communications.
Accounting & Computing Curriculum Guide.
ERIC Educational Resources Information Center
Avani, Nathan T.; And Others
This curriculum guide consists of materials for use in teaching a competency-based accounting and computing course that is designed to prepare students for employability in the following occupational areas: inventory control clerk, invoice clerk, payroll clerk, traffic clerk, general ledger bookkeeper, accounting clerk, account information clerk,…
Gozem, Samer; Gunina, Anastasia O.; Ichino, Takatoshi; ...
2015-10-28
The calculation of absolute total cross sections requires accurate wave functions of the photoelectron and of the initial and final states of the system. The essential information contained in the latter two can be condensed into a Dyson orbital. We employ correlated Dyson orbitals and test approximate treatments of the photoelectron wave function, that is, plane and Coulomb waves, by comparing computed and experimental photoionization and photodetachment spectra. We find that in anions, a plane wave treatment of the photoelectron provides a good description of photodetachment spectra. For photoionization of neutral atoms or molecules with one heavy atom, the photoelectron wave function must be treated as a Coulomb wave to account for the interaction of the photoelectron with the +1 charge of the ionized core. For larger molecules, the best agreement with experiment is often achieved by using a Coulomb wave with a partial (effective) charge smaller than unity. This likely derives from the fact that the effective charge at the centroid of the Dyson orbital, which serves as the origin of the spherical wave expansion, is smaller than the total charge of a polyatomic cation. Finally, the results suggest that accurate molecular photoionization cross sections can be computed with a modified central potential model that accounts for the nonspherical charge distribution of the core by adjusting the charge in the center of the expansion.
An Approach to Economic Dispatch with Multiple Fuels Based on Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Sriyanyong, Pichet
2011-06-01
Particle Swarm Optimization (PSO), a stochastic optimization technique, shows superiority over other evolutionary computation techniques in terms of lower computation time, easy implementation with high-quality solutions, stable convergence characteristics, and independence from initialization. For this reason, this paper proposes the application of PSO to the Economic Dispatch (ED) problem, which occurs in the operational planning of power systems. In this study, the ED problem is categorized according to the characteristics of its cost function: the ED problem with a smooth cost function and the ED problem with multiple fuels. Taking multiple fuels into account makes the problem more realistic. The experimental results show that the proposed PSO algorithm is more efficient than the previous approaches under consideration, as well as highly promising for real-world applications.
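To make the algorithm concrete, the sketch below (Python, with invented two-generator quadratic cost data; not the paper's implementation) applies a standard PSO velocity/position update to a smooth-cost economic dispatch toy problem, meeting total demand through a power-balance penalty.

    import numpy as np

    rng = np.random.default_rng(1)
    a = np.array([0.01, 0.02]); b = np.array([2.0, 1.5]); c = np.array([10.0, 15.0])
    pmin, pmax, demand = 10.0, 100.0, 150.0        # hypothetical limits and demand (MW)

    def cost(P):                                   # P has shape (n_particles, 2)
        fuel = (a * P**2 + b * P + c).sum(axis=1)
        return fuel + 1e3 * np.abs(P.sum(axis=1) - demand)   # power-balance penalty

    n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5   # typical PSO parameters (assumed)
    P = rng.uniform(pmin, pmax, (n, 2)); V = np.zeros_like(P)
    pbest, pbest_f = P.copy(), cost(P)
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (gbest - P)
        P = np.clip(P + V, pmin, pmax)
        f = cost(P)
        better = f < pbest_f
        pbest[better], pbest_f[better] = P[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("dispatch:", gbest, "cost:", cost(gbest[None, :])[0])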
Leake, S.A.; Galloway, D.L.
2007-01-01
A new computer program was developed to simulate vertical compaction in models of regional ground-water flow. The program simulates ground-water storage changes and compaction in discontinuous interbeds or in extensive confining units, accounting for stress-dependent changes in storage properties. The new program is a package for MODFLOW, the U.S. Geological Survey modular finite-difference ground-water flow model. Several features of the program make it useful for application in shallow, unconfined flow systems. Geostatic stress can be treated as a function of water-table elevation, and compaction is a function of computed changes in effective stress at the bottom of a model layer. Thickness of compressible sediments in an unconfined model layer can vary in proportion to saturated thickness.
Computing: visitors who do not need a HEP Linux account can use the wireless network with their laptops. HEP Linux account, Step 1: Click Here for New Account Application. After submitting the application, you…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medasani, Bharat; Ovanesyan, Zaven; Thomas, Dennis G.
In this article we present a classical density functional theory for electrical double layers of spherical macroions that extends the capabilities of conventional approaches by accounting for electrostatic ion correlations, size asymmetry and excluded volume effects. The approach is based on a recent approximation introduced by Hansen-Goos and Roth for the hard sphere excess free energy of inhomogeneous fluids (J. Chem. Phys. 124, 154506). It provides a proper and efficient description of the effects of ionic asymmetry and solvent excluded volume, especially at high ion concentrations and size asymmetry ratios, including those observed in experimental studies. Additionally, we utilize a leading functional Taylor expansion approximation of the ion density profiles. In addition, we use the Mean Spherical Approximation for multi-component charged hard sphere fluids to account for the electrostatic ion correlation effects. These approximations are implemented in our theoretical formulation through a suitable decomposition of the excess free energy which plays a key role in capturing the complex interplay between charge correlations and excluded volume effects. We perform Monte Carlo simulations in various scenarios to validate the proposed approach, obtaining a good compromise between accuracy and computational cost. We use the proposed computational approach to study the effects of ion size, ion size asymmetry and solvent excluded volume on the ion profiles, integrated charge, mean electrostatic potential, and ionic coordination number around spherical macroions in various electrolyte mixtures. Our results show that both solvent hard sphere diameter and density play a dominant role in the distribution of ions around spherical macroions, mainly for experimental water molarity and size values where the counterion distribution is characterized by a tight binding to the macroion, similar to that predicted by the Stern model.
Transient upset models in computer systems
NASA Technical Reports Server (NTRS)
Mason, G. M.
1983-01-01
Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of the Z-80 and 8085 based system.
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Satellite data collected over Lake Ontario were processed to observed surface temperature values. This involved computing apparent radiance values for each point where surface temperatures were known from averaged digital count values. These radiance values were then converted by using the LOWTRAN 5A atmospheric propagation model. This model was modified by incorporating a spectral response function for the LANDSAT band 6 sensors. A downwelled radiance term derived from LOWTRAN was included to account for reflected sky radiance. A blackbody equivalent source radiance was computed. Measured temperatures were plotted against the predicted temperature. The RMS error between the data sets is 0.51K.
Integrating Computer Concepts into Principles of Accounting.
ERIC Educational Resources Information Center
Beck, Henry J.; Parrish, Roy James, Jr.
A package of instructional materials for an undergraduate principles of accounting course at Danville Community College was developed based upon the following assumptions: (1) the principles of accounting student does not need to be able to write computer programs; (2) computerized accounting concepts should be presented in this course; (3)…
NASA Technical Reports Server (NTRS)
Ruo, S. Y.
1978-01-01
A computer program was developed to account approximately for the effects of finite wing thickness in transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic box computer program for a planar wing, which was extended to account for the effect of wing thickness. Computational efficiency and accuracy were improved, and swept trailing edges were accounted for. The nonuniform flow caused by finite thickness was accounted for by applying the local linearization concept with an appropriate coordinate transformation. A brief description of each computer routine and of the applications of the cubic spline and spline surface data fitting techniques used in the program is given, and the method of input is shown in detail. Sample calculations as well as a complete listing of the computer program are presented.
New Mexico district work-effort analysis computer program
Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.
1972-01-01
The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation 6600 computer system. Central processing computer time has seldom exceeded 5 minutes on the longest year-to-date runs.
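The overhead proration described above is simple arithmetic; the sketch below (Python, with hypothetical hours rather than district data) apportions overhead hours to projects in proportion to each project's share of direct hours.

    # Prorate overhead hours by each project's share of direct work-effort hours.
    direct = {"Surface Water": 320.0, "General Investigations": 180.0, "Ground Water": 500.0}
    overhead = 150.0

    total = sum(direct.values())
    charged = {p: h + overhead * h / total for p, h in direct.items()}
    for p, h in charged.items():
        print(f"{p:24s} {h:7.1f} hours")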
Genre Analysis of Tax Computation Letters: How and Why Tax Accountants Write the Way They Do
ERIC Educational Resources Information Center
Flowerdew, John; Wan, Alina
2006-01-01
This study is a genre analysis which explores the specific discourse community of tax accountants. Tax computation letters from one international accounting firm in Hong Kong were analyzed and compared. To probe deeper into the tax accounting discourse community, a group of tax accountants from the same firm was observed and questioned. The texts…
A generalized threshold model for computing bed load grain size distribution
NASA Astrophysics Data System (ADS)
Recking, Alain
2016-12-01
For morphodynamic studies, it is important to compute not only the transported volumes of bed load, but also the size of the transported material. A few bed load equations compute fractional transport (i.e., both the volume and grain size distribution), but many equations compute only the bulk transport (a volume) with no consideration of the transported grain sizes. To fill this gap, a method is proposed to compute the bed load grain size distribution separately from the bed load flux. The method is called the Generalized Threshold Model (GTM), because it extends the flow competence method for the threshold of motion of the largest transported grain size to the full bed surface grain size distribution. This was achieved by replacing dimensional diameters with their size indices in the standard hiding function, which offers a useful framework for computation, carried out for each index considered in the range [1, 100]. New functions are also proposed to account for partial transport. The method is very simple to implement and is sufficiently flexible to be tested in many environments. In addition to being a good complement to standard bulk bed load equations, it could also serve as a framework to assist in analyzing the physics of bed load transport in future research.
ERIC Educational Resources Information Center
Cano, Diana Wright
2017-01-01
State Education Agencies (SEAs) face challenges to the implementation of computer-based accountability assessments. The change in the accountability assessments from paper-based to computer-based demands action from the states to enable schools and districts to build their technical capacity, train the staff, provide practice opportunities to the…
Colour-dressed hexagon tessellations for correlation functions and non-planar corrections
NASA Astrophysics Data System (ADS)
Eden, Burkhard; Jiang, Yunfeng; le Plat, Dennis; Sfondrini, Alessandro
2018-02-01
We continue the study of four-point correlation functions by the hexagon tessellation approach initiated in [38] and [39]. We consider planar tree-level correlation functions in N=4 supersymmetric Yang-Mills theory involving two non-protected operators. We find that, in order to reproduce the field theory result, it is necessary to include SU(N) colour factors in the hexagon formalism; moreover, we find that the hexagon approach as it stands is naturally tailored to the single-trace part of correlation functions, and does not account for multi-trace admixtures. We discuss how to compute correlators involving double-trace operators, as well as more general 1/N effects; in particular we compute the whole next-to-leading order in the large-N expansion of tree-level BMN two-point functions by tessellating a torus with punctures. Finally, we turn to the issue of "wrapping", Lüscher-like corrections. We show that SU(N) colour-dressing reproduces an earlier empirical rule for incorporating single-magnon wrapping, and we provide a direct interpretation of such wrapping processes in terms of N=2 supersymmetric Feynman diagrams.
20 CFR 901.11 - Enrollment procedures.
Code of Federal Regulations, 2011 CFR
2011-04-01
.... Examples include economics, computer programs, pension accounting, investment and finance, risk theory... Columbia responsible for the issuance of a license in the field of actuarial science, insurance, accounting... include economics, computer programming, pension accounting, investment and finance, risk theory...
NASA Astrophysics Data System (ADS)
Nadège Ilembe Badouna, Audrey; Veres, Cristina; Haddy, Nadia; Bidault, François; Lefkopoulos, Dimitri; Chavaudra, Jean; Bridier, André; de Vathaire, Florent; Diallo, Ibrahima
2012-01-01
The aim of this paper was to determine anthropometric parameters leading to the least uncertain estimate of heart size when connecting a computational phantom to an external beam radiation therapy (EBRT) patient. From computed tomography images, we segmented the heart and calculated its total volume (THV) in a population of 270 EBRT patients of both sexes, aged 0.7-83 years. Our data were fitted using logistic growth functions. The patient age, height, weight, body mass index and body surface area (BSA) were used as explanatory variables. For both genders, good fits were obtained with both weight (R2 = 0.89 for males and 0.83 for females) and BSA (R2 = 0.90 for males and 0.84 for females). These results demonstrate that, among anthropometric parameters, weight plays an important role in predicting THV. These findings should be taken into account when assigning a computational phantom to a patient.
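A fit of the kind described can be reproduced with standard tools; the sketch below (Python/scipy, with synthetic weight-THV data rather than the patient data) fits a logistic growth curve of total heart volume against body weight and reports an R2 value.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(w, vmax, k, w0):
        # Logistic growth curve: THV approaches vmax as weight w increases.
        return vmax / (1.0 + np.exp(-k * (w - w0)))

    rng = np.random.default_rng(2)
    weight = np.sort(rng.uniform(5, 90, 100))                            # kg, synthetic
    thv = logistic(weight, 700.0, 0.08, 35.0) + rng.normal(0, 30, 100)   # cm^3, synthetic

    popt, _ = curve_fit(logistic, weight, thv, p0=[600.0, 0.1, 30.0])
    resid = thv - logistic(weight, *popt)
    r2 = 1.0 - resid.var() / thv.var()
    print("fitted (Vmax, k, w0):", popt, " R2 =", round(r2, 3))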
PPPC 4 DM ID: a poor particle physicist cookbook for dark matter indirect detection
NASA Astrophysics Data System (ADS)
Cirelli, Marco; Corcella, Gennaro; Hektor, Andi; Hütsi, Gert; Kadastik, Mario; Panci, Paolo; Raidal, Martti; Sala, Filippo; Strumia, Alessandro
2011-03-01
We provide ingredients and recipes for computing signals of TeV-scale Dark Matter annihilations and decays in the Galaxy and beyond. For each DM channel, we present the energy spectra of the final-state particles at production, computed by high-statistics simulations. We estimate the Monte Carlo uncertainty by comparing the results yielded by the Pythia and Herwig event generators. We then provide the propagation functions for charged particles in the Galaxy, for several DM distribution profiles and sets of propagation parameters. Propagation of e± is performed with an improved semi-analytic method that takes into account position-dependent energy losses in the Milky Way. Using such propagation functions, we compute the energy spectra of e±, p̄ and d̄ at the location of the Earth. We then present the gamma ray fluxes, both from prompt emission and from Inverse Compton scattering in the galactic halo. Finally, we provide the spectra of extragalactic gamma rays. All results are available in numerical form and ready to be consumed.
Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data
NASA Astrophysics Data System (ADS)
Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.
2018-03-01
We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.
QEDMOD: Fortran program for calculating the model Lamb-shift operator
NASA Astrophysics Data System (ADS)
Shabaev, V. M.; Tupitsyn, I. I.; Yerokhin, V. A.
2018-02-01
We present Fortran package QEDMOD for computing the model QED operator hQED that can be used to account for the Lamb shift in accurate atomic-structure calculations. The package routines calculate the matrix elements of hQED with the user-specified one-electron wave functions. The operator can be used to calculate Lamb shift in many-electron atomic systems with a typical accuracy of few percent, either by evaluating the matrix element of hQED with the many-electron wave function, or by adding hQED to the Dirac-Coulomb-Breit Hamiltonian.
Social effects of an anthropomorphic help agent: humans versus computers.
David, Prabu; Lu, Tingting; Kline, Susan; Cai, Li
2007-06-01
The purpose of this study was to examine perceptions of fairness of a computer-administered quiz as a function of the anthropomorphic features of the help agent offered within the quiz environment. The addition of simple anthropomorphic cues to a computer help agent reduced the perceived friendliness of the agent, perceived intelligence of the agent, and the perceived fairness of the quiz. These differences were observed only for male anthropomorphic cues, but not for female anthropomorphic cues. The results were not explained by the social attraction of the anthropomorphic agents used in the quiz or by gender identification with the agents. Priming of visual cues provides the best account of the data. Practical implications of the study are discussed.
A Scalable Implementation of Van der Waals Density Functionals
NASA Astrophysics Data System (ADS)
Wu, Jun; Gygi, Francois
2010-03-01
Recently developed Van der Waals density functionals[1] offer the promise to account for weak intermolecular interactions that are not described accurately by local exchange-correlation density functionals. In spite of recent progress [2], the computational cost of such calculations remains high. We present a scalable parallel implementation of the functional proposed by Dion et al.[1]. The method is implemented in the Qbox first-principles simulation code (http://eslab.ucdavis.edu/software/qbox). Application to large molecular systems will be presented. [4pt] [1] M. Dion et al. Phys. Rev. Lett. 92, 246401 (2004).[0pt] [2] G. Roman-Perez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).
Broering, N C
1983-01-01
Georgetown University's Library Information System (LIS), an integrated library system designed and implemented at the Dahlgren Memorial Library, is broadly described from an administrative point of view. LIS' functional components consist of eight "user-friendly" modules: catalog, circulation, serials, bibliographic management (including Mini-MEDLINE), acquisitions, accounting, networking, and computer-assisted instruction. This article touches on emerging library services, user education, and computer information services, which are also changing the role of staff librarians. The computer's networking capability brings the library directly to users through personal or institutional computers at remote sites. The proposed Integrated Medical Center Information System at Georgetown University will include interface with LIS through a network mechanism. LIS is being replicated at other libraries, and a microcomputer version is being tested for use in a hospital setting. PMID:6688749
Linking and integrating computers for maternity care.
Lumb, M; Fawdry, R
1990-12-01
Functionally separate computer systems have been developed for many different areas relevant to maternity care, e.g. maternity data collection, pathology and imaging reports, staff rostering, personnel, accounting, audit, primary care etc. Using land lines, modems and network gateways, many such quite distinct computer programs or databases can be made accessible from a single terminal. If computer systems are to attain their full potential for the improvement of the maternity care, there will be a need not only for terminal emulation but also for more complex integration. Major obstacles must be overcome before such integration is widely achieved. Technical and conceptual progress towards overcoming these problems is discussed, with particular reference to the OSI (open systems interconnection) initiative, to the Read clinical classification and to the MUMMIES CBS (Common Basic Specification) Maternity Care Project. The issue of confidentiality is also briefly explored.
Sherman, Maxwell A; Lee, Shane; Law, Robert; Haegens, Saskia; Thorn, Catherine A; Hämäläinen, Matti S; Moore, Christopher I; Jones, Stephanie R
2016-08-16
Human neocortical 15-29-Hz beta oscillations are strong predictors of perceptual and motor performance. However, the mechanistic origin of beta in vivo is unknown, hindering understanding of its functional role. Combining human magnetoencephalography (MEG), computational modeling, and laminar recordings in animals, we present a new theory that accounts for the origin of spontaneous neocortical beta. In our MEG data, spontaneous beta activity from somatosensory and frontal cortex emerged as noncontinuous beta events typically lasting <150 ms with a stereotypical waveform. Computational modeling uniquely designed to infer the electrical currents underlying these signals showed that beta events could emerge from the integration of nearly synchronous bursts of excitatory synaptic drive targeting proximal and distal dendrites of pyramidal neurons, where the defining feature of a beta event was a strong distal drive that lasted one beta period (∼50 ms). This beta mechanism rigorously accounted for the beta event profiles; several other mechanisms did not. The spatial location of synaptic drive in the model to supragranular and infragranular layers was critical to the emergence of beta events and led to the prediction that beta events should be associated with a specific laminar current profile. Laminar recordings in somatosensory neocortex from anesthetized mice and awake monkeys supported these predictions, suggesting this beta mechanism is conserved across species and recording modalities. These findings make several predictions about optimal states for perceptual and motor performance and guide causal interventions to modulate beta for optimal function.
"Truth be told" - Semantic memory as the scaffold for veridical communication.
Hayes, Brett K; Ramanan, Siddharth; Irish, Muireann
2018-01-01
Theoretical accounts placing episodic memory as central to constructive and communicative functions neglect the role of semantic memory. We argue that the decontextualized nature of semantic schemas largely supersedes the computational bottleneck and error-prone nature of episodic memory. Rather, neuroimaging and neuropsychological evidence of episodic-semantic interactions suggest that an integrative framework more accurately captures the mechanisms underpinning social communication.
Space shuttle entry terminal area energy management
NASA Technical Reports Server (NTRS)
Moore, Thomas E.
1991-01-01
A historical account of the development of Shuttle's Terminal Area Energy Management (TAEM) is presented. A derivation and explanation of logic and equations are provided as a supplement to the well documented guidance computation requirements contained within the official Functional Subsystem Software Requirements (FSSR) published by Rockwell for NASA. The FSSR contains the full set of equations and logic, whereas this document addresses just certain areas for amplification.
Charge Transport Properties of Durene Crystals from First-Principles.
Motta, Carlo; Sanvito, Stefano
2014-10-14
We establish a rigorous computational scheme for constructing an effective Hamiltonian to be used for the determination of the charge carrier mobility of pure organic crystals at finite temperature, which accounts for van der Waals interactions, and it includes vibrational contributions from the entire phonon spectrum of the crystal. Such an approach is based on the ab initio framework provided by density functional theory and the construction of a tight-binding effective model via Wannier transformation. The final Hamiltonian includes coupling of the electrons to the crystals phonons, which are also calculated from density functional theory. We apply this methodology to the case of durene, a small π-conjugated molecule, which forms a high-mobility herringbone-stacked crystal. We show that accounting correctly for dispersive forces is fundamental for obtaining a high-quality phonon spectrum, in agreement with experiments. Then, the mobility as a function of temperature is calculated along different crystallographic directions and the phonons most responsible for the scattering are identified.
Basic concepts and development of an all-purpose computer interface for ROC/FROC observer study.
Shiraishi, Junji; Fukuoka, Daisuke; Hara, Takeshi; Abe, Hiroyuki
2013-01-01
In this study, we initially investigated various aspects of requirements for a computer interface employed in receiver operating characteristic (ROC) and free-response ROC (FROC) observer studies which involve digital images and ratings obtained by observers (radiologists). Secondly, by taking into account these aspects, an all-purpose computer interface utilized for these observer performance studies was developed. Basically, the observer studies can be classified into three paradigms, such as one rating for one case without an identification of a signal location, one rating for one case with an identification of a signal location, and multiple ratings for one case with identification of signal locations. For these paradigms, display modes on the computer interface can be used for single/multiple views of a static image, continuous viewing with cascade images (i.e., CT, MRI), and dynamic viewing of movies (i.e., DSA, ultrasound). Various functions on these display modes, which include windowing (contrast/level), magnifications, and annotations, are needed to be selected by an experimenter corresponding to the purpose of the research. In addition, the rules of judgment for distinguishing between true positives and false positives are an important factor for estimating diagnostic accuracy in an observer study. We developed a computer interface which runs on a Windows operating system by taking into account all aspects required for various observer studies. This computer interface requires experimenters to have sufficient knowledge about ROC/FROC observer studies, but allows its use for any purpose of the observer studies. This computer interface will be distributed publicly in the near future.
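For the simplest paradigm (one rating per case, no localization), an empirical ROC curve and its area follow directly from the ratings; the sketch below (Python, with toy ratings rather than the interface software) illustrates the computation.

    import numpy as np

    ratings = np.array([5, 4, 4, 3, 5, 2, 1, 3, 2, 1, 4, 2])  # 1 = surely normal ... 5 = surely abnormal
    truth   = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1])  # 1 = signal present

    # Sweep the decision threshold from strictest to most lenient rating.
    thresholds = range(int(ratings.max()), int(ratings.min()) - 1, -1)
    tpf = [float((ratings[truth == 1] >= t).mean()) for t in thresholds]
    fpf = [float((ratings[truth == 0] >= t).mean()) for t in thresholds]

    # Trapezoidal area under the empirical ROC curve, starting from (0, 0).
    xs, ys = [0.0] + fpf, [0.0] + tpf
    auc = sum((xs[i] - xs[i - 1]) * (ys[i] + ys[i - 1]) / 2.0 for i in range(1, len(xs)))
    print("FPF:", fpf, "TPF:", tpf, "AUC:", round(auc, 3))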
The theory of constructed emotion: an active inference account of interoception and categorization
2017-01-01
Abstract The science of emotion has been using folk psychology categories derived from philosophy to search for the brain basis of emotion. The last two decades of neuroscience research, however, have brought us to the brink of a paradigm shift in understanding the workings of the brain, setting the stage to revolutionize our understanding of what emotions are and how they work. In this article, we begin with the structure and function of the brain, and from there deduce what the biological basis of emotions might be. The answer is a brain-based, computational account called the theory of constructed emotion. PMID:27798257
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion-weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the orientation distribution function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single-fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
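As a rough illustration of the kind of computation involved, the sketch below evaluates an ODF from a discrete mixture of Gaussian diffusion tensors. The two crossing-fiber tensors, their weights, and the radial-projection form ODF(u) proportional to |D|^(-1/2) (u^T D^-1 u)^(-3/2) are illustrative assumptions, not the authors' variational TDF solver.

```python
import numpy as np

def odf_from_tensor_mixture(directions, tensors, weights):
    """Evaluate an (unnormalized) ODF on unit directions for a discrete
    mixture of Gaussian diffusion tensors (illustrative sketch only)."""
    odf = np.zeros(len(directions))
    for D, w in zip(tensors, weights):
        D_inv = np.linalg.inv(D)
        det_D = np.linalg.det(D)
        for i, u in enumerate(directions):
            # Radial projection of a Gaussian displacement profile:
            # ODF(u) ~ |D|^(-1/2) * (u^T D^-1 u)^(-3/2)
            odf[i] += w * (u @ D_inv @ u) ** -1.5 / np.sqrt(det_D)
    return odf / odf.sum()  # normalize over the sampled directions

# Two crossing fiber populations modeled as prolate tensors (units: mm^2/s)
D1 = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # fiber along x
D2 = np.diag([0.3e-3, 1.7e-3, 0.3e-3])   # fiber along y
dirs = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(odf_from_tensor_mixture(dirs, [D1, D2], [0.5, 0.5]))
```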
ERIC Educational Resources Information Center
Lenard, Mary Jane; Wessels, Susan; Khanlarian, Cindi
2010-01-01
Using a model developed by Young (2000), this paper explores the relationship between performance in the Accounting Information Systems course, self-assessed computer skills, and attitudes toward computers. Results show that after taking the AIS course, students experience a change in perception about their use of computers. Females'…
Hourd, Paul; Medcalf, Nicholas; Segal, Joel; Williams, David J
2015-01-01
Computer-aided 3D printing approaches to the industrial production of customized 3D functional living constructs for restoration of tissue and organ function face significant regulatory challenges. Using the manufacture of a customized, 3D-bioprinted nasal implant as a well-informed but hypothetical exemplar, we examine how these products might be regulated. Existing EU and USA regulatory frameworks do not account for the differences between 3D printing and conventional manufacturing methods, or for the ability to create individual customized products using mechanized rather than craft approaches. For products already subject to extensive regulatory control, issues related to control of the computer-aided design-to-manufacture process and the associated software system chain present additional scientific and regulatory challenges for manufacturers of these complex 3D-bioprinted advanced combination products.
Computer Modelling of Functional Aspects of Noise in Endogenously Oscillating Neurons
NASA Astrophysics Data System (ADS)
Huber, M. T.; Dewald, M.; Voigt, K.; Braun, H. A.; Moss, F.
1998-03-01
Membrane potential oscillations are a widespread feature of neuronal activity. When such oscillations operate close to the spike-triggering threshold, noise can become an essential property of spike generation. Accordingly, we developed a minimal Hodgkin-Huxley-type computer model which includes a noise term. This model accounts for experimental data from quite different cells, ranging from mammalian cortical neurons to fish electroreceptors. With slight modifications of the parameters, the model's behavior can be tuned to bursting activity, which additionally allows it to mimic temperature encoding in peripheral cold receptors, including transitions to apparently chaotic dynamics as indicated by methods for the detection of unstable periodic orbits. Under all conditions, cooperative effects between noise and nonlinear dynamics can be shown which, beyond stochastic resonance, might be of functional significance for stimulus encoding and neuromodulation.
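A minimal sketch of noise-induced spiking in a subthreshold oscillator is given below. It uses a generic FitzHugh-Nagumo-type model with additive noise as a stand-in, not the Hodgkin-Huxley-type model of the paper, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_oscillator(T=2000.0, dt=0.05, I=0.32, D=0.04):
    """Euler-Maruyama integration of a FitzHugh-Nagumo-like oscillator
    biased just below the spiking threshold, with additive current noise.
    This is a generic stand-in, not the model described in the abstract."""
    n = int(T / dt)
    v, w = -1.2, -0.6
    spikes, above = 0, False
    for _ in range(n):
        dv = v - v**3 / 3.0 - w + I
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        w += dt * dw
        if v > 1.0 and not above:   # simple threshold-crossing spike count
            spikes += 1
        above = v > 1.0
    return spikes

# More noise tends to produce more noise-induced spikes when the
# deterministic system is subthreshold.
for D in (0.0, 0.01, 0.05):
    print(D, noisy_oscillator(D=D))
```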
A computational exploration of the McCoy-Tracy-Wu solutions of the third Painlevé equation
NASA Astrophysics Data System (ADS)
Fasondini, Marco; Fornberg, Bengt; Weideman, J. A. C.
2018-01-01
The method recently developed by the authors for the computation of the multivalued Painlevé transcendents on their Riemann surfaces (Fasondini et al., 2017) is used to explore families of solutions to the third Painlevé equation that were identified by McCoy et al. (1977) and which contain a pole-free sector. Limiting cases, in which the solutions are singular functions of the parameters, are also investigated and it is shown that a particular set of limiting solutions is expressible in terms of special functions. Solutions that are single-valued, logarithmically (infinitely) branched and algebraically branched, with any number of distinct sheets, are encountered. The algebraically branched solutions have multiple pole-free sectors on their Riemann surfaces that are accounted for by using asymptotic formulae and Bäcklund transformations.
Silvetti, Massimo; Alexander, William; Verguts, Tom; Brown, Joshua W
2014-10-01
The role of the medial prefrontal cortex (mPFC), and especially the anterior cingulate cortex, has been the subject of intense debate for the last decade. A number of theories have been proposed to account for its function. Broadly speaking, some emphasize cognitive control, whereas others emphasize value processing; specific theories concern reward processing, conflict detection, error monitoring, and volatility detection, among others. Here we survey and evaluate them relative to experimental results from neurophysiological, anatomical, and cognitive studies. We argue for a new conceptualization of mPFC, arising from recent computational modeling work. Based on reinforcement learning theory, these new models propose that mPFC is an Actor-Critic system. This system aims to predict future events, including rewards, to evaluate errors in those predictions, and finally, to implement optimal skeletal-motor and visceromotor commands to obtain reward. This framework provides a comprehensive account of mPFC function, accounting for and predicting empirical results across different levels of analysis, including monkey neurophysiology, human ERP, human neuroimaging, and human behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.
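For readers unfamiliar with the Actor-Critic formalism invoked here, the following is a minimal tabular sketch in which a critic learns from prediction errors while an actor adjusts action preferences. The toy environment, reward structure, and learning rates are invented and bear no relation to the reviewed mPFC models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2
V = np.zeros(n_states)                   # critic: state-value estimates
prefs = np.zeros((n_states, n_actions))  # actor: action preferences
alpha_v, alpha_p, gamma = 0.1, 0.1, 0.9

def step(s, a):
    """Toy environment: moving right from the last state yields reward."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if (s == n_states - 1 and a == 1) else 0.0
    return s_next, r

for episode in range(200):
    s = 0
    for t in range(20):
        p = np.exp(prefs[s]) / np.exp(prefs[s]).sum()   # softmax policy
        a = rng.choice(n_actions, p=p)
        s_next, r = step(s, a)
        delta = r + gamma * V[s_next] - V[s]            # prediction error
        V[s] += alpha_v * delta                          # critic update
        prefs[s, a] += alpha_p * delta                   # actor update
        s = s_next

print(np.round(V, 2))
```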
Computational Neuropsychology and Bayesian Inference.
Parr, Thomas; Rees, Geraint; Friston, Karl J
2018-01-01
Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2018-02-01
Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example application to the northern Chilean subduction zone highlights the potential of this method.
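A minimal sketch of the kind of spatial covariance description mentioned here, assuming an exponential decay with pixel-to-pixel distance; the variance and e-folding length values are placeholders, not those estimated in the paper.

```python
import numpy as np

def exponential_covariance(coords, sigma2=1.0, lam=20.0):
    """Dense covariance matrix with C_ij = sigma2 * exp(-d_ij / lam),
    where d_ij is the distance between pixels i and j. sigma2 (variance)
    and lam (e-folding length, same units as coords) are illustrative."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / lam)

# Four pixels on a 2-D grid (e.g., kilometres)
pix = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 30.0], [50.0, 40.0]])
C = exponential_covariance(pix)
print(np.round(C, 3))
# A generalized least-squares inversion would then weight residuals by C^-1.
```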
Infrared conductivity of cuprates using Yang-Rice-Zhang ansatz: Review of our recent investigations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Navinder; Sharma, Raman
2015-05-15
A review of our recent investigations of the ac transport properties in the pseudogapped state of cuprate high-temperature superconductors is presented. For our theoretical calculations we use a phenomenological Green's function proposed by Yang, Rice and Zhang (YRZ). It is based upon the renormalized mean-field theory of the Hubbard model and takes into account the strong electron-electron interaction present in cuprates. The pseudogap is also taken into account through a proposed self-energy. We have tested the form of the Green's function by computing the ac conductivity of cuprates and comparing it with experimental results. We found agreement between theory and experiment in reproducing the doping evolution of the ac conductivity, but there is a problem with the absolute magnitudes and their frequency dependence. This indicates a partial success of the YRZ ansatz. Ways to rectify it are suggested and worked out.
Computing black hole partition functions from quasinormal modes
Arnold, Peter; Szepietowski, Phillip; Vaman, Diana
2016-07-07
We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. Furthermore, we then discuss the application of such techniques to more complicated spacetimes.
NASA Technical Reports Server (NTRS)
Rilee, Michael Lee; Kuo, Kwo-Sen
2017-01-01
The SpatioTemporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions into integer operations, e.g. conditional sub-setting, taking into account representative spatiotemporal resolutions of the data in the datasets. STARE geo-spatiotemporally aligns data placements of diverse data on massive parallel resources to maximize performance. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their efforts and expertise on data processing. With STARE-enabled automation, SciDB (Scientific Database) plus STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of interoperable data, and easing result sharing. Using SciDB plus STARE as part of an integrated analysis infrastructure dramatically eases combining diametrically different datasets.
Howes, Andrew; Lewis, Richard L; Vera, Alonso
2009-10-01
The authors assume that individuals adapt rationally to a utility function given constraints imposed by their cognitive architecture and the local task environment. This assumption underlies a new approach to modeling and understanding cognition, cognitively bounded rational analysis, that sharpens the predictive acuity of general, integrated theories of cognition and action. Such theories provide the necessary computational means to explain the flexible nature of human behavior but in doing so introduce extreme degrees of freedom in accounting for data. The new approach narrows the space of predicted behaviors through analysis of the payoff achieved by alternative strategies, rather than through fitting strategies and theoretical parameters to data. It extends and complements established approaches, including computational cognitive architectures, rational analysis, optimal motor control, bounded rationality, and signal detection theory. The authors illustrate the approach with a reanalysis of an existing account of psychological refractory period (PRP) dual-task performance and with the development and analysis of a new theory of ordered dual-task responses. These analyses yield several novel results, including a new understanding of the role of strategic variation in existing accounts of PRP and the first predictive, quantitative account showing how the details of ordered dual-task phenomena emerge from the rational control of a cognitive system subject to the combined constraints of internal variance, motor interference, and a response selection bottleneck.
NASA Astrophysics Data System (ADS)
Motamarri, Phani; Gavini, Vikram
2018-04-01
We derive the expressions for configurational forces in Kohn-Sham density functional theory, which correspond to the generalized variational force computed as the derivative of the Kohn-Sham energy functional with respect to the position of a material point x . These configurational forces that result from the inner variations of the Kohn-Sham energy functional provide a unified framework to compute atomic forces as well as stress tensor for geometry optimization. Importantly, owing to the variational nature of the formulation, these configurational forces inherently account for the Pulay corrections. The formulation presented in this work treats both pseudopotential and all-electron calculations in a single framework, and employs a local variational real-space formulation of Kohn-Sham density functional theory (DFT) expressed in terms of the nonorthogonal wave functions that is amenable to reduced-order scaling techniques. We demonstrate the accuracy and performance of the proposed configurational force approach on benchmark all-electron and pseudopotential calculations conducted using higher-order finite-element discretization. To this end, we examine the rates of convergence of the finite-element discretization in the computed forces and stresses for various materials systems, and, further, verify the accuracy from finite differencing the energy. Wherever applicable, we also compare the forces and stresses with those obtained from Kohn-Sham DFT calculations employing plane-wave basis (pseudopotential calculations) and Gaussian basis (all-electron calculations). Finally, we verify the accuracy of the forces on large materials systems involving a metallic aluminum nanocluster containing 666 atoms and an alkane chain containing 902 atoms, where the Kohn-Sham electronic ground state is computed using a reduced-order scaling subspace projection technique [P. Motamarri and V. Gavini, Phys. Rev. B 90, 115127 (2014), 10.1103/PhysRevB.90.115127].
streamgap-pepper: Effects of peppering streams with many small impacts
NASA Astrophysics Data System (ADS)
Bovy, Jo; Erkal, Denis; Sanders, Jason
2017-02-01
streamgap-pepper computes the effect of subhalo fly-bys on cold tidal streams based on the action-angle representation of streams. A line-of-parallel-angle approach is used to calculate the perturbed distribution function of a given stream segment by undoing the effect of all impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 Msun, accounting for the stream's internal dispersion and overlapping impacts. This code uses galpy (ascl:1411.008) and the streampepperdf.py galpy extension, which implements the fast calculation of the perturbed stream structure.
NASA Astrophysics Data System (ADS)
Taghavifar, Hadi; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid
2016-08-01
A multidimensional computational fluid dynamics code was developed and integrated with a probability density function combustion model to give a detailed account of multiphase fluid flow. The vapor phase within the injector domain is treated with a Reynolds-averaged Navier-Stokes technique. A new parameter is proposed which serves as an index of plane-cut spray propagation and takes into account the two parameters of spray penetration length and cone angle at the same time. It was found that the spray propagation index (SPI) tends to increase at lower r/d ratios, although the spray penetration tends to decrease. The results for SPI obtained with the empirical correlation of Hay and Jones were compared with the simulation computation as a function of the respective r/d ratio. Based on the results of this study, the spray distribution over the plane area correlates proportionally with the heat release amount, NOx emission mass fraction, and reduction of soot concentration. Higher cavitation is attributed to the sharp edge of the nozzle entrance, yielding better liquid jet disintegration and smaller spray droplets, which reduces the soot mass fraction of the late combustion process. In order to gain better insight into the cavitation phenomenon, turbulence magnitude in the nozzle and combustion chamber was acquired and depicted along with spray velocity.
Centralized Authentication with Kerberos 5, Part I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachsmann, A
Account administration in a distributed Unix/Linux environment can become very complicated and messy if done by hand. Large sites use special tools to deal with this problem. I will describe how even very small installations, like your three-computer network at home, can take advantage of the very same tools. The problem in a distributed environment is that password and shadow files need to be changed individually on each machine if an account change occurs. Account changes include: password changes, addition/removal of accounts, name changes of an account (UID/GID changes are a big problem in any case), additional or removed login privileges to a (group of) computer(s), etc. In this article, I will show how Kerberos 5 solves the authentication problem in a distributed computing environment. A second article will describe a solution for the authorization problem.
17 CFR 1.32 - Segregated account; daily computation and record.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Segregated account; daily computation and record. 1.32 Section 1.32 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Recordkeeping § 1.32 Segregated account...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2013 CFR
2013-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.
Vassena, Eliana; Holroyd, Clay B; Alexander, William H
2017-01-01
In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.
Precision digital control systems
NASA Astrophysics Data System (ADS)
Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.
This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.
Computational principles of working memory in sentence comprehension.
Lewis, Richard L; Vasishth, Shravan; Van Dyke, Julie A
2006-10-01
Understanding a sentence requires a working memory of the partial products of comprehension, so that linguistic relations between temporally distal parts of the sentence can be rapidly computed. We describe an emerging theoretical framework for this working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item (but not order) information subject to interference from similar items, and activation decay (forgetting over time). A computational model embodying these principles provides an explanation of the functional capacities and severe limitations of human processing, as well as accounts of reading times. The broad implication is that the detailed nature of cross-linguistic sentence processing emerges from the interaction of general principles of human memory with the specialized task of language comprehension.
DeRobertis, Christopher V.; Lu, Yantian T.
2010-02-23
A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
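A minimal sketch of the allocation logic described, assuming registries can be represented as username-to-UID maps; the registry contents, ID range, and function name are hypothetical.

```python
def find_unique_uid(registries, start=1000, limit=65535):
    """Return the first candidate UID not already assigned in any registry.

    `registries` is an iterable of dicts mapping usernames to UIDs; in a
    real clustered environment these would be the operating system's
    configured user registries (hypothetical representation here).
    """
    used = set()
    for registry in registries:
        used.update(registry.values())
    for candidate in range(start, limit + 1):
        if candidate not in used:
            return candidate
    raise RuntimeError("no free UID available in the configured range")

local_passwd = {"alice": 1000, "bob": 1001}
ldap_registry = {"carol": 1002, "dave": 2000}
print(find_unique_uid([local_passwd, ldap_registry]))  # -> 1003
```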
Teaching Accounting with Computers.
ERIC Educational Resources Information Center
Shaoul, Jean
This paper addresses the numerous ways that computers may be used to enhance the teaching of accounting and business topics. It focuses on the pedagogical use of spreadsheet software to improve the conceptual coverage of accounting principles and practice, increase student understanding by involvement in the solution process, and reduce the amount…
Evaluation of Farm Accounting Software. Improved Decision Making.
ERIC Educational Resources Information Center
Lovell, Ashley C., Comp.
This guide contains information on 36 computer programs used for farm and ranch accounting. This information and assessment of software features were provided by the manufacturers and vendors. Information is provided on the following items, among others: program name, vendor's name and address, computer and operating system, type of accounting and…
Functional Risk Modeling for Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed
2010-01-01
We introduce an approach to risk modeling that we call functional modeling, which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems' constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability, the effects of diverse backup, which often exists when two or more independent elements are connected together, are properly accounted for.
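A minimal sketch of how crediting functional diversity changes an availability estimate, assuming independent elements and invented availability numbers.

```python
from math import prod

def function_availability(element_availabilities):
    """Availability of a function provided redundantly by independent
    elements: the function is up unless every provider is down."""
    return 1.0 - prod(1.0 - a for a in element_availabilities)

# Illustrative numbers only: a function provided by both a habitat and a
# docked rover gets credit for the functional diversity.
habitat_only = function_availability([0.95])
habitat_plus_rover = function_availability([0.95, 0.90])
print(habitat_only, habitat_plus_rover)   # 0.95 vs 0.995
```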
Bernstein, Leslie R; Trahiotis, Constantine
2014-12-01
Binaural detection was measured as a function of the center frequency, bandwidth, and interaural correlation of masking noise. Thresholds were obtained for 500-Hz or 125-Hz Sπ tonal signals and for the latter stimuli (noise or signal-plus-noise) transposed to 4 kHz. A primary goal was assessment of the generality of van der Heijden and Trahiotis' [J. Acoust. Soc. Am. 101, 1019-1022 (1997)] hypothesis that thresholds could be accounted for by the "additive" masking effects of the underlying No and Nπ components of a masker having an interaural correlation of ρ. Results indicated that (1) the overall patterning of the data depended neither upon center frequency nor whether information was conveyed via the waveform or by its envelope; (2) thresholds for transposed stimuli improved relative to their low-frequency counterparts as bandwidth of the masker was increased; (3) the additivity approach accounted well for the data across stimulus conditions but consistently overestimated MLDs, especially for narrowband maskers; (4) a quantitative approach explicitly taking into account the distributions of time-varying ITD-based lateral positions produced by masker-alone and signal-plus-masker waveforms proved more successful, albeit while employing a larger set of assumptions, parameters, and computational complexity.
Analytical fitting model for rough-surface BRDF.
Renhorn, Ingmar G E; Boreman, Glenn D
2008-08-18
A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovato, A.; Gandolfi, S.; Carlson, J.
Here, the longitudinal and transverse electromagnetic response functions of $^{12}$C are computed in a "first-principles" Green's function Monte Carlo calculation, based on realistic two- and three-nucleon interactions and associated one- and two-body currents. We find excellent agreement between theory and experiment and, in particular, no evidence for the quenching of measured versus calculated longitudinal response. This is further corroborated by a re-analysis of the Coulomb sum rule, in which the contributions from the low-lying $J^\pi = 2^+$, $0^+$ (Hoyle), and $4^+$ states in $^{12}$C are accounted for explicitly in evaluating the total inelastic strength.
Accurate Energies and Orbital Description in Semi-Local Kohn-Sham DFT
NASA Astrophysics Data System (ADS)
Lindmaa, Alexander; Kuemmel, Stephan; Armiento, Rickard
2015-03-01
We present our progress on a scheme in semi-local Kohn-Sham density functional theory (KS-DFT) for improving the orbital description while retaining the level of accuracy of the usual semi-local exchange-correlation (xc) functionals. DFT is a widely used tool for first-principles calculations of materials properties. A given task normally requires a balance of accuracy and computational cost, which is well achieved with semi-local DFT. However, commonly used semi-local xc functionals have important shortcomings, which often can be attributed to features of the corresponding xc potential. One shortcoming is an overly delocalized representation of localized orbitals. Recently a semi-local GGA-type xc functional was constructed to address these issues; however, it has the trade-off of lower accuracy of the total energy. We discuss the source of this error in terms of a surplus energy contribution in the functional that needs to be accounted for, and offer a remedy for this issue that formally stays within KS-DFT and does not significantly increase the computational effort. The end result is a scheme that combines accurate total energies (e.g., relaxed geometries) with an improved orbital description (e.g., improved band structure).
Determination of acoustical transfer functions using an impulse method
NASA Astrophysics Data System (ADS)
MacPherson, J.
1985-02-01
The Transfer Function of a system may be defined as the relationship of the output response to the input of a system. Whilst recent advances in digital processing systems have enabled Impulse Transfer Functions to be determined by computation of the Fast Fourier Transform, there has been little work done in applying these techniques to room acoustics. Acoustical Transfer Functions have been determined for auditoria, using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The Impulse Transfer Function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear frequency scale plots and also as synthesized one-third octave band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required for the analysis is readily available commercially.
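A minimal sketch of the deconvolution step described here (dividing the receiver-point FFT by the source FFT), with a small regularization added so that near-zero source bins do not blow up; the synthetic "click" and its delayed copy are invented for illustration.

```python
import numpy as np

def impulse_transfer_function(source, received, fs):
    """Estimate a transfer function as FFT(received) / FFT(source).

    `source` is the (non-ideal) impulsive excitation recorded at the source,
    `received` the signal recorded at the receiver position; both are 1-D
    arrays sampled at fs Hz.
    """
    n = max(len(source), len(received))
    S = np.fft.rfft(source, n)
    R = np.fft.rfft(received, n)
    eps = 1e-6 * np.max(np.abs(S))
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps ** 2)   # regularized division
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, H

fs = 8000
t = np.arange(0, 0.25, 1.0 / fs)
source = np.exp(-200 * t) * np.sin(2 * np.pi * 900 * t)           # imperfect "click"
received = 0.4 * np.concatenate([np.zeros(80), source])[:len(t)]  # delayed, attenuated copy
freqs, H = impulse_transfer_function(source, received, fs)
print(abs(H[np.argmin(np.abs(freqs - 900))]))  # ~0.4 near the excitation band
```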
Hidden relationships between metalloproteins unveiled by structural comparison of their metal sites
NASA Astrophysics Data System (ADS)
Valasatava, Yana; Andreini, Claudia; Rosato, Antonio
2015-03-01
Metalloproteins account for a substantial fraction of all proteins. They incorporate metal atoms, which are required for their structure and/or function. Here we describe a new computational protocol to systematically compare and classify metal-binding sites on the basis of their structural similarity. These sites are extracted from the MetalPDB database of minimal functional sites (MFSs) in metal-binding biological macromolecules. Structural similarity is measured by the scoring function of the available MetalS2 program. Hierarchical clustering was used to organize MFSs into clusters, for each of which a representative MFS was identified. The comparison of all representative MFSs provided a thorough structure-based classification of the sites analyzed. As examples, the application of the proposed computational protocol to all heme-binding proteins and zinc-binding proteins of known structure highlighted the existence of structural subtypes, validated known evolutionary links and shed new light on the occurrence of similar sites in systems at different evolutionary distances. The present approach thus makes available an innovative viewpoint on metalloproteins, where the functionally crucial metal sites effectively lead the discovery of structural and functional relationships in a largely protein-independent manner.
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1979-01-01
Lumped volume dynamic equations are derived using an energy state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
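In standard textbook form (not quoted from the report), the equations of motion produced by this procedure, with a Rayleigh dissipation function accounting for losses, read:

```latex
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
 - \frac{\partial L}{\partial q_i}
 + \frac{\partial \mathcal{F}}{\partial \dot{q}_i} = Q_i,
\qquad L = T(q,\dot{q}) - V(q),
```

where T and V are the kinetic and potential energy state functions, \mathcal{F} is the Rayleigh dissipation function, and Q_i are any remaining generalized forces.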
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Accurate multiple sequence-structure alignment of RNA sequences using combinatorial optimization.
Bauer, Markus; Klau, Gunnar W; Reinert, Knut
2007-07-27
The discovery of functional non-coding RNA sequences has led to an increasing interest in algorithms related to RNA analysis. Traditional sequence alignment algorithms, however, fail at computing reliable alignments of low-homology RNA sequences. The spatial conformation of RNA sequences largely determines their function, and therefore RNA alignment algorithms have to take structural information into account. We present a graph-based representation for sequence-structure alignments, which we model as an integer linear program (ILP). We sketch how we compute an optimal or near-optimal solution to the ILP using methods from combinatorial optimization, and present results on a recently published benchmark set for RNA alignments. The implementation of our algorithm yields better alignments in terms of two published scores than the other programs that we tested: This is especially the case with an increasing number of input sequences. Our program LARA is freely available for academic purposes from http://www.planet-lisa.net.
Computational cognitive modeling of the temporal dynamics of fatigue from sleep loss.
Walsh, Matthew M; Gunzelmann, Glenn; Van Dongen, Hans P A
2017-12-01
Computational models have become common tools in psychology. They provide quantitative instantiations of theories that seek to explain the functioning of the human mind. In this paper, we focus on identifying deep theoretical similarities between two very different models. Both models are concerned with how fatigue from sleep loss impacts cognitive processing. The first is based on the diffusion model and posits that fatigue decreases the drift rate of the diffusion process. The second is based on the Adaptive Control of Thought - Rational (ACT-R) cognitive architecture and posits that fatigue decreases the utility of candidate actions leading to microlapses in cognitive processing. A biomathematical model of fatigue is used to control drift rate in the first account and utility in the second. We investigated the predicted response time distributions of these two integrated computational cognitive models for performance on a psychomotor vigilance test under conditions of total sleep deprivation, simulated shift work, and sustained sleep restriction. The models generated equivalent predictions of response time distributions with excellent goodness-of-fit to the human data. More importantly, although the accounts involve different modeling approaches and levels of abstraction, they represent the effects of fatigue in a functionally equivalent way: in both, fatigue decreases the signal-to-noise ratio in decision processes and decreases response inhibition. This convergence suggests that sleep loss impairs psychomotor vigilance performance through degradation of the quality of cognitive processing, which provides a foundation for systematic investigation of the effects of sleep loss on other aspects of cognition. Our findings illustrate the value of treating different modeling formalisms as vehicles for discovery.
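A minimal sketch of the first account (drift-rate reduction in a one-boundary diffusion process) is given below; the threshold, noise, deadline, and drift values are illustrative and not fitted to the human data discussed.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_rts(drift, threshold=1.0, noise=1.0, dt=0.002,
                 deadline=5.0, n_trials=400):
    """One-boundary diffusion model of a single-response vigilance trial:
    evidence accumulates with the given drift rate until it reaches the
    threshold, or the trial times out. Parameters are illustrative."""
    rts, lapses = [], 0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold and t < deadline:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        if x >= threshold:
            rts.append(t)
        else:
            lapses += 1          # no response before the deadline (a lapse)
    return np.median(rts), lapses

# Lowering the drift rate (a stand-in for fatigue) slows responses and
# produces more lapses.
for drift in (2.0, 1.0, 0.3):
    print(drift, simulate_rts(drift))
```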
Computational mechanisms underlying cortical responses to the affordance properties of visual scenes
Epstein, Russell A.
2018-01-01
Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we develop a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes; that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that responses in the CNN to scene images were highly predictive of fMRI responses in the OPA. Moreover, the CNN accounted for the portion of OPA variance relating to the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal operations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA. Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithms. PMID:29684011
Ernst, Marielle; Boers, Anna M M; Aigner, Annette; Berkhemer, Olvert A; Yoo, Albert J; Roos, Yvo B; Dippel, Diederik W J; van der Lugt, Aad; van Oostenbrugge, Robert J; van Zwam, Wim H; Fiehler, Jens; Marquering, Henk A; Majoie, Charles B L M
2017-09-01
Ischemic lesion volume (ILV) assessed by follow-up noncontrast computed tomography correlates only moderately with clinical end points, such as the modified Rankin Scale (mRS). We hypothesized that the association between follow-up noncontrast computed tomography ILV and outcome as assessed with mRS 3 months after stroke is strengthened when taking the mRS relevance of the infarct location into account. An anatomic atlas with 66 areas was registered to the follow-up noncontrast computed tomographic images of 254 patients from the MR CLEAN trial (Multicenter Randomized Clinical Trial of Endovascular Treatment of Acute Ischemic Stroke in the Netherlands). The anatomic brain areas were divided into brain areas of high, moderate, and low mRS relevance as reported in the literature. Based on this distinction, the ILV in brain areas of high, moderate, and low mRS relevance was assessed for each patient. Binary and ordinal logistic regression analyses with and without adjustment for known confounders were performed to assess the association between the ILVs of different mRS relevance and outcome. The odds for a worse outcome (higher mRS) were markedly higher given an increase of ILV in brain areas of high mRS relevance (odds ratio, 1.42; 95% confidence interval, 1.31-1.55 per 10 mL) compared with an increase in total ILV (odds ratio, 1.16; 95% confidence interval, 1.12-1.19 per 10 mL). Regression models using ILV in brain areas of high mRS relevance instead of total ILV showed higher quality. The association between follow-up noncontrast computed tomography ILV and outcome as assessed with mRS 3 months after stroke is strengthened by accounting for the mRS relevance of the affected brain areas. Future prediction models should account for the ILV in brain areas of high mRS relevance. © 2017 American Heart Association, Inc.
Functional interaction-based nonlinear models with application to multiplatform genomics data.
Davenport, Clemontina A; Maity, Arnab; Baladandayuthapani, Veerabhadran
2018-05-07
Functional regression allows for a scalar response to be dependent on a functional predictor; however, not much work has been done when a scalar exposure that interacts with the functional covariate is introduced. In this paper, we present 2 functional regression models that account for this interaction and propose 2 novel estimation procedures for the parameters in these models. These estimation methods allow for a noisy and/or sparsely observed functional covariate and are easily extended to generalized exponential family responses. We compute standard errors of our estimators, which allows for further statistical inference and hypothesis testing. We compare the performance of the proposed estimators to each other and to one found in the literature via simulation and demonstrate our methods using a real data example. Copyright © 2018 John Wiley & Sons, Ltd.
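One common way of writing such a scalar-by-function interaction model, in generic notation that may differ from the authors' exact parameterization, is:

```latex
Y_i = \beta_0 + \alpha Z_i
      + \int_{\mathcal{T}} X_i(t)\,\beta(t)\,dt
      + Z_i \int_{\mathcal{T}} X_i(t)\,\gamma(t)\,dt + \varepsilon_i,
```

where X_i(t) is the functional covariate, Z_i the scalar exposure, \beta(t) the main-effect coefficient function, and \gamma(t) the interaction coefficient function.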
Introducing the Microcomputer into Undergraduate Tax Courses.
ERIC Educational Resources Information Center
Dillaway, Manson P.; Savage, Allan H.
Although accountants have used computers for tax planning and tax return preparation for many years, tax education has been slow to reflect the increasing role of computers in tax accounting. The following are only some of the tasks that a business education department offering undergraduate tax courses for accounting majors should perform when…
26 CFR 1.164-1 - Deduction for taxes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the taxable year within which paid or accrued, according to the method of accounting used in computing... thereto, during the taxable year even though the taxpayer uses the accrual method of accounting for other... for the taxable year in which paid or accrued, according to the method of accounting used in computing...
Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe
2013-08-14
Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performances and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior, and the correct associated spectral frequencies, but that are slightly too overdamped. This is attributed to the classical propagation approximation rather than the generation of the quantized initial conditions themselves.
Vitale, Valerio; Dziedzic, Jacek; Dubois, Simon M-M; Fangohr, Hans; Skylaris, Chris-Kriton
2015-07-14
Density functional theory molecular dynamics (DFT-MD) provides an efficient framework for accurately computing several types of spectra. The major benefit of DFT-MD approaches lies in the ability to naturally take into account the effects of temperature and anharmonicity, without having to introduce any ad hoc or a posteriori corrections. Consequently, computational spectroscopy based on DFT-MD approaches plays a pivotal role in the understanding and assignment of experimental peaks and bands at finite temperature, particularly in the case of floppy molecules. Linear-scaling DFT methods can be used to study large and complex systems, such as peptides, DNA strands, amorphous solids, and molecules in solution. Here, we present the implementation of DFT-MD IR spectroscopy in the ONETEP linear-scaling code. In addition, two methods for partitioning the dipole moment within the ONETEP framework are presented. Dipole moment partitioning allows us to compute spectra of molecules in solution, which fully include the effects of the solvent, while at the same time removing the solvent contribution from the spectra.
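As context, IR spectra are commonly obtained from DFT-MD trajectories by Fourier transforming the dipole autocorrelation function. The sketch below shows that generic post-processing step (quantum correction factors and windowing omitted); it is not a description of the ONETEP implementation itself, and the synthetic dipole trajectory is invented.

```python
import numpy as np

def ir_spectrum(dipole, dt_fs):
    """IR absorption lineshape (arbitrary units) from an MD dipole trajectory,
    computed as the Fourier transform of the dipole autocorrelation function.
    `dipole` is an (n_steps, 3) array and dt_fs the MD time step in fs."""
    mu = dipole - dipole.mean(axis=0)
    n = len(mu)
    # autocorrelation summed over Cartesian components (lags 0..n-1)
    acf = sum(np.correlate(mu[:, k], mu[:, k], mode="full")[n - 1:] for k in range(3))
    acf /= np.arange(n, 0, -1)                      # unbiased normalization
    spectrum = np.abs(np.fft.rfft(acf))
    freq_cm1 = np.fft.rfftfreq(n, d=dt_fs * 1e-15) / 2.99792458e10  # Hz -> cm^-1
    return freq_cm1, spectrum

# Synthetic dipole oscillating at ~1600 cm^-1 (purely for illustration)
dt = 0.5                                            # fs
t = np.arange(8192) * dt * 1e-15                    # s
omega = 2 * np.pi * 1600 * 2.99792458e10            # rad/s
mu = np.column_stack([np.cos(omega * t), np.zeros_like(t), np.zeros_like(t)])
f, s = ir_spectrum(mu, dt)
print(f[np.argmax(s[1:]) + 1])                      # peaks near 1600 cm^-1
```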
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link together the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
2012-05-15
subroutine by adding time dependence to the thermal expansion coefficient. The user subroutine was written in Intel Visual Fortran that is compatible... temperature-history-dependent expansion and contraction, and the molds were modeled as elastic, taking into account both mechanical and thermal strain. In... behavior was approximated by assuming the thermal coefficient of expansion to be a fourth-order polynomial function of temperature. The authors
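A minimal sketch of evaluating such a temperature-dependent expansion coefficient, with placeholder polynomial coefficients that are not those of the cited work:

```python
import numpy as np

# Fourth-order polynomial representation of the thermal expansion
# coefficient alpha(T). Coefficients (highest order first, units 1/K)
# are invented placeholders, not values from the cited study.
coeffs = [1.0e-17, -2.0e-14, 1.5e-11, -4.0e-9, 6.0e-7]

def alpha(T_kelvin):
    """Evaluate alpha(T) = c4*T^4 + c3*T^3 + c2*T^2 + c1*T + c0."""
    return np.polyval(coeffs, T_kelvin)

print(alpha(np.array([300.0, 400.0, 500.0])))
```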
Magnetization dynamics driven by spin-polarized current in nanomagnets
NASA Astrophysics Data System (ADS)
Carpentieri, M.; Torres, L.; Azzerboni, B.; Finocchio, G.; Consolo, G.; Lopez-Diaz, L.
2007-09-01
In this report, micromagnetic simulations of magnetization dynamics driven by spin-polarized currents (SPCs) on magnetic nanopillars of permalloy/Cu/permalloy with different rectangular cross-sections are presented. Complete dynamical stability diagrams from initial parallel and antiparallel states have been computed for 100 ns. The effects of a space-dependent polarization function together with the presence of magnetostatic coupling from the fixed layer and classical Ampere field have been taken into account.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques with complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced for each objective function during the transformation process. This enhanced procedure provides the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
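For reference, the Kreisselmeier-Steinhauser envelope function in its commonly used, numerically stable form (the exact normalization and weighting used in the paper may differ) is:

```latex
KS(\mathbf{x}) = f_{\max}(\mathbf{x}) + \frac{1}{\rho}
\ln\!\left[\sum_{k=1}^{m} e^{\rho\,\left(f_k(\mathbf{x}) - f_{\max}(\mathbf{x})\right)}\right],
\qquad f_{\max}(\mathbf{x}) = \max_k f_k(\mathbf{x}),
```

where the f_k are the (suitably normalized and weighted) objective and constraint functions and \rho > 0 controls how closely the smooth envelope follows the maximum.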
Computer-Tutors and a Freshman Writer: A Protocol Study.
ERIC Educational Resources Information Center
Strickland, James
Although there are many retrospective accounts from teachers and professional writers concerning the effect of computers on their writing, there are few real-time accounts of students struggling to simultaneously develop as writers and cope with computers. To fill this void in "testimonial data," a study examining talking-aloud protocols from a…
ERIC Educational Resources Information Center
Basile, Anthony; D'Aquila, Jill M.
2002-01-01
Accounting students received either traditional instruction (n=46) or used computer-mediated communication and WebCT course management software. There were no significant differences in attitudes about the course. However, computer users were more positive about course delivery and course management tools. (Contains 17 references.) (SK)
Dimitriadis, Stavros I.; Zouridakis, George; Rezaie, Roozbeh; Babajani-Feremi, Abbas; Papanicolaou, Andrew C.
2015-01-01
Mild traumatic brain injury (mTBI) may affect normal cognition and behavior by disrupting the functional connectivity networks that mediate efficient communication among brain regions. In this study, we analyzed brain connectivity profiles from resting state Magnetoencephalographic (MEG) recordings obtained from 31 mTBI patients and 55 normal controls. We used phase-locking value estimates to compute functional connectivity graphs to quantify frequency-specific couplings between sensors at various frequency bands. Overall, normal controls showed a dense network of strong local connections and a limited number of long-range connections that accounted for approximately 20% of all connections, whereas mTBI patients showed networks characterized by weak local connections and strong long-range connections that accounted for more than 60% of all connections. Comparison of the two distinct general patterns at different frequencies using a tensor representation for the connectivity graphs and tensor subspace analysis for optimal feature extraction showed that mTBI patients could be separated from normal controls with 100% classification accuracy in the alpha band. These encouraging findings support the hypothesis that MEG-based functional connectivity patterns may be used as biomarkers that can provide more accurate diagnoses, help guide treatment, and monitor effectiveness of intervention in mTBI. PMID:26640764
DOE Office of Scientific and Technical Information (OSTI.GOV)
1996-05-01
The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown into an enterprise-wide information system that is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. For people, NWIS provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; and tracks accounts for centrally managed computing resources, monitoring and controlling the reauthorization of those accounts in accordance with the DOE-mandated interval. For computing devices, NWIS generates unique names for all devices registered in the system; tracks the manufacturer, make, model, Sandia property number, vendor serial number, operating system and version, owner, device location, amount of memory, amount of disk space, and level of support provided for each machine; tracks the hardware address for network cards; tracks the IP addresses registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access only to properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up each machine; tracks the roles people have on machines (system administrator, administrative access, user, etc.); allows system administrators to track hardware and software changes made on a machine; and generates an adjustment history of changes on selected fields.
Garsson, B
1988-01-01
Remember that computer software is designed for accrual accounting, whereas your business operates and reports income on a cash basis. The rules of tax law stipulate that professional practices may use the cash method of accounting, but if accrual accounting is ever used to report taxable income, the government may not permit a switch back to cash accounting. Therefore, always consider the computer a bookkeeper, not a substitute for a qualified accountant. (Your accountant will have readily accessible payroll and general ledger data available for analysis and tax reports, thanks to the magic of computer processing.) Accounts payable reports are interfaced with the general ledger and are of interest for transaction detail, open invoice and cash flow analysis, and for a record of payments by vendor. Payroll reports, including check register and withholding detail, are provided and interfaced with the general ledger. The use of accounting software expands the use of in-office computers to areas beyond professional billing and insurance form generation. It simplifies payroll recordkeeping; maintains payables details; integrates payables, receivables, and payroll with general ledger files; provides instantaneous information on all aspects of the business office; and creates a continuous "audit trail" as data are entered. The availability of packaged accounting software allows the professional business office an array of choices. The person(s) responsible for bookkeeping and accounting should choose carefully, ensuring that any system is easy to use, has been thoroughly tested, and provides at least as much control over office records as has been outlined in this article.
Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A
2017-03-21
It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook the randomness of genotypes. In this study, a family of hierarchical Bayesian models was developed to perform across-population genome-wide prediction, modeling genotypes as random variables and allowing population-specific effects for each marker. The models shared a common structure and differed in the priors used and in the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information that permitted not only accounting for heterogeneity of allelic frequencies, but also including individuals with missing genotypes at some or all loci without the need for prior imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. Theoretical and computational issues, along with possible applications, extensions, and refinements, were discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of the information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies are needed to assess the performance of the models proposed here and to compare them with conventional models used in genome-wide prediction. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo-based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. The results of simulating human skin reflectance spectra, the corresponding skin colours, and examples of 3D face rendering are presented and compared with the results of phantom studies.
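For orientation only, the sketch below shows the elementary step shared by most photon-transport Monte Carlo schemes: sampling a free path length from exponential (Beer-Lambert) attenuation. It is not the authors' GPU/HTML5 implementation, and the interaction coefficient is a placeholder value.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_free_path(mu_t):
    """Sample a photon free path length (in mm) for a total interaction
    coefficient mu_t (1/mm), the standard step in photon-transport
    Monte Carlo: l = -ln(xi) / mu_t with xi uniform on (0, 1]."""
    return -np.log(1.0 - rng.random()) / mu_t

# Example: for mu_t = 10 /mm the mean free path is about 0.1 mm.
paths = np.array([sample_free_path(10.0) for _ in range(100000)])
print(paths.mean())
```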
The 1-loop effective potential for the Standard Model in curved spacetime
NASA Astrophysics Data System (ADS)
Markkanen, Tommi; Nurmi, Sami; Rajantie, Arttu; Stopyra, Stephen
2018-06-01
The renormalisation group improved Standard Model effective potential in an arbitrary curved spacetime is computed to one-loop order in perturbation theory. The loop corrections are computed in the ultraviolet limit, which makes them independent of the choice of the vacuum state and allows the derivation of the complete set of β-functions. The potential depends on the spacetime curvature through the direct non-minimal Higgs-curvature coupling, curvature contributions to the loop diagrams, and the curvature dependence of the renormalisation scale. Together, these lead to significant curvature dependence, which needs to be taken into account in cosmological applications, as demonstrated with the example of vacuum stability in de Sitter space.
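For reference, the schematic flat-space form of a one-loop effective potential in the MS-bar scheme is recalled below. The paper's curved-spacetime result contains additional curvature-dependent terms (for instance, effective masses acquire contributions of the schematic form M_i^2(φ) + ξ_i R), so this expression should be read only as the structure that the curvature corrections modify.

```latex
V_{1\text{-loop}}(\phi,\mu) \;=\; \sum_i \frac{n_i\, M_i^4(\phi)}{64\pi^2}
\left[\ln\frac{M_i^2(\phi)}{\mu^2} - c_i\right],
```

where the sum runs over the field content, n_i counts degrees of freedom (with a negative sign for fermions), and c_i are scheme constants (3/2 for scalars and fermions, 5/6 for gauge bosons).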
NASA Technical Reports Server (NTRS)
Wu, R. W.; Witmer, E. A.
1972-01-01
A user-oriented FORTRAN 4 computer program, called JET 3, is presented. The JET 3 program, which employs the spatial finite-element and timewise finite-difference method, can be used to predict the large two-dimensional elastic-plastic transient Kirchhoff-type deformations of a complete or partial structural ring, with various support conditions and restraints, subjected to a variety of initial velocity distributions and externally-applied transient forcing functions. The geometric shapes of the structural ring can be circular or arbitrarily curved and with variable thickness. Strain-hardening and strain-rate effects of the material are taken into account.
NASA Technical Reports Server (NTRS)
Noor, A. K. (Editor); Housner, J. M.
1983-01-01
The mechanics of materials and material characterization are considered, taking into account micromechanics, the behavior of steel structures at elevated temperatures, and an anisotropic plasticity model for inelastic multiaxial cyclic deformation. Other topics explored are related to advances and trends in finite element technology, classical analytical techniques and their computer implementation, interactive computing and computational strategies for nonlinear problems, advances and trends in numerical analysis, database management systems and CAD/CAM, space structures and vehicle crashworthiness, beams, plates and fibrous composite structures, design-oriented analysis, artificial intelligence and optimization, contact problems, random waves, and lifetime prediction. Earthquake-resistant structures and other advanced structural applications are also discussed, giving attention to cumulative damage in steel structures subjected to earthquake ground motions, and a mixed domain analysis of nuclear containment structures using impulse functions.
Correction to "Energy Transport in the Thermosphere During the Solar Storms of April 2002"
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.; Martin-Torres, F. Javier; Russell, James M., III
2007-01-01
We present corrected computations of the infrared power and energy radiated by nitric oxide (NO) and carbon dioxide (CO2) during the solar storm event of April 2002. The computations in our previous paper underestimated the radiated power due to improper weighting of the radiated power and energy with respect to area as a function of latitude. We now find that the radiation by NO during the April 2002 storm period accounts for 50% of the estimated energy input to the atmosphere from the solar storm. The prior estimate was 28.5%. Emission computed for CO2 is also correspondingly increased, but the relative roles of CO2 and NO remain unchanged. NO emission enhancement is still, far and away, the dominant infrared response to the solar storms of April 2002.
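The improper area weighting mentioned above comes down to the cos(latitude) factor in the area of a latitude band. A minimal sketch of the corrected zonal integration is given below; the flux values and grid are purely illustrative.

```python
import numpy as np

def global_power(flux_w_m2, lat_deg, radius_m=6.371e6):
    """Integrate a zonal-mean flux (W/m^2), given on a latitude grid,
    over the sphere.  The area of a latitude band is proportional to
    cos(latitude), which is the weighting discussed in the correction."""
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    flux = np.asarray(flux_w_m2, dtype=float)
    # Band area element: 2 * pi * R^2 * cos(lat) * dlat
    weights = 2.0 * np.pi * radius_m**2 * np.cos(lat)
    return np.trapz(flux * weights, lat)

# Example: a uniform 1 W/m^2 flux integrates to Earth's surface area (~5.1e14 W).
lats = np.linspace(-90, 90, 181)
print(global_power(np.ones_like(lats), lats))
```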
System Access | High-Performance Computing | NREL
Access to the high-performance computing systems is via a secure shell (SSH) gateway or virtual private network (VPN). User accounts: request a user account.
Transient Reliability Analysis Capability Developed for CARES/Life
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2001-01-01
The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle-material (e.g., ceramic, intermetallic, and graphite) structures in a wide variety of 21st-century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow-crack-growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple but vary with time in more complex ways, such as engine startup, shutdown, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology is generalized to account for material property variation (in strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, components undergoing thermal shock. In addition, the capability has been developed to perform reliability analysis for components that undergo proof testing involving transient loads. This methodology was developed for environmentally assisted crack growth (crack growth as a function of time and loading), but it will be extended to account for cyclic fatigue (crack growth as a function of load cycles) as well.
NASA Astrophysics Data System (ADS)
Frolova, Irina; Agakhanov, Murad
2018-03-01
The development of computational techniques for analyzing underground structures and high-rise buildings that fully take account of their design and operating conditions, as well as real material properties, is one of the important trends in structural mechanics. On territories under high-rise construction it is necessary to monitor deformations of the soil surface. In high-rise construction it is recommended to take into account the rheological properties and temperature deformations of the soil, and the effect of temperature on the mechanical characteristics of the surrounding massif. Similar tasks also arise in the creation and operation of the underground parts of high-rise buildings, which are used for various purposes. These parts of the structures are surrounded by rock massifs of various materials, and the actual mechanical characteristics of such materials must be taken into account. An objective property of nearly all materials is their non-homogeneity, both natural and technological. The work addresses the construction of initial models of nonhomogeneous media based on experimental evidence. This makes it possible to approximate real dependencies and obtain the appropriate functions in a simple and convenient way.
Signatures of van der Waals binding: A coupling-constant scaling analysis
NASA Astrophysics Data System (ADS)
Jiao, Yang; Schröder, Elsebeth; Hyldgaard, Per
2018-02-01
The van der Waals (vdW) density functional (vdW-DF) method [Rep. Prog. Phys. 78, 066501 (2015), 10.1088/0034-4885/78/6/066501] describes dispersion or vdW binding by tracking the effects of an electrodynamic coupling among pairs of electrons and their associated exchange-correlation holes. This is done in a nonlocal-correlation energy term Ecnl, which permits density functional theory calculation in the Kohn-Sham scheme. However, to map the nature of vdW forces in a fully interacting materials system, it is necessary to also account for associated kinetic-correlation energy effects. Here, we present a coupling-constant scaling analysis, which permits us to compute the kinetic-correlation energy Tcnl that is specific to the vdW-DF account of nonlocal correlations. We thus provide a more complete spatially resolved analysis of the electrodynamical-coupling nature of nonlocal-correlation binding, including vdW attraction, in both covalently and noncovalently bonded systems. We find that kinetic-correlation energy effects play a significant role in the account of vdW or dispersion interactions among molecules. Furthermore, our mapping shows that the total nonlocal-correlation binding is concentrated to pockets in the sparse electron distribution located between the material fragments.
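For reference, the coupling-constant (adiabatic-connection) relations underlying such an analysis can be written in the standard form below, where W_{λ,c} denotes the potential part of the correlation energy at coupling strength λ; the nonlocal kinetic-correlation term discussed in the paper follows by restricting these relations to the E_c^nl piece.

```latex
E_{\mathrm{c}}[n] \;=\; \int_0^1 d\lambda\, W_{\lambda,\mathrm{c}}[n],
\qquad
T_{\mathrm{c}}[n] \;=\; E_{\mathrm{c}}[n] \;-\; W_{1,\mathrm{c}}[n].
```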
Two-component hybrid time-dependent density functional theory within the Tamm-Dancoff approximation.
Kühn, Michael; Weigend, Florian
2015-01-21
We report the implementation of a two-component variant of time-dependent density functional theory (TDDFT) for hybrid functionals that accounts for spin-orbit effects within the Tamm-Dancoff approximation (TDA) for closed-shell systems. The influence of the admixture of Hartree-Fock exchange on excitation energies is investigated for several atoms and diatomic molecules by comparison to numbers for pure density functionals obtained previously [M. Kühn and F. Weigend, J. Chem. Theory Comput. 9, 5341 (2013)]. It is further related to changes upon switching to the local density approximation or using the full TDDFT formalism instead of TDA. Efficiency is demonstrated for a comparably large system, Ir(ppy)3 (61 atoms, 1501 basis functions, lowest 10 excited states), which is a prototype molecule for organic light-emitting diodes, due to its "spin-forbidden" triplet-singlet transition.
Common Accounting System for Monitoring the ATLAS Distributed Computing Resources
NASA Astrophysics Data System (ADS)
Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration
2014-06-01
This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.
ERIC Educational Resources Information Center
Peng, Jacob C.
2009-01-01
The author investigated whether students' effort in working on homework problems was affected by their need for cognition, their perception of the system, and their computer efficacy when instructors used an online system to collect accounting homework. Results showed that individual intrinsic motivation and computer efficacy are important factors…
A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium
NASA Astrophysics Data System (ADS)
Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand
2014-05-01
The field of atmospheric radiation has seen the development of more accurate and faster methods to take into account absorption in participating media. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a sufficiently accurate method to compute cooling rates. Thanks to high-performance computing, a multi-spectral approach to solving the Radiative Transfer Equation (RTE) is most often used. Nevertheless, the coupling of three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) to Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating emissivity functions and the linear absorption coefficient. In the case of the cooling-to-space approximation, this analytical expression gives very accurate results compared to a correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm. One-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method. Then, the three-dimensional RTE under the grey-medium assumption is solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature, and gas concentrations.
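The minimal sketch below illustrates only the conversion at the heart of the grey-medium step: turning a broadband emissivity over a path into a local linear absorption coefficient via eps = 1 - exp(-kL). The emissivity and path-length values are placeholders; the actual method obtains emissivities from Sasamori's analytical fits as functions of absorber amount.

```python
import numpy as np

def grey_absorption_coefficient(emissivity, path_length_m):
    """Infer a spectrally averaged ('grey') linear absorption coefficient
    (1/m) from a broadband emissivity over a given path, using the
    grey-medium relation eps = 1 - exp(-k * L)."""
    emissivity = np.clip(emissivity, 0.0, 0.999999)
    return -np.log(1.0 - emissivity) / path_length_m

# Example: an emissivity of 0.3 over a 100 m path gives k of about 3.6e-3 /m.
print(grey_absorption_coefficient(0.3, 100.0))
```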
Working-memory capacity protects model-based learning from stress.
Otto, A Ross; Raio, Candace M; Chiang, Alice; Phelps, Elizabeth A; Daw, Nathaniel D
2013-12-24
Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive-dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response--believed to have detrimental effects on prefrontal cortex function--should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that stress response attenuates the contribution of model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress.
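In this literature the relative contribution of the two strategies is commonly captured by a single weighting parameter, and the sketch below shows that mixture in its simplest form. It is a generic illustration, not the exact regression analysis of the study, and the numbers are placeholders.

```python
def combined_action_value(q_mb, q_mf, w):
    """Weighted mixture of model-based and model-free action values,
    the standard way the relative contribution of the two strategies is
    parameterized in sequential decision tasks (w = 1 is purely
    model-based, w = 0 is purely model-free)."""
    return w * q_mb + (1.0 - w) * q_mf

# Under acute stress the fitted w is expected to shrink (choices look more
# model-free); high working-memory capacity is reported to buffer this shift.
print(combined_action_value(q_mb=0.8, q_mf=0.2, w=0.3))
```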
2015-01-01
The 0–0 energies of 80 medium and large molecules have been computed with a large panel of theoretical formalisms. We have used an approach computationally tractable for large molecules, that is, the structural and vibrational parameters are obtained with TD-DFT, the solvent effects are accounted for with the PCM model, whereas the total and transition energies have been determined with TD-DFT and with five wave function approaches accounting for contributions from double excitations, namely, CIS(D), ADC(2), CC2, SCS-CC2, and SOS-CC2, as well as Green’s function based BSE/GW approach. Atomic basis sets including diffuse functions have been systematically applied, and several variations of the PCM have been evaluated. Using solvent corrections obtained with corrected linear-response approach, we found that three schemes, namely, ADC(2), CC2, and BSE/GW allow one to reach a mean absolute deviation smaller than 0.15 eV compared to the measurements, the two former yielding slightly better correlation with experiments than the latter. CIS(D), SCS-CC2, and SOS-CC2 provide significantly larger deviations, though the latter approach delivers highly consistent transition energies. In addition, we show that (i) ADC(2) and CC2 values are extremely close to each other but for systems absorbing at low energies; (ii) the linear-response PCM scheme tends to overestimate solvation effects; and that (iii) the average impact of nonequilibrium correction on 0–0 energies is negligible. PMID:26574326
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahu, Sivabrata, E-mail: siva1987@iopb.res.in; Parashar, S. K. S., E-mail: sksparashar@yahoo.com; Rout, G. C., E-mail: gcr@iopb.res.in
We address here a tight-binding theoretical model calculation for AA-stacked bilayer graphene, taking into account a bias potential between the two layers, to study the density of states and the band dispersion within the total Brillouin zone. We have calculated the electronic Green's functions for the electron operators corresponding to the A and B sublattices by Zubarev's Green's function technique, from which the electronic density of states and the electron band energy dispersion are calculated. The numerically computed density of states and band energy dispersions are investigated by tuning the bias potential to exhibit the band gap and by varying the different physical parameters.
Electromagnetic response of C 12 : A first-principles calculation
Lovato, A.; Gandolfi, S.; Carlson, J.; ...
2016-08-15
Here, the longitudinal and transverse electromagnetic response functions of 12C are computed in a "first-principles" Green's function Monte Carlo calculation, based on realistic two- and three-nucleon interactions and associated one- and two-body currents. We find excellent agreement between theory and experiment and, in particular, no evidence for the quenching of measured versus calculated longitudinal response. This is further corroborated by a re-analysis of the Coulomb sum rule, in which the contributions from the low-lying Jπ = 2+, 0+ (Hoyle), and 4+ states in 12C are accounted for explicitly in evaluating the total inelastic strength.
Innovative microwave design leads to smart, small EW systems
NASA Astrophysics Data System (ADS)
Niehenke, Edward C.
1988-02-01
An account is given of the state of the art in microwave component and system design for EW systems, whose size and weight have been progressively reduced in recent years as a result of continuing design innovation in microwave circuitry. Typically, AI-function computers are employed to control microwave functions in a way that allows rapid RAM or ROM software modification to meet new performance requirements, thereby obviating hardware modifications. Attention is given to high-isolation GaAs MMIC filters, switches and amplifiers, frequency converters, instantaneous frequency measurement systems, frequency translators, digital RF memories, and high effective-radiated-power solid-state active antenna arrays.
Direct pair production in heavy-ion--atom collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anholt, R.; Jakubassa-Amundsen, D.H.; Amundsen, P.A.
1983-02-01
Direct pair production in approximately 5-MeV/amu heavy-ion--atom collisions with uranium target atoms is calculated with the plane-wave Born approximation and the semiclassical approximation. Briggs's approximation is used to obtain the electron and positron wave functions. Since pair production involves high momentum transfer q from the moving projectile to the vacuum, use is made of a high-q approximation to greatly simplify the numerical computations. Coulomb deflection of the projectile, the effect of finite nuclear size on the electronic wave functions, and the energy loss by the projectile exciting the pair are all taken into account in these calculations.
Benchmark Linelists and Radiative Cooling Functions for LiH Isotopologues
NASA Astrophysics Data System (ADS)
Diniz, Leonardo G.; Alijah, Alexander; Mohallem, José R.
2018-04-01
Linelists and radiative cooling functions in the local thermodynamic equilibrium limit have been computed for the six most important isotopologues of lithium hydride, 7LiH, 6LiH, 7LiD, 6LiD, 7LiT, and 6LiT. The data are based on the most accurate dipole moment and potential energy curves presently available, the latter including adiabatic and leading relativistic corrections. Distance-dependent reduced vibrational masses are used to account for non-adiabatic corrections of the rovibrational energy levels. Even for 7LiH, for which linelists have been reported previously, the present linelist is more accurate. Among all isotopologues, 7LiH and 6LiH are the best coolants, as shown by the radiative cooling functions.
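For context, a radiative cooling function in the LTE limit can be written, in one common convention, as the total power emitted per molecule (some published cooling functions include an additional factor of 1/4π for emission per steradian):

```latex
W(T) \;=\; \frac{1}{Q(T)} \sum_{u} g_u\, e^{-E_u/k_B T} \sum_{l<u} A_{ul}\, h\nu_{ul},
\qquad
Q(T) \;=\; \sum_{i} g_i\, e^{-E_i/k_B T},
```

where the level energies E, degeneracies g, Einstein A coefficients, and transition frequencies ν are taken directly from the linelist.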
Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.
Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan
2015-01-01
Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways, and the complexity of their songs, which are essential for mate attraction. Recent approaches aimed at describing the behavioral preference functions of females in both taxa by a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models'-each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters several types of preference functions could be modeled, which are observed in different cricket species. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
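A minimal Python sketch of one LN channel of the kind described above is given below: a Gabor filter applied to a song envelope, followed by a static nonlinearity and temporal integration. The filter parameters, the rectifying nonlinearity, and the stimulus are placeholders, not fitted values from the study, and the full model combines several such channels linearly.

```python
import numpy as np

def gabor(t, sigma, freq, phase=0.0):
    """Gabor filter: a Gaussian-windowed sinusoid, the filter shape the
    genetic algorithm converged on in both taxa (per the abstract)."""
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * t + phase)

def ln_model_response(stimulus, dt, sigma, freq, threshold=0.0):
    """Minimal LN stage: linear filtering of the song envelope followed by a
    static (here, rectifying) nonlinearity, then temporal integration."""
    t = np.arange(-5 * sigma, 5 * sigma, dt)
    filt = gabor(t, sigma, freq)
    drive = np.convolve(stimulus, filt, mode="same") * dt
    nonlinear = np.maximum(drive - threshold, 0.0)   # rectification
    return nonlinear.sum() * dt                      # temporal integration

# Example: response of one hypothetical LN channel to a 100 Hz AM envelope.
dt = 1e-4
tt = np.arange(0.0, 0.5, dt)
envelope = 0.5 * (1 + np.cos(2 * np.pi * 100 * tt))
print(ln_model_response(envelope, dt, sigma=5e-3, freq=100.0))
```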
A draft annotation and overview of the human genome
Wright, Fred A; Lemon, William J; Zhao, Wei D; Sears, Russell; Zhuo, Degen; Wang, Jian-Ping; Yang, Hee-Yung; Baer, Troy; Stredney, Don; Spitzner, Joe; Stutz, Al; Krahe, Ralf; Yuan, Bo
2001-01-01
Background The recent draft assembly of the human genome provides a unified basis for describing genomic structure and function. The draft is sufficiently accurate to provide useful annotation, enabling direct observations of previously inferred biological phenomena. Results We report here a functionally annotated human gene index placed directly on the genome. The index is based on the integration of public transcript, protein, and mapping information, supplemented with computational prediction. We describe numerous global features of the genome and examine the relationship of various genetic maps with the assembly. In addition, initial sequence analysis reveals highly ordered chromosomal landscapes associated with paralogous gene clusters and distinct functional compartments. Finally, these annotation data were synthesized to produce observations of gene density and number that accord well with historical estimates. Such a global approach had previously been described only for chromosomes 21 and 22, which together account for 2.2% of the genome. Conclusions We estimate that the genome contains 65,000-75,000 transcriptional units, with exon sequences comprising 4%. The creation of a comprehensive gene index requires the synthesis of all available computational and experimental evidence. PMID:11516338
49 CFR 1245.5 - Classification of job titles.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., Computer Programmer, Computer Analyst, Market Analyst, Pricing Analyst, Employment Supervisor, Research..., Traveling Auditors or Accountants Title is descriptive Traveling Auditor, Accounting Specialist Auditors... 21; adds new titles. 207 Supervising and Chief Claim Agents Title is descriptive Chief Claim Agent...
49 CFR 1245.5 - Classification of job titles.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., Computer Programmer, Computer Analyst, Market Analyst, Pricing Analyst, Employment Supervisor, Research..., Traveling Auditors or Accountants Title is descriptive Traveling Auditor, Accounting Specialist Auditors... 21; adds new titles. 207 Supervising and Chief Claim Agents Title is descriptive Chief Claim Agent...
49 CFR 1245.5 - Classification of job titles.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., Computer Programmer, Computer Analyst, Market Analyst, Pricing Analyst, Employment Supervisor, Research..., Traveling Auditors or Accountants Title is descriptive Traveling Auditor, Accounting Specialist Auditors... 21; adds new titles. 207 Supervising and Chief Claim Agents Title is descriptive Chief Claim Agent...
49 CFR 1245.5 - Classification of job titles.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., Computer Programmer, Computer Analyst, Market Analyst, Pricing Analyst, Employment Supervisor, Research..., Traveling Auditors or Accountants Title is descriptive Traveling Auditor, Accounting Specialist Auditors... 21; adds new titles. 207 Supervising and Chief Claim Agents Title is descriptive Chief Claim Agent...
49 CFR 1245.5 - Classification of job titles.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., Computer Programmer, Computer Analyst, Market Analyst, Pricing Analyst, Employment Supervisor, Research..., Traveling Auditors or Accountants Title is descriptive Traveling Auditor, Accounting Specialist Auditors... 21; adds new titles. 207 Supervising and Chief Claim Agents Title is descriptive Chief Claim Agent...
Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment
DOT National Transportation Integrated Search
1976-10-01
A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...
Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity.
Boulle, A; Conchon, F; Guinebretière, R
2006-01-01
A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral and the integrand has a simple analytical expression, i.e. there is no significant increase in computing times by taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, ...) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire.
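As a one-dimensional illustration of the size-averaging step, the sketch below computes an incoherent average of single-crystallite interference functions over a lognormal thickness distribution. The paper's actual model evaluates full two-dimensional reciprocal-space maps with shape-dependent form factors and instrumental resolution; all parameter values here are placeholders.

```python
import numpy as np

def lognormal_weights(sizes, median, sigma):
    """Lognormal size probability density (one of the distributions
    considered in the paper), normalized numerically on the size grid."""
    w = np.exp(-(np.log(sizes / median))**2 / (2 * sigma**2)) / sizes
    return w / np.trapz(w, sizes)

def averaged_profile(q, sizes, weights):
    """Size-averaged 1D interference function: each crystallite of thickness D
    contributes |sin(pi q D)/(pi q)|^2, and contributions are weighted by the
    size distribution (incoherent average)."""
    q = np.asarray(q, dtype=float)[:, None]
    d = np.asarray(sizes, dtype=float)[None, :]
    single = (np.sin(np.pi * q * d) / (np.pi * q))**2
    return np.trapz(single * weights[None, :], sizes, axis=1)

# Example: profile near a reflection for a lognormal thickness distribution.
sizes = np.linspace(5.0, 100.0, 400)          # nm
w = lognormal_weights(sizes, median=30.0, sigma=0.3)
q = np.linspace(1e-4, 0.2, 500)               # 1/nm, deviation from the peak
print(averaged_profile(q, sizes, w)[:3])
```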
Etienne, Thibaud; Very, Thibaut; Perpète, Eric A; Monari, Antonio; Assfeld, Xavier
2013-05-02
We present a time-dependent density functional theory computation of the absorption spectra of one β-carboline system: the harmane molecule in its neutral and cationic forms. The spectra are computed in aqueous solution. The interaction of cationic harmane with DNA is also studied. In particular, the use of hybrid quantum mechanics/molecular mechanics methods is discussed, together with its coupling to a molecular dynamics strategy to take into account dynamic effects of the environment and the vibrational degrees of freedom of the chromophore. Different levels of treatment of the environment are addressed starting from purely mechanical embedding to electrostatic and polarizable embedding. We show that a static description of the spectrum based on equilibrium geometry only is unable to give a correct agreement with experimental results, and dynamic effects need to be taken into account. The presence of two stable noncovalent interaction modes between harmane and DNA is also presented, as well as the associated absorption spectrum of harmane cation.
Inferring brain-computational mechanisms with models of activity measurements
Diedrichsen, Jörn
2016-01-01
High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM by a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in functional magnetic resonance imaging (fMRI) voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. To avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a novel particular implementation of this approach, called probabilistic representational similarity analysis (pRSA) with MMs, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognize the data-generating model in each case. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574316
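A minimal sketch of the RDM summary statistic used by (p)RSA is shown below, computed with the common correlation-distance choice. pRSA additionally couples each brain-computational model to a measurement model and performs Bayesian inference over models, none of which is shown here; the simulated data are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def representational_dissimilarity_matrix(patterns):
    """Compute an RDM from a (conditions x measurement channels) matrix of
    activity patterns, using correlation distance (1 - Pearson r), a common
    choice of RSA summary statistic."""
    return squareform(pdist(patterns, metric="correlation"))

# Example: 8 stimulus conditions measured in 200 voxels (simulated noise).
rng = np.random.default_rng(1)
patterns = rng.standard_normal((8, 200))
rdm = representational_dissimilarity_matrix(patterns)
print(rdm.shape)  # (8, 8), zeros on the diagonal
```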
Statistical substantiation of the van der Waals theory of inhomogeneous fluids
NASA Astrophysics Data System (ADS)
Baidakov, V. G.; Protsenko, S. P.; Chernykh, G. G.; Boltachev, G. Sh.
2002-04-01
Computer experiments on simulation of thermodynamic properties and structural characteristics of a Lennard-Jones fluid in one- and two-phase models have been performed for the purpose of checking the base concepts of the van der Waals theory. Calculations have been performed by the method of molecular dynamics at cutoff radii of the intermolecular potential rc,1=2.6σ and rc,2=6.78σ. The phase equilibrium parameters, surface tension, and density distribution have been determined in a two-phase model with a flat liquid-vapor interface. The strong dependence of these properties on the value of rc is shown. The p,ρ,T properties and correlation functions have been calculated in a homogeneous model for a stable and a metastable fluid. An equation of state for a Lennard-Jones fluid describing stable, metastable, and labile regions has been built. It is shown that at T>=1.1 the properties of a flat interface within the computer experimental error can be described by the van der Waals square-gradient theory with an influence parameter κ independent of the density. Taking into account the density dependence of κ through the second moment of the direct correlation function will deteriorate the agreement of the theory with data of computer simulation. The contribution of terms of a higher order than (∇ρ)2 to the Helmholtz free energy of an inhomogeneous system has been considered. It is shown that taking into account terms proportional to (∇ρ)4 leaves no way of obtaining agreement between the theory and simulation data, while taking into consideration of terms proportional to (∇ρ)6 makes it possible to describe with adequate accuracy all the properties of a flat interface in the temperature range from the triple to the critical point.
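For reference, the van der Waals square-gradient functional and the resulting flat-interface surface tension discussed above take the standard form

```latex
F[\rho] \;=\; \int \Big[\, f_0(\rho(\mathbf{r})) + \tfrac{\kappa}{2}\,|\nabla\rho|^2 \Big]\, d\mathbf{r},
\qquad
\gamma \;=\; \int_{-\infty}^{\infty} \kappa \left(\frac{d\rho}{dz}\right)^{2} dz
\;=\; \int_{\rho_v}^{\rho_l} \sqrt{2\kappa\,\Delta\omega(\rho)}\; d\rho ,
```

where f_0 is the bulk Helmholtz free-energy density, κ the influence parameter, and Δω(ρ) the excess grand-potential density of the homogeneous fluid over its value at liquid-vapor coexistence.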
High-performance computing with quantum processing units
Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...
2017-03-01
The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.
Computational substrates of social value in interpersonal collaboration.
Fareri, Dominic S; Chang, Luke J; Delgado, Mauricio R
2015-05-27
Decisions to engage in collaborative interactions require enduring considerable risk, yet provide the foundation for building and maintaining relationships. Here, we investigate the mechanisms underlying this process and test a computational model of social value to predict collaborative decision making. Twenty-six participants played an iterated trust game and chose to invest more frequently with their friends compared with a confederate or computer despite equal reinforcement rates. This behavior was predicted by our model, which posits that people receive a social value reward signal from reciprocation of collaborative decisions conditional on the closeness of the relationship. This social value signal was associated with increased activity in the ventral striatum and medial prefrontal cortex, which significantly predicted the reward parameters from the social value model. Therefore, we demonstrate that the computation of social value drives collaborative behavior in repeated interactions and provide a mechanistic account of reward circuit function instantiating this process. Copyright © 2015 the authors 0270-6474/15/358170-11$15.00/0.
InteGO2: a web tool for measuring and visualizing gene semantic similarities using Gene Ontology.
Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; Juan, Liran; Jiang, Qinghua; Wang, Yadong; Chen, Jin
2016-08-31
The Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, it is difficult for result interpretation. We present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. InteGO2 is an easy-to-use HTML5 based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface. InteGO2 can be accessed via http://mlg.hit.edu.cn:8089/ .
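As a toy example of one widely used GO-based measure of the kind aggregated by such tools, the sketch below computes the Resnik similarity, the information content (IC) of the most informative common ancestor of two terms. The term names, ancestor sets, and probabilities are entirely hypothetical, and this is not InteGO2's integrative measurement.

```python
import math

def resnik_similarity(term_a, term_b, ancestors, ic):
    """Resnik semantic similarity between two GO terms: the information
    content of their most informative common ancestor.  `ancestors` maps a
    term to the set of its ancestors (including itself); `ic` maps a term to
    -log(p(term)) estimated from annotation frequencies."""
    common = ancestors[term_a] & ancestors[term_b]
    return max((ic[t] for t in common), default=0.0)

# Toy ontology: GO:B and GO:C share the ancestor GO:A.
ancestors = {"GO:B": {"GO:B", "GO:A"}, "GO:C": {"GO:C", "GO:A"}}
ic = {"GO:A": -math.log(0.4), "GO:B": -math.log(0.1), "GO:C": -math.log(0.05)}
print(resnik_similarity("GO:B", "GO:C", ancestors, ic))
```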
Centralized Authorization Using a Direct Service, Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachsmann, A
Authorization is the process of deciding if entity X is allowed to have access to resource Y. Determining the identity of X is the job of the authentication process. One task of authorization in computer networks is to define and determine which user has access to which computers in the network. On Linux, the tendency exists to create a local account for each single user who should be allowed to log on to a computer. This is typically the case because a user not only needs login privileges on a computer but also additional resources like a home directory to actually do some work. Creating a local account on every computer takes care of all this. The problem with this approach is that these local accounts can be inconsistent with each other. The same user name could have a different user ID and/or group ID on different computers. Even more problematic is when two different accounts share the same user ID and group ID on different computers: user joe on computer1 could have user ID 1234 and group ID 56, and user jane on computer2 could have the same user ID 1234 and group ID 56. This is a big security risk in case shared resources like NFS are used. These two different accounts are the same for an NFS server, so these users can wipe out each other's files. The solution to this inconsistency problem is to have only one central, authoritative data source for this kind of information and a means of providing all your computers with access to this central source. This is what a "Directory Service" is. The two directory services most widely used for centralizing authorization data are the Network Information Service (NIS, formerly known as Yellow Pages or YP) and the Lightweight Directory Access Protocol (LDAP).
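The sketch below makes the inconsistency problem concrete with a hypothetical helper that scans passwd-style entries from several hosts and reports UIDs mapped to different user names. It is not part of NIS or LDAP, only an illustration of the joe/jane collision described above.

```python
from collections import defaultdict

def find_uid_conflicts(passwd_by_host):
    """Given {hostname: passwd file text}, report UIDs that map to different
    user names on different hosts -- the inconsistency that a central
    directory service (NIS or LDAP) is meant to eliminate."""
    uid_to_users = defaultdict(set)
    for host, text in passwd_by_host.items():
        for line in text.strip().splitlines():
            name, _, uid, *_ = line.split(":")
            uid_to_users[uid].add((host, name))
    return {uid: owners for uid, owners in uid_to_users.items()
            if len({name for _, name in owners}) > 1}

# Example reproducing the joe/jane collision described above.
passwd = {
    "computer1": "joe:x:1234:56:Joe:/home/joe:/bin/bash",
    "computer2": "jane:x:1234:56:Jane:/home/jane:/bin/bash",
}
print(find_uid_conflicts(passwd))  # UID 1234 owned by joe and jane
```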
García-Grajales, Julián A.; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine
2015-01-01
With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally characterized only by purely mechanistic criteria, functions of quantities such as stress, strain, or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The solvers implemented in Neurite, explicit and implicit, were therefore parallelized using graphics processing units in order to reduce the burden of the simulation costs of large-scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite's mechanical behavior within its surrounding medium was adopted as the link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large-scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted. PMID:25680098
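For orientation, the sketch below shows a single explicit finite-difference step of the passive cable equation, the simplest ingredient of the kind of solver described. The parameter values and units are illustrative, Hodgkin-Huxley currents (used in the active regions of Neurite) are omitted, and this is not the program's actual GPU implementation.

```python
import numpy as np

def cable_step(v, dx, dt, a=1e-4, r_i=100.0, r_m=1e4, c_m=1e-6):
    """One explicit finite-difference step of the passive cable equation
        c_m dV/dt = (a / (2 r_i)) d2V/dx2 - V / r_m
    with sealed (zero-flux) ends.  Illustrative units: cm, s, ohm*cm,
    ohm*cm^2, F/cm^2; active Hodgkin-Huxley currents would replace the leak
    term in the active regions of a neurite."""
    d2v = np.empty_like(v)
    d2v[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    d2v[0] = 2 * (v[1] - v[0]) / dx**2
    d2v[-1] = 2 * (v[-2] - v[-1]) / dx**2
    dvdt = (a / (2 * r_i) * d2v - v / r_m) / c_m
    return v + dt * dvdt

# Example: relaxation of a localized depolarization along a 1000-segment cable.
v = np.zeros(1000)
v[500] = 10.0  # mV above rest
for _ in range(100):
    v = cable_step(v, dx=1e-3, dt=5e-7)  # dt chosen for explicit stability
print(v.max())
```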
A computational model of selection by consequences.
McDowell, J J
2004-05-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior.
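The hyperbolic relation referred to above has the classic Herrnstein form; the short sketch below evaluates it for a few reinforcement rates, with parameter values chosen only for illustration.

```python
import numpy as np

def hyperbolic_response_rate(reinforcement_rate, k, r_e):
    """Hyperbolic relation between response rate B and reinforcement rate r
    of the form B = k * r / (r + r_e), where k is the asymptotic response
    rate and r_e the reinforcement rate at which responding reaches k/2."""
    r = np.asarray(reinforcement_rate, dtype=float)
    return k * r / (r + r_e)

# Example: predicted response rates across a range of RI schedule richnesses.
print(hyperbolic_response_rate([10, 30, 60, 120], k=100.0, r_e=20.0))
```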
Lifetime Reliability Evaluation of Structural Ceramic Parts with the CARES/LIFE Computer Program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.
1993-01-01
The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), Weibull's normal stress averaging method (NSA), or Batdorf's theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating cyclic fatigue parameter estimation and component reliability analysis with proof testing are included.
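For orientation, the two-parameter Weibull description mentioned above gives a failure probability of the form Pf = 1 - exp[-(sigma/sigma_0)^m]; a minimal, hedged sketch follows (not CARES/LIFE code; the subcritical-crack-growth scaling is an illustrative power-law form and all parameter names are assumptions):

import math

def weibull_failure_probability(sigma, sigma_0, m):
    """Two-parameter Weibull probability of failure at applied stress sigma.
    sigma_0: characteristic (scale) strength; m: Weibull modulus (scatter)."""
    return 1.0 - math.exp(-((sigma / sigma_0) ** m))

def time_dependent_reliability(sigma, sigma_0, m, t, t_ref, N):
    """Illustrative power-law subcritical crack growth: the effective stress
    grows with load duration t relative to a reference time t_ref, with N the
    fatigue (SCG) exponent, so reliability decays with time under load."""
    sigma_eff = sigma * (t / t_ref) ** (1.0 / N)
    return 1.0 - weibull_failure_probability(sigma_eff, sigma_0, m)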
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental scale river models, mainly designed to integrate with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence the streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in the BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model, describing the conceptual hydrological framework, the methods used for representing different hydrological processes in the model, and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
43 CFR 44.23 - How does the Department certify payment computations?
Code of Federal Regulations, 2013 CFR
2013-10-01
... Certified Public Accountant, or an independent public accountant, that the statement has been audited in... guidelines that State auditors, independent Certified Public Accountants, or independent public accountants...
43 CFR 44.23 - How does the Department certify payment computations?
Code of Federal Regulations, 2011 CFR
2011-10-01
... Certified Public Accountant, or an independent public accountant, that the statement has been audited in... guidelines that State auditors, independent Certified Public Accountants, or independent public accountants...
43 CFR 44.23 - How does the Department certify payment computations?
Code of Federal Regulations, 2012 CFR
2012-10-01
... Certified Public Accountant, or an independent public accountant, that the statement has been audited in... guidelines that State auditors, independent Certified Public Accountants, or independent public accountants...
43 CFR 44.23 - How does the Department certify payment computations?
Code of Federal Regulations, 2014 CFR
2014-10-01
... Certified Public Accountant, or an independent public accountant, that the statement has been audited in... guidelines that State auditors, independent Certified Public Accountants, or independent public accountants...
Predicting vapor liquid equilibria using density functional theory: A case study of argon
NASA Astrophysics Data System (ADS)
Goel, Himanshu; Ling, Sanliang; Ellis, Breanna Nicole; Taconi, Anna; Slater, Ben; Rai, Neeraj
2018-06-01
Predicting vapor liquid equilibria (VLE) of molecules governed by weak van der Waals (vdW) interactions using the first principles approach is a significant challenge. Due to the poor scaling of the post Hartree-Fock wave function theory with system size/basis functions, the Kohn-Sham density functional theory (DFT) is preferred for systems with a large number of molecules. However, traditional DFT cannot adequately account for medium to long range correlations which are necessary for modeling vdW interactions. Recent developments in DFT such as dispersion corrected models and nonlocal van der Waals functionals have attempted to address this weakness with a varying degree of success. In this work, we predict the VLE of argon and assess the performance of several density functionals and the second order Møller-Plesset perturbation theory (MP2) by determining critical and structural properties via first principles Monte Carlo simulations. PBE-D3, BLYP-D3, and rVV10 functionals were used to compute vapor liquid coexistence curves, while PBE0-D3, M06-2X-D3, and MP2 were used for computing liquid density at a single state point. The performance of the PBE-D3 functional for VLE is superior to other functionals (BLYP-D3 and rVV10). At T = 85 K and P = 1 bar, MP2 performs well for the density and structural features of the first solvation shell in the liquid phase.
Vattikonda, Anirudh; Surampudi, Bapi Raju; Banerjee, Arpan; Deco, Gustavo; Roy, Dipanjan
2016-08-01
Computational modeling of the spontaneous dynamics over the whole brain provides critical insight into the spatiotemporal organization of brain dynamics at multiple resolutions and their alteration to changes in brain structure (e.g. in diseased states, aging, across individuals). Recent experimental evidence further suggests that the adverse effect of lesions is visible on spontaneous dynamics characterized by changes in resting state functional connectivity and its graph theoretical properties (e.g. modularity). These changes originate from altered neural dynamics in individual brain areas that are otherwise poised towards a homeostatic equilibrium to maintain a stable excitatory and inhibitory activity. In this work, we employ a homeostatic inhibitory mechanism, balancing excitation and inhibition in the local brain areas of the entire cortex under neurological impairments like lesions to understand global functional recovery (across brain networks and individuals). Previous computational and empirical studies have demonstrated that the resting state functional connectivity varies primarily due to the location and specific topological characteristics of the lesion. We show that local homeostatic balance provides a functional recovery by re-establishing excitation-inhibition balance in all areas that are affected by lesion. We systematically compare the extent of recovery in the primary hub areas (e.g. default mode network (DMN), medial temporal lobe, medial prefrontal cortex) as well as other sensory areas like primary motor area, supplementary motor area, fronto-parietal and temporo-parietal networks. Our findings suggest that stability and richness similar to the normal brain dynamics at rest are achievable by re-establishment of balance. Copyright © 2016 Elsevier Inc. All rights reserved.
Auditory expectation: the information dynamics of music perception and cognition.
Pearce, Marcus T; Wiggins, Geraint A
2012-10-01
Following in a psychological and musicological tradition beginning with Leonard Meyer, and continuing through David Huron, we present a functional, cognitive account of the phenomenon of expectation in music, grounded in computational, probabilistic modeling. We summarize a range of evidence for this approach, from psychology, neuroscience, musicology, linguistics, and creativity studies, and argue that simulating expectation is an important part of understanding a broad range of human faculties, in music and beyond. Copyright © 2012 Cognitive Science Society, Inc.
Finite element solution of optimal control problems with state-control inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1992-01-01
It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.
Neurodynamic system theory: scope and limits.
Erdi, P
1993-06-01
This paper proposes that neurodynamic system theory may be used to connect structural and functional aspects of neural organization. The paper claims that generalized causal dynamic models are proper tools for describing the self-organizing mechanism of the nervous system. In particular, it is pointed out that ontogeny, development, normal performance, learning, and plasticity, can be treated by coherent concepts and formalism. Taking into account the self-referential character of the brain, autopoiesis, endophysics and hermeneutics are offered as elements of a poststructuralist brain (-mind-computer) theory.
Smarr formula for Lovelock black holes: A Lagrangian approach
NASA Astrophysics Data System (ADS)
Liberati, Stefano; Pacilio, Costantino
2016-04-01
The mass formula for black holes can be formally expressed in terms of a Noether charge surface integral plus a suitable volume integral, for any gravitational theory. The integrals can be constructed as an application of Wald's formalism. We apply this formalism to compute the mass and the Smarr formula for static Lovelock black holes. Finally, we propose a new prescription for Wald's entropy in the case of Lovelock black holes, which takes into account topological contributions to the entropy functional.
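For orientation only (this is the standard general relativity result, not the Lovelock generalization derived in the paper), the familiar Smarr relation for a stationary Kerr-Newman black hole in geometrized units reads

M = 2\,T_H S + 2\,\Omega_H J + \Phi_H Q,

with T_H the Hawking temperature, S the horizon entropy, \Omega_H the horizon angular velocity, J the angular momentum, \Phi_H the electrostatic potential and Q the charge; in Lovelock gravity additional terms tied to the higher-curvature couplings appear, which is what the Lagrangian approach above is designed to handle.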
Nonequilibrium Green's function theory for nonadiabatic effects in quantum electron transport
NASA Astrophysics Data System (ADS)
Kershaw, Vincent F.; Kosov, Daniel S.
2017-12-01
We develop nonequilibrium Green's function-based transport theory, which includes effects of nonadiabatic nuclear motion in the calculation of the electric current in molecular junctions. Our approach is based on the separation of slow and fast time scales in the equations of motion for Green's functions by means of the Wigner representation. Time derivatives with respect to central time serve as a small parameter in the perturbative expansion enabling the computation of nonadiabatic corrections to molecular Green's functions. Consequently, we produce a series of analytic expressions for non-adiabatic electronic Green's functions (up to the second order in the central time derivatives), which depend not solely on the instantaneous molecular geometry but likewise on nuclear velocities and accelerations. An extended formula for electric current is derived which accounts for the non-adiabatic corrections. This theory is concisely illustrated by the calculations on a model molecular junction.
Nonequilibrium Green's function theory for nonadiabatic effects in quantum electron transport.
Kershaw, Vincent F; Kosov, Daniel S
2017-12-14
We develop nonequilibrium Green's function-based transport theory, which includes effects of nonadiabatic nuclear motion in the calculation of the electric current in molecular junctions. Our approach is based on the separation of slow and fast time scales in the equations of motion for Green's functions by means of the Wigner representation. Time derivatives with respect to central time serve as a small parameter in the perturbative expansion enabling the computation of nonadiabatic corrections to molecular Green's functions. Consequently, we produce a series of analytic expressions for non-adiabatic electronic Green's functions (up to the second order in the central time derivatives), which depend not solely on the instantaneous molecular geometry but likewise on nuclear velocities and accelerations. An extended formula for electric current is derived which accounts for the non-adiabatic corrections. This theory is concisely illustrated by the calculations on a model molecular junction.
NASA Astrophysics Data System (ADS)
Claessens, M.; Möller, K.; Thiel, H. G.
1997-07-01
Computational fluid dynamics calculations for high- and low-current arcs in an interrupter of the self-blast type have been performed. The mixing process of the hot PTFE cloud with the cold gas in the pressure chamber is strongly inhomogeneous. The existence of two different species has been taken into account by interpolation of the material functions according to their mass fraction in each grid cell. Depending on the arcing time, fault current and interrupter geometry, blow temperatures of up to 2000 K have been found. The simulation results for a decaying arc immediately before current zero yield a significantly reduced arc cooling at the stagnation point for high blow temperatures.
The insignificant evolution of the richness-mass relation of galaxy clusters
NASA Astrophysics Data System (ADS)
Andreon, S.; Congdon, P.
2014-08-01
We analysed the richness-mass scaling of 23 very massive clusters at 0.15 < z < 0.55 with homogeneously measured weak-lensing masses and richnesses within a fixed aperture of 0.5 Mpc radius. We found that the richness-mass scaling is very tight (the scatter is <0.09 dex with 90% probability) and independent of cluster evolutionary status and morphology. This implies a close association between infall and evolution of dark matter and galaxies in the central region of clusters. We also found that the evolution of the richness-mass intercept is minor at most, and, given the minor mass evolution across the studied redshift range, the richness evolution of individual massive clusters also turns out to be very small. Finally, it was paramount to account for the cluster mass function and the selection function. Ignoring them would lead to larger biases than the (otherwise quoted) errors. Our study benefits from: a) weak-lensing masses instead of proxy-based masses thereby removing the ambiguity between a real trend and one induced by an accounted evolution of the used mass proxy; b) the use of projected masses that simplify the statistical analysis thereby not requiring consideration of the unknown covariance induced by the cluster orientation/triaxiality; c) the use of aperture masses as they are free of the pseudo-evolution of mass definitions anchored to the evolving density of the Universe; d) a proper accounting of the sample selection function and of the Malmquist-like effect induced by the cluster mass function; e) cosmological simulations for the computation of the cluster mass function, its evolution, and the mass growth of each individual cluster.
Faceting for direction-dependent spectral deconvolution
NASA Astrophysics Data System (ADS)
Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.
2018-04-01
The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image plane faceting, that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve for the deconvolution problem (image plane normalization, position-dependent Point Spread Function, etc). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation respectively. A few interesting technical features incorporated in our imager are discussed, including baseline dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.
Prudic, David E.
1989-01-01
Computer models are widely used to simulate groundwater flow for evaluating and managing the groundwater resource of many aquifers, but few are designed to also account for surface flow in streams. A computer program was written for use in the US Geological Survey modular finite difference groundwater flow model to account for the amount of flow in streams and to simulate the interaction between surface streams and groundwater. The new program is called the Streamflow-Routing Package. The Streamflow-Routing Package is not a true surface water flow model, but rather is an accounting program that tracks the flow in one or more streams which interact with groundwater. The program limits the amount of groundwater recharge to the available streamflow. It permits two or more streams to merge into one with flow in the merged stream equal to the sum of the tributary flows. The program also permits diversions from streams. The groundwater flow model with the Streamflow-Routing Package has an advantage over the analytical solution in simulating the interaction between aquifer and stream because it can be used to simulate complex systems that cannot be readily solved analytically. The Streamflow-Routing Package does not include a time function for streamflow but rather streamflow entering the modeled area is assumed to be instantly available to downstream reaches during each time period. This assumption is generally reasonable because of the relatively slow rate of groundwater flow. Another assumption is that leakage between streams and aquifers is instantaneous. This assumption may not be reasonable if the streams and aquifers are separated by a thick unsaturated zone. Documentation of the Streamflow-Routing Package includes data input instructions; flow charts, narratives, and listings of the computer program for each of four modules; and input data sets and printed results for two test problems, and one example problem. (Lantz-PTT)
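To make the bookkeeping concrete, here is a minimal sketch in Python of the kind of accounting described above (purely illustrative; not the actual Streamflow-Routing Package, and the dictionary keys are assumptions):

def route_stream(inflow, reaches):
    """Route flow through a list of stream reaches within one stress period.
    Each reach is a dict with optional 'tributary', 'diversion' and
    'potential_leakage' terms (positive leakage = stream loses to aquifer).
    Returns the downstream outflow and the leakage actually applied per reach."""
    flow, applied = inflow, []
    for reach in reaches:
        flow += reach.get("tributary", 0.0)              # merging streams add up
        flow -= min(reach.get("diversion", 0.0), flow)   # cannot divert more than is there
        leak = reach.get("potential_leakage", 0.0)
        if leak > 0.0:
            leak = min(leak, flow)                       # recharge limited to available streamflow
        flow -= leak
        applied.append(leak)
    return flow, applied

For instance, route_stream(5.0, [{"potential_leakage": 2.0}, {"tributary": 1.0, "diversion": 0.5}]) returns a downstream flow of 3.5 together with the leakage applied in each reach.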
Accounting utility for determining individual usage of production level software systems
NASA Technical Reports Server (NTRS)
Garber, S. C.
1984-01-01
An accounting package was developed which determines the computer resources utilized by a user during the execution of a particular program and updates a file containing accumulated resource totals. The accounting package is divided into two separate programs. The first program determines the total amount of computer resources utilized by a user during the execution of a particular program. The second program uses these totals to update a file containing accumulated totals of computer resources utilized by a user for a particular program. This package is useful to those persons who have several other users continually accessing and running programs from their accounts. The package provides the ability to determine which users are accessing and running specified programs along with their total level of usage.
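A minimal sketch of the two-step bookkeeping described above, using the Unix resource module (illustrative only; the file name, layout and measured quantities are assumptions, not the original package's format):

import json, os, resource

def record_run(user, program, totals_file="usage_totals.json"):
    """Measure the resources used by the current process, then fold them into
    a file of accumulated per-user, per-program totals."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    cpu_seconds = usage.ru_utime + usage.ru_stime
    totals = {}
    if os.path.exists(totals_file):
        with open(totals_file) as f:
            totals = json.load(f)
    key = f"{user}:{program}"
    entry = totals.setdefault(key, {"runs": 0, "cpu_seconds": 0.0})
    entry["runs"] += 1
    entry["cpu_seconds"] += cpu_seconds
    with open(totals_file, "w") as f:
        json.dump(totals, f, indent=2)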
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that the converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite difference sensitivity analysis.
NASA Astrophysics Data System (ADS)
Gillissen, J. J. J.; Boersma, B. J.; Mortensen, P. H.; Andersson, H. I.
2007-03-01
Fiber-induced drag reduction can be studied in great detail by means of direct numerical simulation [J. S. Paschkewitz et al., J. Fluid Mech. 518, 281 (2004)]. To account for the effect of the fibers, the Navier-Stokes equations are supplemented by the fiber stress tensor, which depends on the distribution function of fiber orientation angles. We have computed this function in turbulent channel flow, by solving the Fokker-Planck equation numerically. The results are used to validate an approximate method for calculating fiber stress, in which the second moment of the orientation distribution is solved. Since the moment evolution equations contain higher-order moments, a closure relation is required to obtain as many equations as unknowns. We investigate the performance of the eigenvalue-based optimal fitted closure scheme [J. S. Cintra and C. L. Tucker, J. Rheol. 39, 1095 (1995)]. The closure-predicted stress and flow statistics in two-way coupled simulations are within 10% of the "exact" Fokker-Planck solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Ahren W.; Gruey, Zackery B.; Harding, Lawrence B.
Monte Carlo phase space integration (MCPSI) is used to compute full dimensional and fully anharmonic, but classical, rovibrational partition functions for 22 small- and medium-sized molecules and radicals. Several of the species considered here feature multiple minima and low-frequency nonlocal motions, and efficiently sampling these systems is facilitated using curvilinear (stretch, bend, and torsion) coordinates. The curvilinear coordinate MCPSI method is demonstrated to be applicable to the treatment of fluxional species with complex rovibrational structures and as many as 21 fully coupled rovibrational degrees of freedom. Trends in the computed anharmonicity corrections are discussed. For many systems, rovibrational anharmonicities at elevated temperatures are shown to vary consistently with the number of degrees of freedom and with temperature once rovibrational coupling and torsional anharmonicity are accounted for. Larger corrections are found for systems with complex vibrational structures, such as systems with multiple large-amplitude modes and/or multiple minima.
Nonlinear transport for a dilute gas in steady Couette flow
NASA Astrophysics Data System (ADS)
Garzó, V.; López de Haro, M.
1997-03-01
Transport properties of a dilute gas subjected to arbitrarily large velocity and temperature gradients (steady planar Couette flow) are determined. The results are obtained from the so-called ellipsoidal statistical (ES) kinetic model, which is an extension of the well-known BGK kinetic model to account for the correct Prandtl number. At a hydrodynamic level, the solution is characterized by constant pressure, and linear velocity and parabolic temperature profiles with respect to a scaled variable. The transport coefficients are explicitly evaluated as nonlinear functions of the shear rate. A comparison with previous results derived from a perturbative solution of the Boltzmann equation as well as from other kinetic models is carried out. Such a comparison shows that the ES predictions are in better agreement with the Boltzmann results than those of the other approximations. In addition, the velocity distribution function is also computed. Although the shear rates required for observing non-Newtonian effects are experimentally unrealizable, the conclusions obtained here may be relevant for analyzing computer results.
Continuous Fluorescence Microphotolysis and Correlation Spectroscopy Using 4Pi Microscopy
Arkhipov, Anton; Hüve, Jana; Kahms, Martin; Peters, Reiner; Schulten, Klaus
2007-01-01
Continuous fluorescence microphotolysis (CFM) and fluorescence correlation spectroscopy (FCS) permit measurement of molecular mobility and association reactions in single living cells. CFM and FCS complement each other ideally and can be realized using identical equipment. So far, the spatial resolution of CFM and FCS was restricted by the resolution of the light microscope to the micrometer scale. However, cellular functions generally occur on the nanometer scale. Here, we develop the theoretical and computational framework for CFM and FCS experiments using 4Pi microscopy, which features an axial resolution of ∼100 nm. The framework, taking the actual 4Pi point spread function of the instrument into account, was validated by measurements on model systems, employing 4Pi conditions or normal confocal conditions together with either single- or two-photon excitation. In all cases experimental data could be well fitted by computed curves for expected diffusion coefficients, even when the signal/noise ratio was small due to the small number of fluorophores involved. PMID:17704168
Strauss, G; Winkler, D; Jacobs, S; Trantakis, C; Dietz, A; Bootz, F; Meixensberger, J; Falk, V
2005-07-01
This study examines the advantages and disadvantages of a commercial telemanipulator system (daVinci, Intuitive Surgical, USA) with computer-guided instruments in functional endoscopic sinus surgery (FESS). We performed five different surgical FESS steps on 14 anatomical preparations and compared them with conventional FESS. A total of 140 procedures were examined taking into account the following parameters: degrees of freedom (DOF), duration, learning curve, force feedback, and human-machine interface. Telemanipulatory instruments have more DOF available than conventional instrumentation in FESS. The average time consumed by configuration of the telemanipulator is around 9+/-2 min. Missing force feedback is evaluated mainly as a disadvantage of the telemanipulator. Scaling was evaluated as helpful. The ergonomic concept seems to be better than the conventional solution. Computer-guided instruments showed better results for the available DOF of the instruments. The human-machine interface is more adaptable and variable than in conventional instrumentation. Motion scaling and indexing are characteristics of the telemanipulator concept which are helpful for FESS in our study.
O'Donnell, Cian; Gonçalves, J Tiago; Portera-Cailliau, Carlos; Sejnowski, Terrence J
2017-10-11
A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits.
NASA Astrophysics Data System (ADS)
Sharma, Manu; Resta, Raffaele; Car, Roberto
2004-03-01
We have implemented a modified Car-Parrinello molecular dynamics scheme in which maximally localized Wannier functions, instead of delocalized Bloch orbitals, are used to represent "on the fly" the electronic wavefunction of an insulating system. Within our scheme, we account for the effects of a finite homogeneous field applied to the simulation cell; we then use the ideas of the modern theory of polarization to investigate the system's response. The dielectric response (linear and nonlinear) of a given material is thus directly accessible at a reasonable computational cost. We have performed a thorough study of the behavior of a computational sample of liquid water under the effect of an electric field. We used norm-conserving pseudopotentials, the PBE exchange-correlation potential, and a supercell containing 64 water molecules. Besides providing the static response of the liquid at a given temperature, our simulations yield microscopic insight into features which are not easily measured in experiments, particularly regarding relaxation phenomena.
Otto, A. Ross; Gershman, Samuel J.; Markman, Arthur B.; Daw, Nathaniel D.
2013-01-01
A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. Along these lines, a flexible but computationally expensive model-based reinforcement learning system has been contrasted with a less flexible but more efficient model-free reinforcement learning system. The factors governing which system controls behavior—and under what circumstances—are still unclear. Based on the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrate that having human decision-makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement learning strategy. Further, we show that across trials, people negotiate this tradeoff dynamically as a function of concurrent executive function demands and their choice latencies reflect the computational expenses of the strategy employed. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources. PMID:23558545
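For readers who want the contrast concrete: the "less flexible but more efficient" controller is typically formalized as a cached temporal-difference update, sketched below in Python (a generic textbook form, not the authors' task code; parameter values are assumptions):

from collections import defaultdict

def make_q_table():
    """State -> action -> cached value, all defaulting to zero."""
    return defaultdict(lambda: defaultdict(float))

def model_free_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One model-free (Q-learning style) value update: cheap, cached, no look-ahead."""
    best_next = max(Q[next_state].values(), default=0.0)
    td_error = reward + gamma * best_next - Q[state][action]
    Q[state][action] += alpha * td_error
    return td_error

A model-based controller would instead search a learned transition model at choice time, which is exactly the step that becomes costly when a demanding secondary task taxes executive resources.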
Computed versus measured ion velocity distribution functions in a Hall effect thruster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrigues, L.; CNRS, LAPLACE, F-31062 Toulouse; Mazouffre, S.
2012-06-01
We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady state anomalous mobility profile allows the hybrid model to capture some properties like the time-averaged ion mean velocity. Yet, the model fails to reproduce the time evolution of the ion velocity. This fact reveals a complex underlying physics that necessitates accounting for the electron dynamics over a short time-scale. This study also shows the necessity for electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found to be negligible.
Gonçalves, J Tiago; Portera-Cailliau, Carlos
2017-01-01
A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits. PMID:29019321
An efficient dynamic load balancing algorithm
NASA Astrophysics Data System (ADS)
Lagaros, Nikos D.
2014-01-01
In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take into account sources of randomness and uncertainty. These design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of available computing resources in order to deal with this kind of problem. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the physical parallelization feature of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems the computation of the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves linear speedup factors and almost 100% speedup factor values with reference to the sequential procedure.
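As an illustration of the general idea (not the author's algorithm; a generic master-worker sketch in Python in which idle workers pull the next pending structural analysis, so that analyses of very different durations do not leave processors idle; all names and timings are assumptions):

from multiprocessing import Pool
import time, random

def structural_analysis(design):
    """Stand-in for one finite element analysis; runtime varies per design."""
    time.sleep(random.uniform(0.01, 0.05))
    return sum(design)  # dummy objective value

def _evaluate(indexed_design):
    i, design = indexed_design
    return i, structural_analysis(design)

def evaluate_population(designs, n_workers=4):
    """Dynamically balanced evaluation: each worker pulls the next pending
    design as soon as it finishes its current one (chunksize=1), instead of
    receiving a fixed block of designs up front. Call this from inside an
    'if __name__ == "__main__":' guard on platforms that spawn processes."""
    with Pool(n_workers) as pool:
        results = dict(pool.imap_unordered(_evaluate, enumerate(designs), chunksize=1))
    return [results[i] for i in range(len(designs))]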
Uncertainty propagation of p-boxes using sparse polynomial chaos expansions
NASA Astrophysics Data System (ADS)
Schöbi, Roland; Sudret, Bruno
2017-06-01
In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
Uncertainty propagation of p-boxes using sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch
2017-06-15
In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
Systemic Lisbon Battery: Normative Data for Memory and Attention Assessments.
Gamito, Pedro; Morais, Diogo; Oliveira, Jorge; Ferreira Lopes, Paulo; Picareli, Luís Felipe; Matias, Marcelo; Correia, Sara; Brito, Rodrigo
2016-05-04
Memory and attention are two cognitive domains pivotal for the performance of instrumental activities of daily living (IADLs). The assessment of these functions is still widely carried out with pencil-and-paper tests, which lack ecological validity. The evaluation of cognitive and memory functions while the patients are performing IADLs should contribute to the ecological validity of the evaluation process. The objective of this study is to establish normative data from virtual reality (VR) IADLs designed to activate memory and attention functions. A total of 243 non-clinical participants carried out a paper-and-pencil Mini-Mental State Examination (MMSE) and performed 3 VR activities: art gallery visual matching task, supermarket shopping task, and memory fruit matching game. The data (execution time and errors, and money spent in the case of the supermarket activity) was automatically generated from the app. Outcomes were computed using non-parametric statistics, due to non-normality of distributions. Age, academic qualifications, and computer experience all had significant effects on most measures. Normative values for different levels of these measures were defined. Age, academic qualifications, and computer experience should be taken into account while using our VR-based platform for cognitive assessment purposes. ©Pedro Gamito, Diogo Morais, Jorge Oliveira, Paulo Ferreira Lopes, Luís Felipe Picareli, Marcelo Matias, Sara Correia, Rodrigo Brito. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 04.05.2016.
Guyomarc'h, Pierre; Bruzek, Jaroslav
2011-05-20
Identification in forensic anthropology and the definition of a biological profile in bioarchaeology are essential to each of those fields and use the same methodologies. Sex, age, stature and ancestry can be conclusive or dispensable, depending on the field. The Fordisc(®) 3.0 computer program was developed to aid in the identification of the sex, stature and ancestry of skeletal remains by exploiting the Forensic Data Bank (FDB) and computing discriminant function analyses (DFAs). Although widely used, this tool has been recently criticised, principally when used to determine ancestry. Two sub-samples of individuals of known sex were drawn from French (n=50) and Thai (n=91) osteological collections and used to assess the reliability of sex determination using Fordisc(®) 3.0 with 12 cranial measurements. Comparisons were made using the whole FDB as well as using select groups, taking into account the posterior and typicality probabilities. The results of Fordisc(®) 3.0 vary between 52.2% and 77.8% depending on the options and groups selected. Tests of published discriminant functions and the computation of specific DFA were performed in order to discuss the applicability of this software and, overall, to question the pertinence of the use of DFA and linear distances in sex determination, in light of the huge cranial morphological variability. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Optimal crop selection and water allocation under limited water supply in irrigation
NASA Astrophysics Data System (ADS)
Stange, Peter; Grießbach, Ulrike; Schütze, Niels
2015-04-01
Due to climate change, extreme weather conditions such as droughts may have an increasing impact on irrigated agriculture. To cope with limited water resources in irrigation systems, a new decision support framework is developed which focuses on an integrated management of both irrigation water supply and demand at the same time. For modeling the regional water demand, local (and site-specific) water demand functions are used which are derived from optimized agronomic response on farms scale. To account for climate variability the agronomic response is represented by stochastic crop water production functions (SCWPF). These functions take into account different soil types, crops and stochastically generated climate scenarios. The SCWPF's are used to compute the water demand considering different conditions, e.g., variable and fixed costs. This generic approach enables the consideration of both multiple crops at farm scale as well as of the aggregated response to water pricing at a regional scale for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance IRrigation) project a prototype of a decision support system is developed which helps to evaluate combined water supply and demand management policies.
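As an illustration of how site-specific crop water production functions can drive an allocation decision under a limited supply (a toy sketch only; the SAPHIR framework uses stochastic CWPFs, full cost terms and formal optimization, and every name and number below is an assumption):

def allocate_water(cwpfs, prices, total_water, step=10.0):
    """Greedy allocation of a limited water supply across crops.
    cwpfs  : dict crop -> callable yield(water_mm), assumed concave
    prices : dict crop -> revenue per unit yield
    Water is handed out in increments of `step` mm to whichever crop currently
    gains the most revenue from one more increment."""
    alloc = {crop: 0.0 for crop in cwpfs}
    remaining = total_water
    while remaining >= step:
        gains = {crop: prices[crop] * (f(alloc[crop] + step) - f(alloc[crop]))
                 for crop, f in cwpfs.items()}
        best = max(gains, key=gains.get)
        if gains[best] <= 0.0:
            break
        alloc[best] += step
        remaining -= step
    return alloc

For example, allocate_water({"wheat": lambda w: 4 * w ** 0.5, "maize": lambda w: 6 * w ** 0.5}, {"wheat": 1.0, "maize": 1.0}, total_water=300.0) hands successive 10 mm increments to whichever crop currently offers the larger marginal revenue.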
NASA Astrophysics Data System (ADS)
Zuehlsdorff, T. J.; Isborn, C. M.
2018-01-01
The correct treatment of vibronic effects is vital for the modeling of absorption spectra of many solvated dyes. Vibronic spectra for small dyes in solution can be easily computed within the Franck-Condon approximation using an implicit solvent model. However, implicit solvent models neglect specific solute-solvent interactions on the electronic excited state. On the other hand, a straightforward way to account for solute-solvent interactions and temperature-dependent broadening is by computing vertical excitation energies obtained from an ensemble of solute-solvent conformations. Ensemble approaches usually do not account for vibronic transitions and thus often produce spectral shapes in poor agreement with experiment. We address these shortcomings by combining zero-temperature vibronic fine structure with vertical excitations computed for a room-temperature ensemble of solute-solvent configurations. In this combined approach, all temperature-dependent broadening is treated classically through the sampling of configurations and quantum mechanical vibronic contributions are included as a zero-temperature correction to each vertical transition. In our calculation of the vertical excitations, significant regions of the solvent environment are treated fully quantum mechanically to account for solute-solvent polarization and charge-transfer. For the Franck-Condon calculations, a small amount of frozen explicit solvent is considered in order to capture solvent effects on the vibronic shape function. We test the proposed method by comparing calculated and experimental absorption spectra of Nile red and the green fluorescent protein chromophore in polar and non-polar solvents. For systems with strong solute-solvent interactions, the combined approach yields significant improvements over the ensemble approach. For systems with weak to moderate solute-solvent interactions, both the high-energy vibronic tail and the width of the spectra are in excellent agreement with experiments.
Computational analysis of nonlinearities within dynamics of cable-based driving systems
NASA Astrophysics Data System (ADS)
Anghelache, G. D.; Nastac, S.
2017-08-01
This paper deals with the computational nonlinear dynamics of mechanical systems that contain flexural parts within the actuating scheme, with particular attention to cable-based driving systems. Both functional nonlinearities and the real characteristic of the power supply were assumed, in order to obtain a realistic computer simulation model able to provide reliable results regarding the system dynamics. Transitory and steady regimes during a regular operating cycle were taken into account. The authors present a particular case of a lift system, taken as representative for the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or regular practice in the field. The analysis of the results and the final discussion reveal correlated dynamic aspects of the mechanical parts, the driving system, and the power supply, all of which are potential sources of particular resonances within some transitory phases of the working cycle that can affect structural and functional dynamics. In addition, the influence of the computational hypotheses on both the quantitative and qualitative behaviour of the system is underlined. The most significant outcome of this theoretical and computational research is a unified and practical model, useful for identifying the nonlinear dynamic effects in systems with a cable-based driving scheme and thereby supporting optimization of the operating regime, including dynamic control measures.
A Coupling Function Linking Solar Wind /IMF Variations and Geomagnetic Activity
NASA Astrophysics Data System (ADS)
Lyatsky, W.; Lyatskaya, S.; Tan, A.
2006-12-01
From a theoretical consideration we have obtained expressions for the coupling function linking solar wind and IMF parameters to geomagnetic activity. While deriving these expressions, we took into account (1) a scaling factor due to polar cap expansion while increasing a reconnected magnetic flux in the dayside magnetosphere, and (2) a modified Akasofu function for the reconnected flux for combined IMF Bz and By components. The resulting coupling function may be written as Fα = a·Vsw·B⊥^(1/2)·sin^α(θ/2), where Vsw is the solar wind speed, B⊥ is the magnitude of the IMF vector in the Y-Z plane, θ is the clock angle between the Z axis and the IMF vector in the Y-Z plane, a is a coefficient, and the exponent α, derived from the experimental data, is approximately equal to 2. The Fα function differs primarily by the power of B⊥ from coupling functions proposed earlier. For testing the obtained coupling function, we used solar wind and interplanetary magnetic field data for four years at maximum and minimum solar activity. We computed 2-D contour plots of correlation coefficients for the dependence of geomagnetic activity indices on solar wind parameters for different coupling functions. The obtained diagrams showed a good correspondence to the theoretical coupling function Fα for α ≈ 2. The maximum correlation coefficient for the dependence of the polar cap PC index on the Fα coupling function is significantly higher than that computed for other coupling functions used by researchers, for the same time intervals.
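A small sketch in Python of evaluating this coupling function from the solar wind speed and the IMF GSM Y and Z components (the function name and the default coefficient a are assumptions; the exponent defaults to the value α ≈ 2 quoted above):

import math

def coupling_function(v_sw, b_y, b_z, a=1.0, alpha=2.0):
    """F = a * Vsw * Bperp**0.5 * sin(theta/2)**alpha, with Bperp the IMF
    magnitude in the Y-Z plane and theta the clock angle measured from the
    Z axis toward the IMF vector in that plane."""
    b_perp = math.hypot(b_y, b_z)
    theta = math.atan2(b_y, b_z)   # clock angle: 0 for purely northward IMF
    return a * v_sw * math.sqrt(b_perp) * abs(math.sin(theta / 2.0)) ** alpha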
Arkansas' Curriculum Guide. Competency Based Computerized Accounting.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock. Div. of Vocational, Technical and Adult Education.
This guide contains the essential parts of a total curriculum for a one-year secondary-level course in computerized accounting. Addressed in the individual sections of the guide are the following topics: the complete accounting cycle, computer operations for accounting, computerized accounting and general ledgers, computerized accounts payable,…
NASA Astrophysics Data System (ADS)
Godfrey-Kittle, Andrew; Cafiero, Mauricio
We present density functional theory (DFT) interaction energies for the sandwich and T-shaped conformers of substituted benzene dimers. The DFT functionals studied include TPSS, HCTH407, B3LYP, and X3LYP. We also include Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory calculations (MP2), as well as calculations using a new functional, P3LYP, which includes PBE and HF exchange and LYP correlation. Although DFT methods do not explicitly account for the dispersion interactions important in the benzene-dimer interactions, we find that our new method, P3LYP, as well as HCTH407 and TPSS, match MP2 and CCSD(T) calculations much better than the hybrid methods B3LYP and X3LYP methods do.
Accounting for quality in the measurement of hospital performance: evidence from Costa Rica.
Arocena, Pablo; García-Prado, Ariadna
2007-07-01
This paper provides insights into how Costa Rican public hospitals responded to the pressure for increased efficiency and quality introduced by the reforms carried out over the period 1997-2001. To that purpose we compute a generalized output distance function by means of non-parametric mathematical programming to construct a productivity index, which accounts for productivity changes while controlling for quality of care. Our results show an improvement in hospital performance mainly driven by quality increases. The adoption of management contracts seems to have contributed to such enhancement, more notably for small hospitals. Further, productivity growth is primarily due to technical and scale efficiency change rather than technological change. A number of policy implications are drawn from these results. Copyright (c) 2006 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Van Eester, Dirk
2005-03-01
A semi-analytical method is proposed to evaluate the dielectric response of a plasma to electromagnetic waves in the ion cyclotron domain of frequencies in a D-shaped but axisymmetric toroidal geometry. The actual drift orbit of the particles is accounted for. The method hinges on subdividing the orbit into elementary segments in which the integrations can be performed analytically or by tabulation, and it relies on the local book-keeping of the relation between the toroidal angular momentum and the poloidal flux function. Depending on which variables are chosen, the method allows computation of elementary building blocks for either the wave or the Fokker-Planck equation, but the accent is mainly on the latter. Two types of tangent resonance are distinguished.
NASA Astrophysics Data System (ADS)
Bourlier, C.; Berginc, G.
2004-07-01
In this paper the first- and second-order Kirchhoff approximation is applied to study the backscattering enhancement phenomenon, which appears when the surface rms slope is greater than 0.5. The formulation is reduced to the geometric optics approximation in which the second-order illumination function is taken into account. This study is developed for a two-dimensional (2D) anisotropic stationary rough dielectric surface and for any surface slope and height distributions assumed to be statistically even. Using the Weyl representation of the Green function (which introduces an absolute value over the surface elevation in the phase term), the incoherent scattering coefficient under the stationary phase assumption is expressed as the sum of three terms. The incoherent scattering coefficient then requires the numerical computation of a ten-dimensional integral. To reduce the number of numerical integrations, the geometric optics approximation is applied, which assumes that the correlation between two adjacent points is very strong. The model is then proportional to two surface slope probabilities, for which the slopes would specularly reflect the beams in the double scattering process. In addition, the slope distributions are related with each other by a propagating function, which accounts for the second-order illumination function. The companion paper is devoted to the simulation of this model and comparisons with an 'exact' numerical method.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computation. 206.2 Section 206.2 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT ACCOUNT BENEFITS RATIO... the account benefits ratios for each of the most recent 10 preceding fiscal years; and (2) Certify the...
Diederich, Nick; Bartsch, Thorsten; Kohlstedt, Hermann; Ziegler, Martin
2018-06-19
Memristive systems have gained considerable attention in the field of neuromorphic engineering, because they allow the emulation of synaptic functionality in solid state nano-physical systems. In this study, we show that memristive behavior provides a broad working framework for the phenomenological modelling of cellular synaptic mechanisms. In particular, we seek to understand how closely a memristive system can account for biological realism. The basic characteristics of memristive systems, i.e. voltage and memory behavior, are used to derive a voltage-based plasticity rule. We show that this model is suitable to account for a variety of electrophysiological plasticity data. Furthermore, we incorporate the plasticity model into an all-to-all connecting network scheme. Motivated by the auto-associative CA3 network of the hippocampus, we show that the implemented network allows the discrimination and processing of mnemonic pattern information, i.e. the formation of functional bidirectional connections resulting in the formation of local receptive fields. Since the presented plasticity model can be applied to real memristive devices as well, the presented theoretical framework can support both the design of appropriate memristive devices for neuromorphic computing and the development of complex neuromorphic networks, which account for the specific advantage of memristive devices.
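A minimal sketch of a generic voltage-threshold memristive plasticity rule of the kind described (illustrative only, not the authors' device model; thresholds, rates and bounds are assumptions):

def update_weight(w, v, dt, v_set=0.8, v_reset=-0.8, rate=0.5, w_min=0.0, w_max=1.0):
    """Voltage-based memristive plasticity: the synaptic weight (conductance)
    changes only while the applied voltage exceeds a set/reset threshold, and
    the change depends on the current state (memory) of the device."""
    if v > v_set:
        dw = rate * (v - v_set) * (w_max - w) * dt     # potentiation, saturating at w_max
    elif v < v_reset:
        dw = rate * (v - v_reset) * (w - w_min) * dt   # depression, saturating at w_min
    else:
        dw = 0.0
    return min(max(w + dw, w_min), w_max)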
Correia, Carlos M; Teixeira, Joel
2014-12-01
Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement-noise and aliasing propagation coefficients as a function of the system order, and compare them to classical estimates using least-squares filters. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filters, whereas the noise propagation is around 80%. Contrast improvements of factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.
Demand driven decision support for efficient water resources allocation in irrigated agriculture
NASA Astrophysics Data System (ADS)
Schuetze, Niels; Grießbach, Ulrike; Röhm, Patric; Stange, Peter; Wagner, Michael; Seidel, Sabine; Werisch, Stefan; Barfus, Klemens
2014-05-01
Due to climate change, extreme weather conditions such as longer dry spells in the summer months may have an increasing impact on agriculture in Saxony (Eastern Germany). For this reason, and because of declining rainfall during the growing season, irrigation will become more important in Eastern Germany in the future. To cope with this higher demand for water, a new decision support framework is developed which focuses on an integrated management of both irrigation water supply and demand. For modeling the regional water demand, local (and site-specific) water demand functions are used which are derived from the optimized agronomic response at farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPF), which provide the estimated yield as a function of the amount of irrigation water. These functions take into account the different soil types, crops and stochastically generated climate scenarios. By applying mathematical interpolation and optimization techniques, the SCWPFs are used to compute the water demand considering different constraints, for instance variable and fixed costs or the producer price. This generic approach enables the computation both for multiple crops at farm scale and of the aggregated response to water pricing at regional scale, for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance Irrigation) project, a prototype of a decision support system is developed which helps to evaluate combined water supply and demand management policies for an effective and efficient utilization of water in order to meet future demands. The prototype is implemented as a web-based decision support system and is based on a service-oriented geo-database architecture.
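To make the use of such production functions concrete, the sketch below reads a profit-maximizing irrigation depth off a hypothetical concave crop water production function; the yield curve, prices and costs are invented placeholders, not SAPHIR data.

```python
import numpy as np

def water_demand(yield_fn, crop_price, water_cost, w_max, n=1001):
    """Profit-maximizing irrigation depth for one crop (illustrative sketch)."""
    w = np.linspace(0.0, w_max, n)                    # candidate water applications (mm)
    profit = crop_price * yield_fn(w) - water_cost * w
    return w[np.argmax(profit)]

# Hypothetical concave yield response (t/ha) to irrigation depth (mm).
demand = water_demand(lambda w: 8.0 * (1.0 - np.exp(-w / 150.0)),
                      crop_price=180.0, water_cost=0.35, w_max=400.0)
```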
A Bayesian account of ‘hysteria’
Adams, Rick A.; Brown, Harriet; Pareés, Isabel; Friston, Karl J.
2012-01-01
This article provides a neurobiological account of symptoms that have been called ‘hysterical’, ‘psychogenic’ or ‘medically unexplained’, which we will call functional motor and sensory symptoms. We use a neurobiologically informed model of hierarchical Bayesian inference in the brain to explain functional motor and sensory symptoms in terms of perception and action arising from inference based on prior beliefs and sensory information. This explanation exploits the key balance between prior beliefs and sensory evidence that is mediated by (body focused) attention, symptom expectations, physical and emotional experiences and beliefs about illness. Crucially, this furnishes an explanation at three different levels: (i) underlying neuromodulatory (synaptic) mechanisms; (ii) cognitive and experiential processes (attention and attribution of agency); and (iii) formal computations that underlie perceptual inference (representation of uncertainty or precision). Our explanation involves primary and secondary failures of inference; the primary failure is the (autonomous) emergence of a percept or belief that is held with undue certainty (precision) following top-down attentional modulation of synaptic gain. This belief can constitute a sensory percept (or its absence) or induce movement (or its absence). The secondary failure of inference is when the ensuing percept (and any somatosensory consequences) is falsely inferred to be a symptom to explain why its content was not predicted by the source of attentional modulation. This account accommodates several fundamental observations about functional motor and sensory symptoms, including: (i) their induction and maintenance by attention; (ii) their modification by expectation, prior experience and cultural beliefs and (iii) their involuntary and symptomatic nature. PMID:22641838
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on the 1D velocity models that are routinely assumed when deriving finite-fault slip models. The conventional approach to including known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties with standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.
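A schematic of the iterative structure described above is sketched below, with `forward` and `adjoint_gradient` as hypothetical placeholders for the 3D wave-propagation and adjoint simulations; the authors' actual minimization scheme may differ (for example, conjugate directions rather than plain gradient descent).

```python
import numpy as np

def invert_slip(m0, data, forward, adjoint_gradient, step=0.1, n_iter=20):
    """Adjoint-based gradient descent skeleton (placeholders, not the authors' code).

    m0               : initial slip-velocity model (flattened array)
    data             : observed seismograms
    forward          : m -> synthetic seismograms (3D wave simulation)
    adjoint_gradient : (m, residual) -> misfit gradient w.r.t. m, obtained
                       from a single adjoint simulation per iteration
    """
    m = m0.copy()
    for _ in range(n_iter):
        residual = forward(m) - data            # data misfit
        grad = adjoint_gradient(m, residual)    # one adjoint run, no Green's functions stored
        m -= step * grad / (np.linalg.norm(grad) + 1e-12)
    return m
```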
Reductionism in the comments and autobiographical accounts of prominent psychologists.
Martin, Jack; Dawda, Darek
2002-01-01
Many of the researchers in the field of psychological science use strategies and methods in which human actions and experiences are reduced to behavioral contingencies, statistical regularities, neurophysiological states and processes, and computational functions and models. However, many psychologists talk readily and easily about how their research might assist human agents to solve problems, cope, make decisions, self-regulate, and more generally "make a difference" and "take control." The authors considered informally selected comments by several eminent psychologists, and more formally, 73 autobiographical accounts of prominent psychologists to see what could be learned about the attitudes of these psychologists toward reductionism in their own work and in the field of psychology in general. In interpreting these comments and accounts, the authors posit a gap between many psychologists' contemplation of their work and their actual research practices. The authors also suggest that such a gap may be related to psychologists' educational experiences and their scholarly and professional socialization, as well as to their subdisciplinary attachments and contexts.
Empirically Assessing the Importance of Computer Skills
ERIC Educational Resources Information Center
Baker, William M.
2013-01-01
This research determines which computer skills are important for entry-level accountants, and whether some skills are more important than others. Students participated before and after internships in public accounting. Longitudinal analysis is also provided; responses from 2001 are compared to those from 2008-2009. Responses are also compared to…
34 CFR 80.42 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., and any similar accounting computations of the rate at which a particular group of costs is chargeable... plan, or computation and its supporting records starts from end of the fiscal year (or other accounting... to any pertinent books, documents, papers, or other records of grantees and subgrantees which are...
User Accounts | High-Performance Computing | NREL
Learn how to request an NREL HPC user account. To request an HPC account, complete the request form, which is provided using DocuSign. See information on user account policies and account passwords, including first-time login and forgotten passwords.
Investigation of the mechanical behaviour of the foot skin.
Fontanella, C G; Carniel, E L; Forestiero, A; Natali, A N
2014-11-01
The aim of this work was to provide computational tools for the characterization of the actual mechanical behaviour of foot skin, accounting for results from experimental testing and histological investigation. Such results show the typical features of skin mechanics, such as anisotropic configuration, almost incompressible behaviour, and material and geometrical nonlinearity. The anisotropic behaviour is mainly determined by the distribution of collagen fibres along specific directions, usually identified as cleavage lines. To evaluate the biomechanical response of foot skin, a refined numerical model of the foot is developed. The overall mechanical behaviour of the skin is interpreted by a fibre-reinforced hyperelastic constitutive model, and the orientation of the cleavage lines is implemented by a specific procedure. Numerical analyses that interpret typical loading conditions of the foot are performed. The influence of fibre orientation and distribution on skin mechanics is also outlined by comparison with results from an isotropic scheme. A specific constitutive formulation is provided to characterize the mechanical behaviour of foot skin. The formulation is applied within a numerical model of the foot to investigate skin functionality during typical foot movements. Numerical analyses developed accounting for the actual anisotropic configuration of the skin show lower maximum principal stress fields than results from isotropic analyses. The developed computational models provide reliable tools for the investigation of foot tissue functionality. Furthermore, the comparison between numerical results from anisotropic and isotropic models shows the optimal configuration of foot skin. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
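For orientation, one widely used fibre-reinforced hyperelastic strain-energy form (a Holzapfel-Gasser-Ogden type model) is reproduced below; the authors' specific constitutive formulation for foot skin may differ in detail.

```latex
\Psi = \frac{c}{2}\left(\bar{I}_1 - 3\right)
     + \sum_{i=4,6} \frac{k_1}{2 k_2}\left\{\exp\!\left[k_2\left(\bar{I}_i - 1\right)^2\right] - 1\right\}
     + \frac{\kappa}{2}\left(J - 1\right)^2
```

Here $c$, $k_1$, $k_2$ and $\kappa$ are material parameters, $\bar{I}_1$ is the isochoric first invariant, $\bar{I}_4$ and $\bar{I}_6$ are the squared stretches along the two fibre (cleavage-line) directions, and the last term enforces near-incompressibility.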
NASA Astrophysics Data System (ADS)
Salewski, M.; Geiger, B.; Jacobsen, A. S.; Abramovic, I.; Korsholm, S. B.; Leipold, F.; Madsen, B.; Madsen, J.; McDermott, R. M.; Moseev, D.; Nielsen, S. K.; Nocente, M.; Rasmussen, J.; Stejner, M.; Weiland, M.; The EUROfusion MST1 Team; The ASDEX Upgrade Team
2018-03-01
We measure the deuterium density, the parallel drift velocity, and parallel and perpendicular temperatures ($T_\parallel$, $T_\perp$) in non-Maxwellian plasmas at ASDEX Upgrade. This is done by taking moments of the ion velocity distribution function measured by tomographic inversion of five simultaneously acquired spectra of $D_\alpha$ light. Alternatively, we fit the spectra using a bi-Maxwellian distribution function. The measured kinetic temperatures ($T_\parallel = 9$ keV, $T_\perp = 11$ keV) reveal the anisotropy of the plasma and are substantially higher than the measured boron temperature (7 keV). The Maxwellian deuterium temperature computed with TRANSP (6 keV) is not uniquely measurable due to the fast ions. Nevertheless, simulated kinetic temperatures accounting for fast ions based on TRANSP ($T_\parallel = 8.3$ keV, $T_\perp = 10.4$ keV) are in excellent agreement with the measurements. Similarly, the Maxwellian deuterium drift velocity computed with TRANSP (300 km s$^{-1}$) is not uniquely measurable, but the simulated kinetic drift velocity accounting for fast ions agrees with the measurements (400 km s$^{-1}$) and is substantially larger than the measured boron drift velocity (270 km s$^{-1}$). We further find that ion cyclotron resonance heating elevates $T_\parallel$ and $T_\perp$ each by 2 keV without evidence for preferential heating in the $D_\alpha$ spectra. Lastly, we derive an expression for the 1D projection of an arbitrarily drifting bi-Maxwellian onto a diagnostic line of sight.
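For reference, a drifting bi-Maxwellian of the kind fitted to the spectra can be written in the standard form below (the paper's line-of-sight projection follows from it); $v_d$ is the parallel drift velocity and $n$ the density.

```latex
f(v_\parallel, v_\perp) = n \left(\frac{m}{2\pi k T_\parallel}\right)^{1/2} \frac{m}{2\pi k T_\perp}\,
\exp\!\left[-\frac{m\,(v_\parallel - v_d)^2}{2 k T_\parallel} - \frac{m\, v_\perp^2}{2 k T_\perp}\right]
```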
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.
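As a minimal illustration of the kind of estimator such indirect QM/MM schemes build on, the sketch below implements the basic Zwanzig (exponential averaging) free energy perturbation formula; it is not the QM-NBB or nonequilibrium-work implementation itself.

```python
import numpy as np

def fep_delta_A(dU, temperature=300.0):
    """Zwanzig free energy perturbation estimate (minimal sketch).

    dU : array of potential-energy differences U_target - U_reference (kcal/mol)
         evaluated on configurations sampled from the reference state.
    Returns the estimated free energy difference in kcal/mol.
    """
    kB = 0.0019872041                      # Boltzmann constant, kcal/(mol K)
    beta = 1.0 / (kB * temperature)
    return -np.log(np.mean(np.exp(-beta * dU))) / beta
```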
A computational neural model of goal-directed utterance selection.
Klein, Michael; Kamp, Hans; Palm, Guenther; Doya, Kenji
2010-06-01
It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, or to make them support our cause, or adopt our point of view, etc. However, thus far a computational foundation for this view on language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis of these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address and what to say. Copyright 2010 Elsevier Ltd. All rights reserved.
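For readers unfamiliar with the underlying machinery, a generic tabular value-iteration sketch is given below; it illustrates the kind of value function used to assess the desirability of states, but it is not the authors' coupled internal-model architecture.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Tabular value iteration for a small MDP (generic sketch).

    P : transition probabilities, shape (n_actions, n_states, n_states)
    R : expected immediate reward, shape (n_actions, n_states)
    Returns the optimal value function and a greedy policy.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V                # action values, shape (n_actions, n_states)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```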
DOE Office of Scientific and Technical Information (OSTI.GOV)
Versino, Daniele; Bronkhorst, Curt Allan
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high-density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
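A minimal sketch of the basic ingredient of such a treatment, fitting a function at Chebyshev nodes and evaluating the resulting series cheaply, is shown below; the example density and interval are invented placeholders, not the published algorithm.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_approximation(f, a, b, degree=16):
    """Fit a Chebyshev series to f on [a, b] at Chebyshev points (sketch)."""
    x = C.chebpts2(degree + 1)                     # nodes on [-1, 1]
    y = f(0.5 * (b - a) * (x + 1.0) + a)           # map nodes to [a, b] and sample
    coeffs = C.chebfit(x, y, degree)
    return lambda p: C.chebval(2.0 * (p - a) / (b - a) - 1.0, coeffs)

# Example: approximate a hypothetical log-normal-like nucleation-pressure density.
pdf = chebyshev_approximation(lambda p: np.exp(-(np.log(p / 2.0)) ** 2) / p, 0.5, 10.0)
```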
Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-07
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock exchange. The orbitals are expanded in Ahlrichs-type valence-double zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods and reach the triple-zeta AO basis set second-order perturbation theory (MP2/TZ) level at a tiny fraction of the computational effort. Periodic calculations conducted for molecular crystals to test structures (including cell volumes) and sublimation enthalpies indicate very good accuracy competitive with computationally more involved plane-wave based calculations. PBEh-3c can be applied routinely to several hundreds of atoms on a single processor and it is suggested as a robust "high-speed" computational tool in theoretical chemistry and physics.
Torres, Edmanuel; DiLabio, Gino A
2013-08-13
Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
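As a small illustration of the final fitting step, the sketch below fits 12-6 Lennard-Jones parameters to a handful of reference interaction energies; the separations and energies are invented placeholders rather than the CCSD(T) or DCP data used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Hypothetical reference data: pair separations (Angstrom) and
# dispersion-corrected interaction energies (kcal/mol).
r_ref = np.array([3.4, 3.8, 4.2, 4.6, 5.0, 6.0])
e_ref = np.array([0.05, -0.28, -0.24, -0.16, -0.10, -0.04])

(eps_fit, sigma_fit), _ = curve_fit(lj_energy, r_ref, e_ref, p0=[0.2, 3.6])
```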
Cyber Contingency Analysis version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contingency-analysis-based approach for quantifying and examining the resiliency of a cyber system with respect to confidentiality, integrity, and availability. A graph representing an organization's cyber system and related resources is used for the availability contingency analysis. The mission-critical paths associated with an organization are used to determine the consequences of a potential contingency. A node (or combination of nodes) is removed from the graph to analyze a particular contingency. The values of all mission-critical paths that are disrupted by that contingency are used to quantify its severity. A total severity score can be calculated based on the complete list of all these contingencies. A simple N-1 analysis can be done in which only one node is removed at a time for the analysis. We can also compute an N-k analysis, where k is the number of nodes removed simultaneously for analysis. A contingency risk score can also be computed, which takes the probability of the contingencies into account. In addition to availability, we can also quantify confidentiality and integrity scores for the system. These treat user accounts as potential contingencies. The number (and type) of files that an account can read is used to compute the confidentiality score. The number (and type) of files that an account can write to is used to compute the integrity score. As with availability analysis, we can use this information to compute total severity scores with regard to confidentiality and integrity. We can also take probability into account to compute associated risk scores.
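A minimal sketch of the N-1 availability analysis described above, using networkx; the graph, the mission-critical paths and their values would come from the organization's own model and are treated here as user-supplied inputs.

```python
import networkx as nx

def n1_severity(G, critical_paths):
    """Score each single-node contingency by the value of disrupted paths (sketch).

    G              : networkx graph of the cyber system and resources
    critical_paths : list of (source, target, value) mission-critical paths
    """
    scores = {}
    for node in G.nodes:
        H = G.copy()
        H.remove_node(node)                 # simulate loss of this node
        lost = 0.0
        for src, dst, value in critical_paths:
            if node in (src, dst) or not nx.has_path(H, src, dst):
                lost += value               # this path is disrupted by the contingency
        scores[node] = lost
    return scores
```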
Toutounji, Hazem; Pipa, Gordon
2014-01-01
It is a long-established fact that neuronal plasticity occupies the central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli. However, these stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widely spread forms of neuronal plasticity, that is, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations, such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings. PMID:24651447
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
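Schematically, and in standard large-deviation notation rather than the paper's exact symbols, the tree-order construction can be summarized as follows: the cumulant generating function $\varphi(\lambda) = \log\langle e^{\lambda\rho}\rangle$ is obtained as the Legendre transform of a function $\Psi(\rho)$ built from the initial moments, and the density PDF follows by an inverse Laplace transform.

```latex
\varphi(\lambda) = \lambda\,\rho^{*} - \Psi(\rho^{*}),
\qquad \lambda = \left.\frac{\partial \Psi}{\partial \rho}\right|_{\rho^{*}},
\qquad
\mathcal{P}(\rho) = \int_{-i\infty}^{+i\infty} \frac{\mathrm{d}\lambda}{2\pi i}\;
e^{-\lambda\rho + \varphi(\lambda)}
```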
Cumming, Bruce G.
2016-01-01
In order to extract retinal disparity from a visual scene, the brain must match corresponding points in the left and right retinae. This computationally demanding task is known as the stereo correspondence problem. The initial stage of the solution to the correspondence problem is generally thought to consist of a correlation-based computation. However, recent work by Doi et al suggests that human observers can see depth in a class of stimuli where the mean binocular correlation is 0 (half-matched random dot stereograms). Half-matched random dot stereograms are made up of an equal number of correlated and anticorrelated dots, and the binocular energy model—a well-known model of V1 binocular complex cells—fails to signal disparity here. This has led to the proposition that a second, match-based computation must be extracting disparity in these stimuli. Here we show that a straightforward modification to the binocular energy model—adding a point output nonlinearity—is by itself sufficient to produce cells that are disparity-tuned to half-matched random dot stereograms. We then show that a simple decision model using this single mechanism can reproduce psychometric functions generated by human observers, including reduced performance to large disparities and rapidly updating dot patterns. The model makes predictions about how performance should change with dot size in half-matched stereograms and temporal alternation in correlation, which we test in human observers. We conclude that a single correlation-based computation, based directly on already-known properties of V1 neurons, can account for the literature on mixed correlation random dot stereograms. PMID:27196696
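A minimal sketch of the modification described, a quadrature-pair binocular energy computation followed by a point output nonlinearity, is given below; the receptive fields and the exponent are illustrative assumptions, and disparity tuning arises from the offset between left- and right-eye fields.

```python
import numpy as np

def energy_response(left, right, fL_even, fL_odd, fR_even, fR_odd, expansive=2.0):
    """Binocular energy model with a point output nonlinearity (sketch).

    left, right            : image patches seen by the two eyes (same shape)
    fL_*, fR_*             : quadrature-pair receptive fields for each eye;
                             a shift between left and right fields sets the
                             cell's preferred disparity
    expansive              : exponent of the added output nonlinearity
    """
    even = np.sum(fL_even * left) + np.sum(fR_even * right)   # binocular simple-cell sums
    odd = np.sum(fL_odd * left) + np.sum(fR_odd * right)
    energy = even ** 2 + odd ** 2                              # standard energy model
    return energy ** expansive                                 # point output nonlinearity
```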
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between two SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair, with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
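The covariance trick mentioned above can be sketched as follows: an isotropic exponential covariance applied to a gridded field by convolution in the Fourier domain. The periodic-boundary assumption, grid spacing and parameters are placeholders for illustration.

```python
import numpy as np

def apply_exp_covariance(field, dx, sigma2, corr_len):
    """Apply an exponential covariance operator to a gridded field via FFT (sketch).

    field    : 2-D array (e.g. residual phase), assumed periodic for this sketch
    dx       : pixel size (same units as corr_len)
    sigma2   : covariance amplitude (variance)
    corr_len : e-folding correlation length
    """
    ny, nx = field.shape
    y = dx * np.minimum(np.arange(ny), ny - np.arange(ny))   # wrapped distances
    x = dx * np.minimum(np.arange(nx), nx - np.arange(nx))
    r = np.hypot(*np.meshgrid(x, y))
    kernel = sigma2 * np.exp(-r / corr_len)                  # exponential covariance
    return np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel)).real
```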
Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.
Fleming, Stephen M; Daw, Nathaniel D
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Holland, C.; Brodie, I.
1985-01-01
A test stand has been set up to measure the current fluctuation noise properties of B- and M-type dispenser cathodes in a typical TWT gun structure. Noise techniques were used to determine the work function distribution on the cathode surfaces. Significant differences between the B and M types and significant changes in the work function distribution during activation and life are found. In turn, knowledge of the expected work function can be used to accurately determine the cathode operating temperatures in a TWT structure. Noise measurements also demonstrate more sensitivity to space charge effects than the Miram method. Full automation of the measurements and computations is now required to speed up data acquisition and reduction. The complete set of equations for the space-charge-limited diode was programmed so that, given four of the five measurable variables (J, J_0, T, D, and V), the fifth could be computed. Using this program, we estimated that an rms fluctuation in the diode spacing d in the frequency range of 145 Hz to about 20 kHz of only about 10^-5 A would account for the observed noise in a space-charge-limited diode with 1 mm spacing.
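For context, the simplest form of the space-charge-limited diode relation (the Child-Langmuir law) is sketched below; the program described in the abstract solves the complete set of diode equations, including temperature and emission effects, which this illustration omits.

```python
import numpy as np

def child_langmuir_current_density(V, d):
    """Space-charge-limited current density J (A/m^2) for a planar diode.

    V : anode-cathode voltage (V); d : gap spacing (m).
    Cold-cathode Child-Langmuir form only; the full diode equations also
    account for emitter temperature and emission-limited operation.
    """
    eps0, e, m = 8.854e-12, 1.602e-19, 9.109e-31
    return (4.0 * eps0 / 9.0) * np.sqrt(2.0 * e / m) * V ** 1.5 / d ** 2
```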
NASA Astrophysics Data System (ADS)
Andrinopoulos, Lampros; Hine, Nicholas; Haynes, Peter; Mostofi, Arash
2010-03-01
The placement of organic molecules such as CuPc (copper phthalocyanine) on wurtzite ZnO (zinc oxide) charged surfaces has been proposed as a way of creating photovoltaic solar cells [G.D. Sharma et al., Solar Energy Materials & Solar Cells 90, 933 (2006)]; optimising their performance may be aided by computational simulation. Electronic structure calculations provide high accuracy at modest computational cost, but two challenges are encountered for such layered systems. First, the system size is at or beyond the limit of traditional cubic-scaling Density Functional Theory (DFT). Second, traditional exchange-correlation functionals do not account for van der Waals (vdW) interactions, crucial for determining the structure of weakly bonded systems. We present an implementation of recently developed approaches [P.L. Silvestrelli, P.R.L. 100, 102 (2008)] to include vdW in DFT within ONETEP [C.-K. Skylaris, P.D. Haynes, A.A. Mostofi and M.C. Payne, J.C.P. 122, 084119 (2005)], a linear-scaling package for performing DFT calculations using a basis of localised functions. We have applied this methodology to simple planar organic molecules, such as benzene and pentacene, on ZnO surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns to 10000 ns range) or for predictions concerning EMP excitation of long TLs (of the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
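The core of the recursion-based convolution is easy to state once the kernel has been fit by complex exponentials: each exponential term carries its own one-sample memory, so the cost per step is proportional to the number of terms rather than to the length of the history. A minimal sketch with placeholder coefficients follows.

```python
import numpy as np

def recursive_convolution(x, a, b, dt):
    """Convolve x(t) with k(t) = sum_i a_i * exp(-b_i * t) by recursion (sketch).

    x      : sampled input signal
    a, b   : complex exponential-fit coefficients (placeholders here)
    dt     : time step
    """
    a, b = np.asarray(a, dtype=complex), np.asarray(b, dtype=complex)
    state = np.zeros_like(a)
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        state = state * np.exp(-b * dt) + a * xn * dt   # per-term one-step recursion
        y[n] = state.sum().real
    return y
```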
Two-Flux and Green's Function Method for Transient Radiative Transfer in a Semi-Transparent Layer
NASA Technical Reports Server (NTRS)
Siegel, Robert
1995-01-01
A method using a Green's function is developed for computing transient temperatures in a semitransparent layer by using the two-flux method coupled with the transient energy equation. Each boundary of the layer is exposed to a hot or cold radiative environment, and is heated or cooled by convection. The layer refractive index is larger than one, and the effect of internal reflections is included with the boundaries assumed diffuse. The analysis accounts for internal emission, absorption, heat conduction, and isotropic scattering. Spectrally dependent radiative properties are included, and transient results are given to illustrate two-band spectral behavior with optically thin and thick bands. Transient results using the present Green's function method are verified for a gray layer by comparison with a finite difference solution of the exact radiative transfer equations; excellent agreement is obtained. The present method requires only moderate computing times and incorporates isotropic scattering without additional complexity. Typical temperature distributions are given to illustrate application of the method by examining the effect of strong radiative heating on one side of a layer with convective cooling on the other side, and the interaction of strong convective heating with radiative cooling from the layer interior.
A Single Camera Motion Capture System for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Okada, Ryuzo; Stenger, Björn
This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
Time-dependent reliability analysis of ceramic engine components
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
1993-01-01
The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
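For reference, the two-parameter Weibull cumulative distribution function used to characterize strength scatter has the standard form below, with $\sigma_0$ the characteristic (scale) strength and $m$ the Weibull modulus:

```latex
P_f(\sigma) = 1 - \exp\!\left[-\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]
```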
Predicting broadband noise from a stator vane of a gas turbine engine
NASA Technical Reports Server (NTRS)
Hanson, Donald B. (Inventor)
2002-01-01
A computer-implemented model of fan section of a gas turbine engine accounts for the turbulence in the gas flow emanating from the rotor assembly and impinging upon an inlet to the stator vane cascade. The model allows for user-input variations in the sweep and/or lean angles for the stator vanes. The model determines the resulting acoustic response of the fan section as a function of the turbulence and the lean and/or sweep angles of the vanes. The model may be embodied in software that is rapidly executed in a computer. This way, an optimum arrangement in terms of fan noise reduction is quickly determined for the stator vane lean and sweep physical positioning in the fan section of a gas turbine engine.
NASA Technical Reports Server (NTRS)
Mogilevsky, M.
1973-01-01
The Category A computer systems at KSC (A1 and A2) which perform scientific and business/administrative operations are described. This data division is responsible for scientific requirements supporting Saturn, Atlas/Centaur, Titan/Centaur, Titan III, and Delta vehicles, and includes realtime functions, Apollo-Soyuz Test Project (ASTP), and the Space Shuttle. The work is performed chiefly on the GEL-635 (A1) system located in the Central Instrumentation Facility (CIF). The A1 system can perform computations and process data in three modes: (1) real-time critical mode; (2) real-time batch mode; and (3) batch mode. The Division's IBM-360/50 (A2) system, also at the CIF, performs business/administrative data processing such as personnel, procurement, reliability, financial management and payroll, real-time inventory management, GSE accounting, preventive maintenance, and integrated launch vehicle modification status.
Influence of ionotropic receptor location on their dynamics at glutamatergic synapses.
Allam, Sushmita L; Bouteiller, Jean-Marie C; Hu, Eric; Greget, Renaud; Ambert, Nicolas; Bischoff, Serge; Baudry, Michel; Berger, Theodore W
2012-01-01
In this paper we study the effects of the location of ionotropic receptors, especially AMPA and NMDA receptors, on their function at excitatory glutamatergic synapses. Because few computational models allow evaluation of the influence of receptor location on state transitions and receptor dynamics, we present an elaborate computational model of a glutamatergic synapse that takes into account detailed parametric models of ionotropic receptors along with glutamate diffusion within the synaptic cleft. Our simulation results underscore the importance of the widespread distribution of AMPA receptors, which is required to avoid massive desensitization of these receptors following a single glutamate release event, while NMDA receptor location is potentially optimal relative to the glutamate release site, thus emphasizing the contribution of location-dependent effects of the two major ionotropic receptors to synaptic efficacy.
Advanced imaging in COPD: insights into pulmonary pathophysiology
Milne, Stephen
2014-01-01
Chronic obstructive pulmonary disease (COPD) involves a complex interaction of structural and functional abnormalities. The two have long been studied in isolation. However, advanced imaging techniques allow us to simultaneously assess pathological processes and their physiological consequences. This review gives a comprehensive account of the various advanced imaging modalities used to study COPD, including computed tomography (CT), magnetic resonance imaging (MRI), and the nuclear medicine techniques positron emission tomography (PET) and single-photon emission computed tomography (SPECT). Some more recent developments in imaging technology, including micro-CT, synchrotron imaging, optical coherence tomography (OCT) and electrical impedance tomography (EIT), are also described. The authors identify the pathophysiological insights gained from these techniques, and speculate on the future role of advanced imaging in both clinical and research settings. PMID:25478198
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2016-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method that allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: The capability of different misfit functionals to image wave speed anomalies and source distribution. Possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
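As a small illustration of the starting point of such methods, the sketch below forms an inter-station correlation function from two noise records via the FFT; preprocessing steps such as windowing and spectral whitening are omitted, and the variable names are placeholders.

```python
import numpy as np

def noise_correlation(u1, u2, nlag):
    """Cross-correlate two ambient-noise records via FFT (minimal sketch).

    u1, u2 : equal-length traces recorded at two receivers
    nlag   : number of samples of lag to keep on each side of zero
    """
    n = len(u1)
    U1, U2 = np.fft.rfft(u1, 2 * n), np.fft.rfft(u2, 2 * n)
    cc = np.fft.irfft(U1 * np.conj(U2))                   # circular cross-correlation
    return np.concatenate((cc[-nlag:], cc[:nlag + 1]))    # negative to positive lags
```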
26 CFR 1.6655-6 - Methods of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of accounting method. Corporation ABC, a calendar year taxpayer, uses an accrual method of accounting... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Methods of accounting. 1.6655-6 Section 1.6655... Methods of accounting. (a) In general. In computing any required installment, a corporation must use the...
3D-Digital soil property mapping by geoadditive models
NASA Astrophysics Data System (ADS)
Papritz, Andreas
2016-04-01
In many digital soil mapping (DSM) applications, soil properties must be predicted not only for a single but for multiple soil depth intervals. In the GlobalSoilMap project, as an example, predictions are computed for the 0-5 cm, 5-15 cm, 15-30 cm, 30-60 cm, 60-100 cm, 100-200 cm depth intervals (Arrouays et al., 2014). Legacy soil data are often used for DSM. It is common for such datasets that soil properties were measured for soil horizons or for layers at varying soil depth and with non-constant thickness (support). This poses problems for DSM: One strategy is to harmonize the soil data to common depth prior to the analyses (e.g. Bishop et al., 1999) and conduct the statistical analyses for each depth interval independently. The disadvantage of this approach is that the predictions for different depths are computed independently from each other so that the predicted depth profiles may be unrealistic. Furthermore, the error induced by the harmonization to common depth is ignored in this approach (Orton et al. 2016). A better strategy is therefore to process all soil data jointly without prior harmonization by a 3D-analysis that takes soil depth and geographical position explicitly into account. Usually, the non-constant support of the data is then ignored, but Orton et al. (2016) presented recently a geostatistical approach that accounts for non-constant support of soil data and relies on restricted maximum likelihood estimation (REML) of a linear geostatistical model with a separable, heteroscedastic, zonal anisotropic auto-covariance function and area-to-point kriging (Kyriakidis, 2004.) Although this model is theoretically coherent and elegant, estimating its many parameters by REML and selecting covariates for the spatial mean function is a formidable task. A simpler approach might be to use geoadditive models (Kammann and Wand, 2003; Wand, 2003) for 3D-analyses of soil data. geoAM extend the scope of the linear model with spatially correlated errors to account for nonlinear effects of covariates by fitting componentwise smooth, nonlinear functions to the covariates (additive terms). REML estimation of model parameters and computing best linear unbiased predictions (BLUP) builds in the geoAM framework on the fact that both geostatistical and additive models can be parametrized as linear mixed models Wand, 2003. For 3D-DSM analysis of soil data, it is natural to model depth profiles of soil properties by additive terms of soil depth. Including interactions between these additive terms and covariates of the spatial mean function allows to model spatially varying depth profiles. Furthermore, with suitable choice of the basis functions of the additive term (e.g. polynomial regression splines), non-constant support of the soil data can be taken into account. Finally, boosting (Bühlmann and Hothorn, 2007) can be used for selecting covariates for the spatial mean function. The presentation will detail the geoAM approach and present an example of geoAM for 3D-analysis of legacy soil data. Arrouays, D., McBratney, A. B., Minasny, B., Hempel, J. W., Heuvelink, G. B. M., MacMillan, R. A., Hartemink, A. E., Lagacherie, P., and McKenzie, N. J. (2014). The GlobalSoilMap project specifications. In GlobalSoilMap Basis of the global spatial soil information system, pages 9-12. CRC Press. Bishop, T., McBratney, A., and Laslett, G. (1999). Modelling soil attribute depth functions with equal-area quadratic smoothing splines. Geoderma, 91(1-2), 27-45. Bühlmann, P. and Hothorn, T. (2007). 
Boosting algorithms: Regularization, prediction and model fitting. Statistical Science, 22(4), 477-505. Kammann, E. E. and Wand, M. P. (2003). Geoadditive models. Journal of the Royal Statistical Society. Series C: Applied Statistics, 52(1), 1-18. Kyriakidis, P. (2004). A geostatistical framework for area-to-point spatial interpolation. Geographical Analysis, 36(3), 259-289. Orton, T., Pringle, M., and Bishop, T. (2016). A one-step approach for modelling and mapping soil properties based on profile data sampled over varying depth intervals. Geoderma, 262, 174-186. Wand, M. P. (2003). Smoothing and mixed models. Computational Statistics, 18(2), 223-249.
Sparse networks of directly coupled, polymorphic, and functional side chains in allosteric proteins.
Soltan Ghoraie, Laleh; Burkowski, Forbes; Zhu, Mu
2015-03-01
Recent studies have highlighted the role of coupled side-chain fluctuations alone in the allosteric behavior of proteins. Moreover, examination of X-ray crystallography data has recently revealed new information about the prevalence of alternate side-chain conformations (conformational polymorphism), and attempts have been made to uncover the hidden alternate conformations from X-ray data. Hence, new computational approaches are required that consider the polymorphic nature of the side chains, and incorporate the effects of this phenomenon in the study of information transmission and functional interactions of residues in a molecule. These studies can provide a more accurate understanding of the allosteric behavior. In this article, we first present a novel approach to generate an ensemble of conformations and an efficient computational method to extract direct couplings of side chains in allosteric proteins, and provide sparse network representations of the couplings. We take the side-chain conformational polymorphism into account, and show that by studying the intrinsic dynamics of an inactive structure, we are able to construct a network of functionally crucial residues. Second, we show that the proposed method is capable of providing a magnified view of the coupled and conformationally polymorphic residues. This model reveals couplings between the alternate conformations of a coupled residue pair. To the best of our knowledge, this is the first computational method for extracting networks of side chains' alternate conformations. Such networks help in providing a detailed image of side-chain dynamics in functionally important and conformationally polymorphic sites, such as binding and/or allosteric sites. © 2014 Wiley Periodicals, Inc.
Written and Computer-Mediated Accounting Communication Skills: An Employer Perspective
ERIC Educational Resources Information Center
Jones, Christopher G.
2011-01-01
Communication skills are a fundamental personal competency for a successful career in accounting. What is not so obvious is the specific written communication skill set employers look for and the extent those skills are computer mediated. Using survey research, this article explores the particular skills employers desire and their satisfaction…
Code of Federal Regulations, 2011 CFR
2011-04-01
..., (E) Accounting, (F) Actuarial science, (G) Performing arts, or (H) Consulting. Substantially all of... the client. The taxpayer does not, however, provide the client with additional computer programming... processing systems. The client will then order computers and other data processing equipment through the...
Code of Federal Regulations, 2014 CFR
2014-04-01
..., (E) Accounting, (F) Actuarial science, (G) Performing arts, or (H) Consulting. Substantially all of... the client. The taxpayer does not, however, provide the client with additional computer programming... processing systems. The client will then order computers and other data processing equipment through the...
Code of Federal Regulations, 2012 CFR
2012-04-01
..., (E) Accounting, (F) Actuarial science, (G) Performing arts, or (H) Consulting. Substantially all of... the client. The taxpayer does not, however, provide the client with additional computer programming... processing systems. The client will then order computers and other data processing equipment through the...
17 CFR 190.07 - Calculation of allowed net equity.
Code of Federal Regulations, 2010 CFR
2010-04-01
... computing, with respect to such account, the sum of: (i) The ledger balance; (ii) The open trade balance... purposes of this paragraph (b)(1), the open trade balance of a customer's account shall be computed by... ledger balance or open trade balance of any customer, exclude any security futures products, any gains or...
17 CFR 190.07 - Calculation of allowed net equity.
Code of Federal Regulations, 2011 CFR
2011-04-01
... computing, with respect to such account, the sum of: (i) The ledger balance; (ii) The open trade balance... purposes of this paragraph (b)(1), the open trade balance of a customer's account shall be computed by... ledger balance or open trade balance of any customer, exclude any security futures products, any gains or...
ERIC Educational Resources Information Center
Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.
2011-01-01
Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…
Code of Federal Regulations, 2013 CFR
2013-04-01
..., (E) Accounting, (F) Actuarial science, (G) Performing arts, or (H) Consulting. Substantially all of... the client. The taxpayer does not, however, provide the client with additional computer programming... processing systems. The client will then order computers and other data processing equipment through the...
41 CFR 105-72.603 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... representatives, have the right of timely and unrestricted access to any books, documents, papers, or other... accounting computations of the rate at which a particular group of costs is chargeable (such as computer... records starts at the end of the fiscal year (or other accounting period) covered by the proposal, plan...
29 CFR 95.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., have the right of timely and unrestricted access to any books, documents, papers, or other records of... allocation plans, and any similar accounting computations of the rate at which a particular group of costs is... of the fiscal year (or other accounting period) covered by the proposal, plan, or other computation...
26 CFR 1.9002-1 - Purpose, applicability, and definitions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... elect to have its provisions apply with two alternatives for accounting for the adjustments to income..., for his most recent taxable year ending on or before June 22, 1959, (1) computed, or been required to compute, taxable income under an accrual method of accounting, and (2) treated dealer reserve income (or...
26 CFR 1.9002-1 - Purpose, applicability, and definitions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... elect to have its provisions apply with two alternatives for accounting for the adjustments to income..., for his most recent taxable year ending on or before June 22, 1959, (1) computed, or been required to compute, taxable income under an accrual method of accounting, and (2) treated dealer reserve income (or...
26 CFR 1.9002-1 - Purpose, applicability, and definitions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... elect to have its provisions apply with two alternatives for accounting for the adjustments to income..., for his most recent taxable year ending on or before June 22, 1959, (1) computed, or been required to compute, taxable income under an accrual method of accounting, and (2) treated dealer reserve income (or...
26 CFR 1.9002-1 - Purpose, applicability, and definitions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... elect to have its provisions apply with two alternatives for accounting for the adjustments to income..., for his most recent taxable year ending on or before June 22, 1959, (1) computed, or been required to compute, taxable income under an accrual method of accounting, and (2) treated dealer reserve income (or...
Na, Y; Suh, T; Xing, L
2012-06-01
Multi-objective (MO) plan optimization entails generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome the hurdle by developing an efficient MO method using the emerging cloud computing platform. As the backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node is able to seamlessly scale a number of working-group instances, called workers, based on user-defined settings that account for the MO functions used in the clinical setting. Each worker solves the objective function with an efficient sparse decomposition method. Workers are automatically terminated when their tasks are finished. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of plans obtained using the cloud computing platform are identical to those obtained using a desktop PC (Intel Xeon® CPU 2.33GHz, 8GB memory). It is found that MO planning substantially speeds up the process of obtaining the Pareto set for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time is reduced according to the fitted model 0.2 + 2.3/N (r² > 0.99), averaged over the cases, making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.
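A worked example of the timing relation reported above, assuming the fitted model T(N) ≈ 0.2 + 2.3/N (relative time units); the helper function and node counts below are illustrative.

```python
import numpy as np

def relative_time(n_nodes, t_fixed=0.2, t_parallel=2.3):
    """Fitted model of relative computation time versus number of cloud nodes."""
    return t_fixed + t_parallel / np.asarray(n_nodes, dtype=float)

nodes = np.array([1, 2, 4, 8, 16, 32])
times = relative_time(nodes)
speedup = relative_time(1) / times
for n, t, s in zip(nodes, times, speedup):
    print(f"N={n:3d}  relative time={t:5.2f}  speedup={s:4.1f}x")
# the fixed 0.2 term caps the achievable speedup near (0.2 + 2.3) / 0.2 = 12.5x
```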
Structure, function, and behaviour of computational models in systems biology
2013-01-01
Background: Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and only described in natural language. Results: We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions: The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research. PMID:23721297
Numerical prediction of a draft tube flow taking into account uncertain inlet conditions
NASA Astrophysics Data System (ADS)
Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy
2012-11-01
The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling in order to take into account in the numerical prediction the physical uncertainties existing on the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also variance of these quantities from which error bars can be deduced on the computed profiles, thus making more significant the comparison between experiment and computation.
A computational model of selection by consequences.
McDowell, J J
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512
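The hyperbolic relation described above is commonly written in the single-alternative matching form B = k·r/(r + r_e); the sketch below fits that form to made-up response and reinforcement rates and is not the model's own code.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(r, k, r_e):
    """Single-alternative hyperbola: response rate as a function of reinforcement rate."""
    return k * r / (r + r_e)

# made-up reinforcement rates (per hour) and observed response rates (per minute)
r_obs = np.array([8.0, 20.0, 60.0, 120.0, 300.0])
b_obs = np.array([22.0, 41.0, 68.0, 80.0, 92.0])

(k_hat, re_hat), _ = curve_fit(hyperbola, r_obs, b_obs, p0=[100.0, 50.0])
print(f"k = {k_hat:.1f} responses/min, r_e = {re_hat:.1f} reinforcers/hour")
```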
Hopfield, J J
2008-05-01
The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve easy problems but for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
A first-principle calculation of the XANES spectrum of Cu2+ in water
NASA Astrophysics Data System (ADS)
La Penna, G.; Minicozzi, V.; Morante, S.; Rossi, G. C.; Stellato, F.
2015-09-01
The progress in high performance computing we are witnessing today offers the possibility of accurate electron density calculations of systems in realistic physico-chemical conditions. In this paper, we present a strategy aimed at performing a first-principle computation of the low energy part of the X-ray Absorption Spectroscopy (XAS) spectrum based on the density functional theory calculation of the electronic potential. To test its effectiveness, we apply the method to the computation of the X-ray absorption near edge structure part of the XAS spectrum in the paradigmatic, but simple case of Cu2+ in water. In order to take into account the effect of the metal site structure fluctuations in determining the experimental signal, the theoretical spectrum is evaluated as the average over the computed spectra of a statistically significant number of simulated metal site configurations. The comparison of experimental data with theoretical calculations suggests that Cu2+ lives preferentially in a square-pyramidal geometry. The remarkable success of this approach in the interpretation of XAS data makes us optimistic about the possibility of extending the computational strategy we have outlined to the more interesting case of molecules of biological relevance bound to transition metal ions.
Accounting Systems for School Districts.
ERIC Educational Resources Information Center
Atwood, E. Barrett, Jr.
1983-01-01
Advises careful analysis and improvement of existing school district accounting systems prior to investment in new ones. Emphasizes the importance of attracting and maintaining quality financial staffs, developing an accounting policies and procedures manual, and designing a good core accounting system before purchasing computer hardware and…
Radiation transfer in plant canopies - Scattering of solar radiation and canopy reflectance
NASA Technical Reports Server (NTRS)
Verstraete, Michel M.
1988-01-01
The one-dimensional vertical model of radiation transfer in a plant canopy described by Verstraete (1987) is extended to account for the transfer of diffuse radiation. This improved model computes the absorption and scattering of both visible and near-infrared radiation in a multilayer canopy as a function of solar position and leaf orientation distribution. Multiple scattering is allowed, and the spectral reflectance of the vegetation stand is predicted. The results of the model are compared to those of other models and actual observations.
NASA Astrophysics Data System (ADS)
Bilchenko, G. G.; Bilchenko, N. G.
2018-03-01
Mathematical modeling problems for the effective control of heat and mass transfer on the permeable surfaces of hypersonic aircraft are considered. The constructive and gas-dynamical restrictions on the control (the blowing) are analyzed for porous and perforated surfaces. Classes of functions that allow the controls to be realized while taking the arising types of restrictions into account are suggested. Estimates are given of the computational complexity of applying the W. G. Horner scheme when the C. Hermite interpolation polynomial is used.
A Smoothed Eclipse Model for Solar Electric Propulsion Trajectory Optimization
NASA Technical Reports Server (NTRS)
Aziz, Jonathan D.; Scheeres, Daniel J.; Parker, Jeffrey S.; Englander, Jacob A.
2017-01-01
Solar electric propulsion (SEP) is the dominant design option for employing low-thrust propulsion on a space mission. Spacecraft solar arrays power the SEP system but are subject to blackout periods during solar eclipse conditions. Discontinuity in power available to the spacecraft must be accounted for in trajectory optimization, but gradient-based methods require a differentiable power model. This work presents a power model that smooths the eclipse transition from total eclipse to total sunlight with a logistic function. Example trajectories are computed with differential dynamic programming, a second-order gradient-based method.
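A minimal sketch of the smoothing idea described above: a logistic function replaces the step change in available power across the eclipse boundary so the model stays differentiable; the shadow variable, array power level, and sharpness parameter are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def smoothed_power(shadow, p_full=10.0, sharpness=50.0):
    """Available solar-array power across the eclipse boundary (kW, illustrative).

    shadow < 0 inside the umbra and > 0 in full sunlight; the logistic function
    replaces the on/off step with a differentiable ramp, which keeps the power
    model usable by gradient-based trajectory optimizers.
    """
    return p_full / (1.0 + np.exp(-sharpness * shadow))

shadow = np.linspace(-0.2, 0.2, 9)
print(np.round(smoothed_power(shadow), 3))   # smooth transition instead of a 0/10 jump
```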
Performance simulation for the design of solar heating and cooling systems
NASA Technical Reports Server (NTRS)
Mccormick, P. O.
1975-01-01
Suitable approaches for evaluating the performance and the cost of a solar heating and cooling system are considered, taking into account the value of a computer simulation concerning the entire system in connection with the large number of parameters involved. Operational relations concerning the collector efficiency in the case of a new improved collector and a reference collector are presented in a graph. Total costs for solar and conventional heating, ventilation, and air conditioning systems as a function of time are shown in another graph.
NASA Astrophysics Data System (ADS)
Tyupikova, T. V.; Samoilov, V. N.
2003-04-01
Modern information technologies push the natural sciences toward further development, but this development must be accompanied by an evaluation of infrastructures, in order to highlight favorable conditions for the development of science and of the financial base needed to prove and legally protect new research. Any scientific development entails accounting and legal protection. In this report, we consider a new direction in the software, organization, and control of shared databases, using as an example the electronic document handling system that functions in several departments of the Joint Institute for Nuclear Research.
NASA Astrophysics Data System (ADS)
Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.
2006-12-01
We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.
Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments
NASA Astrophysics Data System (ADS)
Vezer, M. A.
2010-12-01
Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Gula 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between object and target systems) and some arguments for the claim that materiality entails some inferential advantage to traditional experimentation. I maintain that Parker’s account of the ontology of computer simulations has some interesting though potentially problematic implications regarding conventional distinctions between abstract and concrete methods of inquiry. With respect to her account of materiality, I outline and defend an alternative account, posited by Mary Morgan (2002, 2003, 2005), which holds that ontological similarity between target and object systems confers some epistemological advantage to traditional forms of experimental inquiry.
Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.
Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino
2017-01-10
In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.
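A rough sketch of the analysis described above, assuming synthetic pair-distance trajectories: an effective correlation time is estimated for each residue pair from its normalized autocorrelation, and residues are then grouped by hierarchical clustering of those times; the trajectories, integration cutoff, and linkage choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def correlation_time(x, dt=1.0):
    """Effective correlation time: sum of the normalized autocorrelation up to its first zero."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    first_zero = np.argmax(acf <= 0.0) or acf.size
    return dt * acf[:first_zero].sum()

rng = np.random.default_rng(0)
n_res, n_frames = 6, 2000
tau = np.zeros((n_res, n_res))
for i in range(n_res):
    for j in range(i + 1, n_res):
        # synthetic pair-distance trajectory; the smoothing window sets how slowly it fluctuates
        traj = np.convolve(rng.normal(size=n_frames), np.ones(20) / 20, mode="same")
        tau[i, j] = tau[j, i] = correlation_time(traj)

# residues with similar correlation-time profiles end up in the same dynamic unit
dist = pdist(tau)                      # distances between rows of the correlation-time matrix
clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(clusters)
```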
Guidez, Emilie B; Gordon, Mark S
2015-03-12
The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.
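For contrast with the first-principles EFP-derived term, the sketch below shows the generic semiempirical pairwise form used by DFT-D-style corrections, E_disp = -s6 Σ f_damp(r_ij) C6_ij / r_ij^6; the C6 values, van der Waals radii, damping constant, and geometry are made-up illustrations, not parameters of any published parameterization.

```python
import numpy as np

def pairwise_dispersion(coords, c6, r_vdw, s6=1.0, alpha=20.0):
    """Generic DFT-D2-like correction: E = -s6 * sum_{i<j} f_damp(r_ij) * C6_ij / r_ij**6,
    with a Fermi-type damping that switches the term off at short range."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = np.linalg.norm(coords[i] - coords[j])
            c6_ij = np.sqrt(c6[i] * c6[j])          # geometric-mean combination rule
            r0 = r_vdw[i] + r_vdw[j]
            f_damp = 1.0 / (1.0 + np.exp(-alpha * (r / r0 - 1.0)))
            e -= s6 * f_damp * c6_ij / r**6
    return e

# two "atoms" with made-up parameters (atomic units)
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 6.0]])
print(pairwise_dispersion(coords, c6=[10.0, 10.0], r_vdw=[2.0, 2.0]))
```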
Linear electro-optic effect in semiconductors: Ab initio description of the electronic contribution
NASA Astrophysics Data System (ADS)
Prussel, Lucie; Véniard, Valérie
2018-05-01
We propose an ab initio framework to derive the electronic part of the second-order susceptibility tensor for the electro-optic effect in bulk semiconductors. We find a general expression for χ(2) evaluated within time-dependent density-functional theory, including explicitly the band-gap corrections at the level of the scissors approximation. Excitonic effects are accounted for, on the basis of a simple scalar approximation. We apply our formalism to the computation of the electro-optic susceptibilities for several semiconductors, such as GaAs, GaN, and SiC. Taking into account the ionic contribution according to the Faust-Henry coefficient, we obtain a good agreement with experimental results. Finally, using different types of strain to break centrosymmetry, we show that high electro-optic coefficients can be obtained in bulk silicon for a large range of frequencies.
ERIC Educational Resources Information Center
Eastman, Susan T.
1984-01-01
Argues that the telecommunications field has specific computer applications; therefore courses on how to use computer programs for audience analysis, station accounting, newswriting, etc., should be included in the telecommunications curriculum. (PD)
Verburgh, Lot; Scherder, Erik J A; Van Lange, Paul A M; Oosterlaan, Jaap
2016-01-01
Research has suggested a positive association between physical fitness and neurocognitive functioning in children. The aim of the present study is to investigate possible dose-response relationships between diverse daily physical activities and a broad range of neurocognitive functions in preadolescent children. Furthermore, the relationship between several sedentary behaviours, including TV-watching, gaming and computer time, and neurocognitive functioning will be investigated in this group of children. A total of 168 preadolescent boys, aged 8 to 12 years, were recruited from various locations, including primary schools, an amateur soccer club, and a professional soccer club, to increase variability in the amount of participation in sports. All children performed neurocognitive tasks measuring inhibition, short term memory, working memory, attention and information processing speed. Regression analyses examined the predictive power of a broad range of physical activities, including sports, active transport to school, physical education (PE), outdoor play, and sedentary behaviour such as TV-watching and gaming, for neurocognitive functioning. Time spent in sports significantly accounted for the variance in inhibition, short term memory, working memory and lapses of attention, where more time spent in sports was associated with better performance. Outdoor play was also positively associated with working memory. In contrast, time spent on the computer was negatively associated with inhibition. Results of the current study suggest a positive relationship between participation in sports and several important neurocognitive functions. Interventions are recommended to increase sports participation and to reduce sedentary behaviour in preadolescent children.
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with a reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.
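A minimal sketch of the knowledge-gradient quantity for independent normal beliefs, the standard expected-value-of-information calculation underlying the policy described above; the locally linear RBF model of the thesis is not reproduced here, and the prior means, variances, and noise level are illustrative.

```python
import numpy as np
from scipy.stats import norm

def knowledge_gradient(mu, sigma, noise_sd):
    """Expected value of information from one more measurement of each alternative,
    for independent normal beliefs N(mu, sigma**2) and Gaussian measurement noise."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    sigma_tilde = sigma**2 / np.sqrt(sigma**2 + noise_sd**2)   # reduction in std. dev.
    best_other = np.array([np.max(np.delete(mu, i)) for i in range(mu.size)])
    zeta = -np.abs(mu - best_other) / sigma_tilde
    return sigma_tilde * (zeta * norm.cdf(zeta) + norm.pdf(zeta))

mu = [1.0, 0.8, 0.2]        # posterior means of the unknown objective
sigma = [0.1, 0.6, 0.5]     # posterior standard deviations
kg = knowledge_gradient(mu, sigma, noise_sd=0.3)
print(np.round(kg, 4), "-> measure alternative", int(np.argmax(kg)))
```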
Performance Models for Split-execution Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
Simulation games that integrate research, entertainment, and learning around ecosystem services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costanza, Robert; Chichakly, Karim; Dale, Virginia
Humans currently spend over 3 billion person-hours per week playing computer games. Most of these games are purely for entertainment, but use of computer games for education has also expanded dramatically. At the same time, experimental games have become a staple of social science research but have depended on relatively small sample sizes and simple, abstract situations, limiting their range and applicability. If only a fraction of the time spent playing computer games could be harnessed for research, it would open up a huge range of new opportunities. We review the use of games in research, education, and entertainment and develop ideas for integrating these three functions around the idea of ecosystem services valuation. This approach to valuation can be seen as a version of choice modeling that allows players to generate their own scenarios taking account of the trade-offs embedded in the game, rather than simply ranking pre-formed scenarios. We outline a prototype game called Lagom Island to test the proposition that gaming can be used to reveal the value of ecosystem services. Ultimately, our prototype provides a potential pathway and functional building blocks for approaching the relatively untapped potential of games in the context of ecosystem services research.
Moberget, Torgeir; Ivry, Richard B
2016-04-01
The past 25 years have seen the functional domain of the cerebellum extend beyond the realm of motor control, with considerable discussion of how this subcortical structure contributes to cognitive domains including attention, memory, and language. Drawing on evidence from neuroanatomy, physiology, neuropsychology, and computational work, sophisticated models have been developed to describe cerebellar function in sensorimotor control and learning. In contrast, mechanistic accounts of how the cerebellum contributes to cognition have remained elusive. Inspired by the homogeneous cerebellar microanatomy and a desire for parsimony, many researchers have sought to extend mechanistic ideas from motor control to cognition. One influential hypothesis centers on the idea that the cerebellum implements internal models, representations of the context-specific dynamics of an agent's interactions with the environment, enabling predictive control. We briefly review cerebellar anatomy and physiology, to review the internal model hypothesis as applied in the motor domain, before turning to extensions of these ideas in the linguistic domain, focusing on speech perception and semantic processing. While recent findings are consistent with this computational generalization, they also raise challenging questions regarding the nature of cerebellar learning, and may thus inspire revisions of our views on the role of the cerebellum in sensorimotor control. © 2016 New York Academy of Sciences.
Vorobjev, Yury N; Scheraga, Harold A; Vila, Jorge A
2018-02-01
A computational method, to predict the pKa values of the ionizable residues Asp, Glu, His, Tyr, and Lys of proteins, is presented here. Calculation of the electrostatic free-energy of the proteins is based on an efficient version of a continuum dielectric electrostatic model. The conformational flexibility of the protein is taken into account by carrying out molecular dynamics simulations of 10 ns in implicit water. The accuracy of the proposed method of calculation of pKa values is estimated from a test set of experimental pKa data for 297 ionizable residues from 34 proteins. The pKa-prediction test shows that, on average, 57, 86, and 95% of all predictions have an error lower than 0.5, 1.0, and 1.5 pKa units, respectively. This work contributes to our general understanding of the importance of protein flexibility for an accurate computation of pKa, providing critical insight about the significance of the multiple neutral states of acid and histidine residues for pKa-prediction, and may spur significant progress in our effort to develop a fast and accurate electrostatic-based method for pKa-predictions of proteins as a function of pH.
Theory and simulation of time-fractional fluid diffusion in porous media
NASA Astrophysics Data System (ADS)
Carcione, José M.; Sanchez-Sesma, Francisco J.; Luzón, Francisco; Perez Gavilán, Juan J.
2013-08-01
We simulate a fluid flow in inhomogeneous anisotropic porous media using a time-fractional diffusion equation and the staggered Fourier pseudospectral method to compute the spatial derivatives. A fractional derivative of the order of 0 < ν < 2 replaces the first-order time derivative in the classical diffusion equation. It implies a time-dependent permeability tensor having a power-law time dependence, which describes memory effects and accounts for anomalous diffusion. We provide a complete analysis of the physics based on plane waves. The concepts of phase, group and energy velocities are analyzed to describe the location of the diffusion front, and the attenuation and quality factors are obtained to quantify the amplitude decay. We also obtain the frequency-domain Green function. The time derivative is computed with the Grünwald-Letnikov summation, which is a finite-difference generalization of the standard finite-difference operator to derivatives of fractional order. The results match the analytical solution obtained from the Green function. An example of the pressure field generated by a fluid injection in a heterogeneous sandstone illustrates the performance of the algorithm for different values of ν. The calculation requires storing the whole pressure field in the computer memory since anomalous diffusion ‘recalls the past’.
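A minimal sketch of the Grünwald-Letnikov summation mentioned above: the binomial weights are generated recursively and the order-ν time derivative at each step is a weighted sum over the entire stored history; the step size, order, and test signal are illustrative.

```python
import numpy as np
from math import gamma

def gl_weights(nu, n):
    """Coefficients (-1)**k * binom(nu, k), k = 0..n, via the standard recursion."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (nu + 1.0) / k)
    return w

def gl_derivative(f, nu, h):
    """Grünwald-Letnikov approximation of the order-nu derivative of samples f;
    note that the whole stored past of f enters every time step ('memory')."""
    w = gl_weights(nu, len(f) - 1)
    return np.array([np.dot(w[:n + 1], f[n::-1]) for n in range(len(f))]) / h**nu

h, nu = 0.01, 0.8
t = np.arange(0.0, 1.0, h)
num = gl_derivative(t**2, nu, h)
exact = gamma(3.0) / gamma(3.0 - nu) * t**(2.0 - nu)   # known result for f(t) = t**2
print(np.max(np.abs(num[10:] - exact[10:])))           # small discretization error
```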
Pyroclastic flow transport dynamics for a Montserrat volcano eruption
NASA Astrophysics Data System (ADS)
Cordoba, G.; Sparks, S.; del Risco, E.
2003-04-01
A two-phase model of pyroclastic flow dynamics that accounts for the bed load and the suspended load is presented. The model uses the compressible Navier-Stokes equations coupled with a convection-diffusion equation in order to take sedimentation into account. Skin friction is taken into account by using wall functions. Despite the complex mathematical formulation of the model, it has been implemented on a personal computer thanks to the assumption of a two-phase, single-velocity model, which reduces the number of equations in the system. The nonlinear equation system is solved numerically using the finite element method. This numerical method lets us move the mesh in the direction of deposition, thereby accounting for the shape of the bed and the thickness of the deposit. The model is applied to Montserrat's White River basin, which extends from the dome to the sea, located about 4 km away, and the results are compared with field data from the Boxing Day (26 December 1997) eruption. Additionally, the temporal evolution of the dynamic pressure, particle concentration, and temperature along the flow path at each time step is shown.
Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T
2015-01-01
MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
Psychometric functions for pure-tone frequency discrimination.
Dai, Huanping; Micheyl, Christophe
2011-07-01
The form of the psychometric function (PF) for auditory frequency discrimination is of theoretical interest and practical importance. In this study, PFs for pure-tone frequency discrimination were measured for several standard frequencies (200-8000 Hz) and levels [35-85 dB sound pressure level (SPL)] in normal-hearing listeners. The proportion-correct data were fitted using a cumulative-Gaussian function of the sensitivity index, d', computed as a power transformation of the frequency difference, Δf. The exponent of the power function corresponded to the slope of the PF on log(d')-log(Δf) coordinates. The influence of attentional lapses on PF-slope estimates was investigated. When attentional lapses were not taken into account, the estimated PF slopes on log(d')-log(Δf) coordinates were found to be significantly lower than 1, suggesting a nonlinear relationship between d' and Δf. However, when lapse rate was included as a free parameter in the fits, PF slopes were found not to differ significantly from 1, consistent with a linear relationship between d' and Δf. This was the case across the wide ranges of frequencies and levels tested in this study. Therefore, spectral and temporal models of frequency discrimination must account for a linear relationship between d' and Δf across a wide range of frequencies and levels. © 2011 Acoustical Society of America
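A sketch of the fitted form described above, assuming a two-interval forced-choice link between d' and proportion correct; the power-law exponent plays the role of the PF slope on log(d')-log(Δf) coordinates, the lapse rate is a free parameter, and the data points are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def proportion_correct(delta_f, scale, slope, lapse):
    """d' = (delta_f/scale)**slope mapped to proportion correct (2I-2AFC link),
    with a lapse rate that caps asymptotic performance below 1."""
    d_prime = (delta_f / scale) ** slope
    return (1.0 - lapse) * norm.cdf(d_prime / np.sqrt(2.0)) + 0.5 * lapse

delta_f = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])        # frequency difference, Hz
p_corr  = np.array([0.55, 0.62, 0.76, 0.90, 0.96, 0.97])   # illustrative data

(scale, slope, lapse), _ = curve_fit(
    proportion_correct, delta_f, p_corr, p0=[2.0, 1.0, 0.02],
    bounds=([0.1, 0.2, 0.0], [50.0, 3.0, 0.1]))
print(f"scale = {scale:.2f} Hz, PF slope = {slope:.2f}, lapse = {lapse:.3f}")
# a fitted slope near 1 corresponds to a linear relation between d' and delta_f
```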
ERIC Educational Resources Information Center
Lai, Ming-Ling
2008-01-01
Purpose: This study aims to assess the state of technology readiness of professional accounting students in Malaysia, to examine their level of internet self-efficacy, to assess their prior computing experience, and to explore if they are satisfied with the professional course that they are pursuing in improving their technology skills.…
ERIC Educational Resources Information Center
Rich, Peter J.; Bly, Neil; Leatham, Keith R.
2014-01-01
This study aimed to provide first-hand accounts of the perceived long-term effects of learning computer programming on a learner's approach to mathematics. These phenomenological accounts, garnered from individual interviews of seven different programmers, illustrate four specific areas of interest: (1) programming provides context for many…
A Factor Graph Approach to Automated GO Annotation
Spetale, Flavio E.; Tapia, Elizabeth; Krsticevic, Flavia; Roda, Fernando; Bulacio, Pilar
2016-01-01
As volume of genomic data grows, computational methods become essential for providing a first glimpse onto gene annotations. Automated Gene Ontology (GO) annotation methods based on hierarchical ensemble classification techniques are particularly interesting when interpretability of annotation results is a main concern. In these methods, raw GO-term predictions computed by base binary classifiers are leveraged by checking the consistency of predefined GO relationships. Both formal leveraging strategies, with main focus on annotation precision, and heuristic alternatives, with main focus on scalability issues, have been described in literature. In this contribution, a factor graph approach to the hierarchical ensemble formulation of the automated GO annotation problem is presented. In this formal framework, a core factor graph is first built based on the GO structure and then enriched to take into account the noisy nature of GO-term predictions. Hence, starting from raw GO-term predictions, an iterative message passing algorithm between nodes of the factor graph is used to compute marginal probabilities of target GO-terms. Evaluations on Saccharomyces cerevisiae, Arabidopsis thaliana and Drosophila melanogaster protein sequences from the GO Molecular Function domain showed significant improvements over competing approaches, even when protein sequences were naively characterized by their physicochemical and secondary structure properties or when loose noisy annotation datasets were considered. Based on these promising results and using Arabidopsis thaliana annotation data, we extend our approach to the identification of most promising molecular function annotations for a set of proteins of unknown function in Solanum lycopersicum. PMID:26771463
Taslimifar, Mehdi; Buoso, Stefano; Verrey, Francois; Kurtcuoglu, Vartan
2018-01-01
The homeostatic regulation of large neutral amino acid (LNAA) concentration in the brain interstitial fluid (ISF) is essential for proper brain function. LNAA passage into the brain is primarily mediated by the complex and dynamic interactions between various solute carrier (SLC) transporters expressed in the neurovascular unit (NVU), among which SLC7A5/LAT1 is considered to be the major contributor in microvascular brain endothelial cells (MBEC). The LAT1-mediated trans-endothelial transport of LNAAs, however, could not be characterized precisely by available in vitro and in vivo standard methods so far. To circumvent these limitations, we have incorporated published in vivo data of rat brain into a robust computational model of NVU-LNAA homeostasis, allowing us to evaluate hypotheses concerning LAT1-mediated trans-endothelial transport of LNAAs across the blood brain barrier (BBB). We show that accounting for functional polarity of MBECs with either asymmetric LAT1 distribution between membranes and/or intrinsic LAT1 asymmetry with low intraendothelial binding affinity is required to reproduce the experimentally measured brain ISF response to intraperitoneal (IP) L-tyrosine and L-phenylalanine injection. On the basis of these findings, we have also investigated the effect of IP administrated L-tyrosine and L-phenylalanine on the dynamics of LNAAs in MBECs, astrocytes and neurons. Finally, the computational model was shown to explain the trans-stimulation of LNAA uptake across the BBB observed upon ISF perfusion with a competitive LAT1 inhibitor. PMID:29593549
Washko, George R; Kinney, Gregory L; Ross, James C; San José Estépar, Raúl; Han, MeiLan K; Dransfield, Mark T; Kim, Victor; Hatabu, Hiroto; Come, Carolyn E; Bowler, Russell P; Silverman, Edwin K; Crapo, James; Lynch, David A; Hokanson, John; Diaz, Alejandro A
2017-04-01
Emphysema is characterized by airspace dilation, inflammation, and irregular deposition of elastin and collagen in the interstitium. Computed tomographic studies have reported that lung mass (LM) may be increased in smokers, a finding attributed to inflammatory and parenchymal remodeling processes observed on histopathology. We sought to examine the epidemiologic and clinical associations of LM in smokers. Baseline epidemiologic, clinical, and computed tomography (CT) data (n = 8156) from smokers enrolled into the COPDGene Study were analyzed. LM was calculated from the CT scan. Changes in lung function at 5 years' follow-up were available from 1623 subjects. Regression analysis was performed to assess for associations of LM with forced expiratory volume in 1 second (FEV1) and FEV1 decline. Subjects with Global Initiative for Chronic Obstructive Lung Disease (GOLD) 1 chronic obstructive pulmonary disease had greater LM than either smokers with normal lung function or those with GOLD 2-4 chronic obstructive pulmonary disease (P < 0.001 for both comparisons). LM was predictive of the rate of the decline in FEV1 (decline per 100 g, -4.7 ± 1.7 mL/y, P = 0.006). Our cross-sectional data suggest the presence of a biphasic radiological remodeling process in smokers: the presence of such nonlinearity must be accounted for in longitudinal computed tomographic studies. Baseline LM predicts the decline in lung function. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.
2013-11-01
A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.
Mathematical and computational model for the analysis of micro hybrid rocket motor
NASA Astrophysics Data System (ADS)
Stoia-Djeska, Marius; Mingireanu, Florin
2012-11-01
Hybrid rockets use a two-phase propellant system. In the present work, we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical models are adapted to the simulation of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The numerical platform combines an implicit fourth-order Runge-Kutta time integration with a second-order cell-centred finite volume method. The numerical results obtained with this model show good agreement with published experimental and numerical results. The computational model developed in this work is simple and computationally efficient, and offers the advantage of taking into account a large number of functional and constructive parameters used by engineers.
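A minimal sketch of the kind of coupling referred to above, assuming the commonly used power-law fuel regression law r_dot = a·G_ox^n and a circular combustion port; the constants and geometry are illustrative, and the paper's 1D Euler solver is not reproduced.

```python
import numpy as np

# illustrative constants for a small hybrid motor (SI units)
a, n = 2.0e-5, 0.62          # regression-rate law, r_dot = a * G_ox**n
rho_fuel = 920.0             # solid-fuel density, kg/m^3
mdot_ox = 0.05               # oxidizer mass flow, kg/s
r_port, length = 0.01, 0.20  # initial port radius and grain length, m

dt, t_end = 0.01, 5.0
for _ in range(int(t_end / dt)):
    g_ox = mdot_ox / (np.pi * r_port**2)        # oxidizer mass flux through the port
    r_dot = a * g_ox**n                         # fuel regression rate, m/s
    mdot_fuel = rho_fuel * r_dot * 2.0 * np.pi * r_port * length
    r_port += r_dot * dt                        # port opens, flux drops: the coupling closes

print(f"final port radius {r_port * 1e3:.1f} mm, fuel flow {mdot_fuel * 1e3:.1f} g/s")
```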
NASA Astrophysics Data System (ADS)
Melli, Alessio; Melosso, Mattia; Tasinato, Nicola; Bosi, Giulio; Spada, Lorenzo; Bloino, Julien; Mendolicchio, Marco; Dore, Luca; Barone, Vincenzo; Puzzarini, Cristina
2018-03-01
Ethanimine, a possible precursor of amino acids, is considered an important prebiotic molecule and thus may play important roles in the formation of biological building blocks in the interstellar medium. In addition, its identification in Titan’s atmosphere would be important for understanding the abiotic synthesis of organic species. An accurate computational characterization of the molecular structure, energetics, and spectroscopic properties of the E and Z isomers of ethanimine, CH3CHNH, has been carried out by means of a composite scheme based on coupled-cluster techniques, which also account for extrapolation to the complete basis-set limit and core-valence correlation correction, combined with density functional theory for the treatment of vibrational anharmonic effects. By combining the computational results with new millimeter-wave measurements up to 300 GHz, the rotational spectrum of both isomers can be accurately predicted up to 500 GHz. Furthermore, our computations allowed us to revise the infrared spectrum of both E- and Z-CH3CHNH, thus predicting all fundamental bands with high accuracy.
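A worked sketch of the basis-set extrapolation step used in composite schemes such as the one described above, assuming the common two-point X^-3 formula for correlation energies; the cardinal numbers and energies are made-up illustrations, not values from the paper.

```python
def cbs_two_point(e_small, e_large, x_small, x_large):
    """Two-point extrapolation assuming E(X) = E_CBS + A * X**-3,
    where X is the basis-set cardinal number."""
    return (e_large * x_large**3 - e_small * x_small**3) / (x_large**3 - x_small**3)

# illustrative correlation energies (hartree) for triple- and quadruple-zeta bases
e_tz, e_qz = -0.4231, -0.4350
print(f"E_corr(CBS) ~ {cbs_two_point(e_tz, e_qz, 3, 4):.4f} hartree")
```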
NASA Astrophysics Data System (ADS)
Bazanov, A. A.; Ivanovskii, A. V.; Panov, A. I.; Samodolov, A. V.; Sokolov, S. S.; Shaidullin, V. Sh.
2017-06-01
We report the results of a computer simulation of the operation of magnetodynamic break switches used as the second stage of current pulse formation in magnetic explosion generators. The simulation was carried out under conditions in which the magnetic field energy density on the surface of the switching conductor, taken as a function of the current through it, was close to but did not exceed the critical value typical of the onset of electrical explosion. In the computational model, we used the parameters of an experimentally tested sample of a coil magnetic explosion generator that can store energy of up to 2.7 MJ in the inductive storage circuit and is equipped with a primary explosion stage of current pulse formation. It is shown that the choice of the switching conductor material, as well as its elastoplastic properties, considerably affects the breaker speed. Comparative results of computer simulations for copper and aluminum are considered.
A Computational Framework for Bioimaging Simulation.
Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi
2015-01-01
Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.
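A very small sketch of one systematic effect such a framework must account for: an idealized image from a cell model is converted to photon counts with Poisson shot noise, then scaled and offset like a camera readout; the image, exposure, gain, and offset are illustrative assumptions, not the framework's actual optics model.

```python
import numpy as np

rng = np.random.default_rng(1)

# idealized "ground truth": expected photon emission per pixel from the cell model
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 50.0                      # a bright fluorescent region

def camera_image(expected_photons, exposure=1.0, gain=2.0, offset=100.0):
    """Poisson shot noise for photon counting, then camera gain and a constant offset."""
    photons = rng.poisson(expected_photons * exposure)
    return gain * photons + offset

img = camera_image(truth)
print(img.mean(), img.max())    # differs systematically from the noiseless cell model
```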
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored, so subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach faithfully reproduces Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for a more realistic representation of cloud-radiation interactions in large-scale models.
Electron Correlation from the Adiabatic Connection for Multireference Wave Functions
NASA Astrophysics Data System (ADS)
Pernal, Katarzyna
2018-01-01
An adiabatic connection (AC) formula for the electron correlation energy is derived for a broad class of multireference wave functions. The AC expression recovers dynamic correlation energy and assures a balanced treatment of the correlation energy. Coupling the AC formalism with the extended random phase approximation allows one to find the correlation energy only from reference one- and two-electron reduced density matrices. If the generalized valence bond perfect pairing model is employed, a simple closed-form expression for the approximate AC formula is obtained. This results in overall M^5 scaling of the computational cost, making the method one of the most efficient multireference approaches that account for dynamic electron correlation, even for strongly correlated systems.
McBride, Sebastian; Huelse, Martin; Lee, Mark
2013-01-01
Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
Computations in the deep vs superficial layers of the cerebral cortex.
Rolls, Edmund T; Mills, W Patrick C
2017-11-01
A fundamental question is how the cerebral neocortex operates functionally and computationally. The cerebral neocortex, with its superficial and deep layers and highly developed recurrent collateral systems that provide a basis for memory-related processing, might perform somewhat different computations in the superficial and deep layers. Here we take into account the quantitative connectivity within and between laminae. Using integrate-and-fire neuronal network simulations that incorporate this connectivity, we first show that attractor networks implemented in the deep layers, which are activated by the superficial layers, could be partly independent in that the deep layers might have a different time course, which, because of adaptation, might be more transient and useful for outputs from the neocortex. In contrast, the superficial layers could implement more prolonged firing, useful for slow learning and for short-term memory. Second, we show that a different type of computation could in principle be performed in the superficial and deep layers, by showing that the superficial layers could operate as a discrete attractor network useful for categorisation and for feeding information forward up a cortical hierarchy, whereas the deep layers could operate as a continuous attractor network useful for providing a spatially and temporally smooth output to output systems in the brain. A key advance is that we draw attention to the functions of the recurrent collateral connections between cortical pyramidal cells, often omitted in canonical models of the neocortex, and address principles of operation of the neocortex by which the superficial and deep layers might be specialized for different types of attractor-related memory functions implemented by the recurrent collaterals. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Karasiev, V. V.
2017-10-01
Free-energy density functional theory (DFT) is one of the standard tools in high-energy-density physics used to determine the fundamental properties of dense plasmas, especially in cold and warm regimes where quantum effects are essential. DFT is usually implemented via the orbital-dependent Kohn-Sham (KS) procedure. There are two challenges with the conventional implementation: (1) the KS computational cost becomes prohibitively expensive at high temperatures; and (2) ground-state exchange-correlation (XC) functionals do not take into account XC thermal effects. This talk will address both challenges and report details of the formal development of a new generalized gradient approximation (GGA) XC free-energy functional which bridges the low-temperature (ground state) and high-temperature (plasma) limits. Recent progress on the development of functionals for orbital-free DFT as a way to address the second challenge will also be discussed. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
An Internet-Based Accounting Information Systems Project
ERIC Educational Resources Information Center
Miller, Louise
2012-01-01
This paper describes a student project assignment used in an accounting information systems course. We are now truly immersed in the internet age, and while many required accounting information systems courses and textbooks introduce database design, accounting software development, cloud computing, and internet security, projects involving the…
Voutilainen, Arto; Kaipio, Jari P; Pekkanen, Juha; Timonen, Kirsi L; Ruuskanen, Juhani
2004-01-01
A theoretical comparison of modeled particle depositions in the human respiratory tract was performed by taking into account different particle number and mass size distributions and physical activity in an urban environment. Urban-air data on particulate concentrations in the size range 10 nm-10 microm were used to estimate the hourly average particle number and mass size distribution functions. These functions were then combined with the deposition probability functions obtained from a computerized ICRP 66 deposition model of the International Commission on Radiological Protection to calculate the numbers and masses of particles deposited in five regions of the respiratory tract of an adult male. The man's physical activity and minute ventilation during the day were taken into account in the calculations. Two different mass and number size distributions of aerosol particles with equal (computed) <10 microm particle mass concentrations gave clearly different deposition patterns in the central and peripheral regions of the human respiratory tract. The deposited particle numbers and masses were much higher during the day (0700-1900) than during the night (1900-0700) because increases in physical activity and ventilation were temporally associated with strongly elevated concentrations of traffic-derived particles in urban outdoor air. In future analyses of the short-term associations between particulate air pollution and health, it would be important to take into account not only the outdoor-to-indoor penetration of different particle sizes and human time-activity patterns, but also actual lung deposition patterns and physical activity in significant microenvironments.
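The dose bookkeeping described above multiplies, for each particle size bin, the airborne concentration by a regional deposition fraction and by the inhaled air volume. A back-of-the-envelope sketch follows; the size bins, deposition fractions, and ventilation values are assumed for illustration and do not come from the ICRP 66 model used in the study.

```python
import numpy as np

def deposited_number(concentration_per_cm3, deposition_fraction,
                     minute_ventilation_L, exposure_min):
    """Back-of-the-envelope deposition estimate per size bin: airborne number
    concentration x regional deposition fraction x inhaled air volume.
    The study uses hourly ICRP 66 deposition probabilities and time-resolved
    activity data; this only illustrates the bookkeeping."""
    inhaled_cm3 = minute_ventilation_L * 1000.0 * exposure_min
    return np.sum(concentration_per_cm3 * deposition_fraction * inhaled_cm3)

# two illustrative size bins (ultrafine and accumulation mode), assumed values
conc = np.array([1.5e4, 3.0e3])   # particles per cm^3
dep_frac = np.array([0.5, 0.2])   # assumed alveolar deposition fractions
print(f"{deposited_number(conc, dep_frac, minute_ventilation_L=20.0, exposure_min=60.0):.3e}")
```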
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
Fast simulation tool for ultraviolet radiation at the earth's surface
NASA Astrophysics Data System (ADS)
Engelsen, Ola; Kylling, Arve
2005-04-01
FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.
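The UV-index output described here is, in essence, a spectral convolution with the CIE erythemal response followed by a fixed scaling. A minimal sketch of that post-processing step is shown below; it is not taken from the FastRT source, it assumes a surface spectrum on a wavelength grid in nm with irradiance in W m^-2 nm^-1, and it uses an approximate piecewise form of the CIE erythemal action spectrum with the conventional 40 m^2 W^-1 UV-index scaling.

```python
import numpy as np

def erythemal_weight(wl_nm):
    """Approximate piecewise CIE (McKinlay-Diffey type) erythemal weighting."""
    wl = np.asarray(wl_nm, dtype=float)
    w = np.ones_like(wl)                                            # <= 298 nm
    w = np.where((wl > 298) & (wl <= 328), 10.0 ** (0.094 * (298.0 - wl)), w)
    w = np.where(wl > 328, 10.0 ** (0.015 * (140.0 - wl)), w)
    return w

def uv_index(wl_nm, irradiance_w_m2_nm):
    """Erythemally weighted irradiance integrated over wavelength, times 40 m^2/W."""
    weighted = irradiance_w_m2_nm * erythemal_weight(wl_nm)
    return 40.0 * np.trapz(weighted, wl_nm)

# toy spectrum: flat 1 mW m^-2 nm^-1 between 290 and 400 nm (illustrative only)
wl = np.arange(290.0, 400.05, 0.05)
E = np.full_like(wl, 1e-3)
print(f"UV index ~ {uv_index(wl, E):.2f}")
```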
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
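To make the bookkeeping behind utility-maximizing mapping with task dropping concrete, here is a toy greedy sketch. It is not one of the paper's heuristics; the Task fields, the linearly decaying utility functions, and the drop_threshold parameter are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    runtimes: List[float]              # estimated runtime on each machine
    utility: Callable[[float], float]  # utility earned as a function of completion time

def greedy_assign(tasks: List[Task], n_machines: int, drop_threshold: float = 0.0):
    """Toy heuristic: repeatedly place the task/machine pair with the highest
    utility at its predicted completion time; drop tasks whose best achievable
    utility is at or below drop_threshold."""
    free_at = [0.0] * n_machines              # when each machine becomes idle
    schedule, dropped, total = [], [], 0.0
    pending = list(tasks)
    while pending:
        best = None
        for t in pending:
            for m in range(n_machines):
                finish = free_at[m] + t.runtimes[m]
                u = t.utility(finish)
                if best is None or u > best[0]:
                    best = (u, t, m, finish)
        u, t, m, finish = best
        pending.remove(t)
        if u <= drop_threshold:
            dropped.append(t.name)            # not worth the machine time
            continue
        free_at[m] = finish
        total += u
        schedule.append((t.name, m, finish, u))
    return schedule, dropped, total

# usage: utility decays linearly with completion time (illustrative only)
tasks = [Task(f"t{i}", [2.0, 3.0], lambda c, u0=10.0 * (i + 1): max(u0 - c, 0.0))
         for i in range(4)]
print(greedy_assign(tasks, n_machines=2, drop_threshold=0.5))
```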
System Resource Allocation Requests | High-Performance Computing | NREL
An HPC User Account is required to utilize the online allocation request system. If you need an HPC User Account, request one online: visit User Accounts, click the green "Request Account" button, and follow the online instructions provided in the DocuSign form.
Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton
2013-08-15
Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
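The one-step MM-to-QM correction described here is commonly written as a Zwanzig (exponential-averaging) free-energy perturbation over MM-sampled configurations. Below is a minimal sketch of that estimator, assuming arrays of MM and QM single-point energies (in kJ/mol) for the same snapshots; the synthetic data at the end are illustrative only and do not represent the paper's systems.

```python
import numpy as np

def fep_mm_to_qm(u_mm, u_qm, temperature_K=298.15):
    """One-step Zwanzig estimate of the MM->QM free-energy correction,
    Delta A = -kT ln < exp(-(U_QM - U_MM)/kT) >_MM,
    evaluated over configurations sampled with the MM potential. Energies in kJ/mol."""
    kT = 0.0083144626 * temperature_K                 # kJ/mol
    dU = np.asarray(u_qm) - np.asarray(u_mm)
    # log-sum-exp trick for numerical stability of the exponential average
    x = -dU / kT
    xmax = np.max(x)
    return -kT * (xmax + np.log(np.mean(np.exp(x - xmax))))

# illustrative use with synthetic energy differences (kJ/mol)
rng = np.random.default_rng(0)
u_mm = rng.normal(0.0, 2.0, size=500)
u_qm = u_mm + rng.normal(1.5, 0.8, size=500)
print(f"MM->QM correction ~ {fep_mm_to_qm(u_mm, u_qm):.2f} kJ/mol")
```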
Shortwave radiation parameterization scheme for subgrid topography
NASA Astrophysics Data System (ADS)
Helbig, N.; LöWe, H.
2012-02-01
Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
NASA Astrophysics Data System (ADS)
Zhao, X. Y.; Haworth, D. C.; Ren, T.; Modest, M. F.
2013-04-01
A computational fluid dynamics model for high-temperature oxy-natural gas combustion is developed and exercised. The model features detailed gas-phase chemistry and radiation treatments (a photon Monte Carlo method with line-by-line spectral resolution for gas and wall radiation - PMC/LBL) and a transported probability density function (PDF) method to account for turbulent fluctuations in composition and temperature. The model is first validated for a 0.8 MW oxy-natural gas furnace, and the level of agreement between model and experiment is found to be at least as good as any that has been published earlier. Next, simulations are performed with systematic model variations to provide insight into the roles of individual physical processes and their interplay in high-temperature oxy-fuel combustion. This includes variations in the chemical mechanism and the radiation model, and comparisons of results obtained with versus without the PDF method to isolate and quantify the effects of turbulence-chemistry interactions and turbulence-radiation interactions. In this combustion environment, it is found to be important to account for the interconversion of CO and CO2, and radiation plays a dominant role. The PMC/LBL model allows the effects of molecular gas radiation and wall radiation to be clearly separated and quantified. Radiation and chemistry are tightly coupled through the temperature, and correct temperature prediction is required for correct prediction of the CO/CO2 ratio. Turbulence-chemistry interactions influence the computed flame structure and mean CO levels. Strong local effects of turbulence-radiation interactions are found in the flame, but the net influence of TRI on computed mean temperature and species profiles is small. The ultimate goal of this research is to simulate high-temperature oxy-coal combustion, where accurate treatments of chemistry, radiation and turbulence-chemistry-particle-radiation interactions will be even more important.
Working-memory capacity protects model-based learning from stress
Otto, A. Ross; Raio, Candace M.; Chiang, Alice; Phelps, Elizabeth A.; Daw, Nathaniel D.
2013-01-01
Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive–dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response—believed to have detrimental effects on prefrontal cortex function—should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that stress response attenuates the contribution of model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress. PMID:24324166
NASA Technical Reports Server (NTRS)
Basile, Lisa
1988-01-01
The SLDPF is responsible for the capture, quality monitoring, processing, accounting, and shipment of Spacelab and/or Attached Shuttle Payloads (ASP) telemetry data to various user facilities. Expert systems will aid in the performance of the quality assurance and data accounting functions of the two SLDPF functional elements: the Spacelab Input Processing System (SIPS) and the Spacelab Output Processing System (SOPS). Prototypes were developed for each as independent efforts. The SIPS Knowledge System Prototype (KSP) used the commercial shell OPS5+ on an IBM PC/AT; the SOPS Expert System Prototype used the expert system shell CLIPS implemented on a Macintosh personal computer. Both prototypes emulate the duties of the respective QA/DA analysts based upon analyst input and predetermined mission criteria parameters, and recommend instructions and decisions governing the reprocessing, release, or holding of data for further analysis. These prototypes demonstrated feasibility and high potential for operational systems. Increased productivity, decreased tedium, consistency, concise historical records, and a training tool for new analysts were the principal advantages. An operational configuration, taking advantage of the SLDPF network capabilities, is under development, with the expert systems being installed on SUN workstations. This new configuration, in conjunction with the potential of the expert systems, will enhance the efficiency, in both time and quality, of the SLDPF's release of Spacelab/ASP data products.
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 2 2014-10-01 2014-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
Oligomerization of G protein-coupled receptors: computational methods.
Selent, J; Kaczor, A A
2011-01-01
Recent research has unveiled the complexity of the mechanisms involved in G protein-coupled receptor (GPCR) functioning, in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role in predicting accurate GPCR complexes. This review outlines computational approaches, focusing on sequence- and structure-based methodologies, and discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not always yield consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method, especially when guided by experimental constraints. Some disadvantages, such as limited receptor flexibility and neglect of the membrane environment, have to be taken into account. Molecular dynamics simulation can overcome these drawbacks, giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes in fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug targets for diverse diseases, unveiling the molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.
Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.
Robinson, P J
1997-11-01
The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.
Innovations in an Accounting Information Systems Course.
ERIC Educational Resources Information Center
Shaoul, Jean
A new approach to teaching an introductory accounting information systems course is outlined and the potential of this approach for integrating computers into the accounting curriculum at Manchester University (England) is demonstrated. Specifically, the use of a small inventory recording system and database in an accounting information course is…
43 CFR 44.23 - How does the Department certify payment computations?
Code of Federal Regulations, 2010 CFR
2010-10-01
... Certified Public Accountant, or an independent public accountant, that the statement has been audited in... guidelines that State auditors, independent Certified Public Accountants, or independent public accountants... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false How does the Department certify payment...
Model Accounting Program. Adopters Guide.
ERIC Educational Resources Information Center
Beaverton School District 48, OR.
The accounting cluster demonstration project conducted at Aloha High School in the Beaverton, Oregon, school district developed a model curriculum for high school accounting. The curriculum is based on interviews with professionals in the accounting field and emphasizes the use of computers. It is suitable for use with special needs students as…
ERIC Educational Resources Information Center
Fox, Janna; Cheng, Liying
2015-01-01
In keeping with the trend to elicit multiple stakeholder responses to operational tests as part of test validation, this exploratory mixed methods study examines test-taker accounts of an Internet-based (i.e., computer-administered) test in the high-stakes context of proficiency testing for university admission. In 2013, as language testing…
NASA Astrophysics Data System (ADS)
Wang, Chenxi; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Baum, Bryan A.; Heidinger, Andrew K.; Liu, Xu
2013-02-01
A computationally efficient radiative transfer model (RTM) for calculating visible (VIS) through shortwave infrared (SWIR) reflectances is developed for use in satellite and airborne cloud property retrievals. The full radiative transfer equation (RTE) for combinations of cloud, aerosol, and molecular layers is solved approximately by using six independent RTEs that assume the plane-parallel approximation along with a single-scattering approximation for Rayleigh scattering. Each of the six RTEs can be solved analytically if the bidirectional reflectance/transmittance distribution functions (BRDF/BTDF) of the cloud/aerosol layers are known. The adding/doubling (AD) algorithm is employed to account for overlapped cloud/aerosol layers and non-Lambertian surfaces. Two approaches are used to mitigate the significant computational burden of the AD algorithm. First, the BRDF and BTDF of single cloud/aerosol layers are pre-computed using the discrete ordinates radiative transfer program (DISORT) implemented with 128 streams, and second, the required integral in the AD algorithm is numerically implemented on a twisted icosahedral mesh. A concise surface BRDF simulator associated with the MODIS land surface product (MCD43) is merged into a fast RTM to accurately account for non-isotropic surface reflectance. The resulting fast RTM is evaluated with respect to its computational accuracy and efficiency. The simulation bias between DISORT and the fast RTM is large (e.g., relative error >5%) only when both the solar zenith angle (SZA) and the viewing zenith angle (VZA) are large (i.e., SZA>45° and VZA>70°). For general situations, i.e., cloud/aerosol layers above a non-Lambertian surface, the fast RTM calculation rate is faster than that of the 128-stream DISORT by approximately two orders of magnitude.
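The adding/doubling step combines the reflection and transmission operators of two stacked layers by summing the series of inter-reflections between them. A scalar sketch of that idea is shown below; the algorithm in the paper works with angle-resolved BRDF/BTDF matrices on an icosahedral quadrature, so the symmetric, angle-free layers and the numeric values here are assumptions for illustration only.

```python
def add_layers(r1, t1, r2, t2):
    """Scalar adding equations for two plane-parallel layers: the geometric
    series of inter-reflections between the layers is summed analytically.
    Assumes symmetric layers and ignores angular dependence."""
    s = 1.0 / (1.0 - r1 * r2)      # sum of the inter-reflection series
    r = r1 + t1 * r2 * t1 * s      # combined reflectance viewed from above
    t = t1 * t2 * s                # combined transmittance
    return r, t

# usage: a reflective cloud layer above a thin aerosol layer (illustrative values)
r_combined, t_combined = add_layers(r1=0.45, t1=0.50, r2=0.10, t2=0.85)
print(r_combined, t_combined)
```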
Barone, Vincenzo; Biczysko, Malgorzata; Borkowska-Panek, Monika; Bloino, Julien
2014-10-20
The subtle interplay of several different effects means that the interpretation and analysis of experimental spectra in terms of structural and dynamic characteristics is a challenging task. In this context, theoretical studies can be helpful, and as such, computational spectroscopy is rapidly evolving from a highly specialized research field toward a versatile and widespread tool. However, in the case of electronic spectra (e.g. UV/Vis, circular dichroism, photoelectron, and X-ray spectra), the most commonly used methods still rely on the computation of vertical excitation energies, which are further convoluted to simulate line shapes. Such treatment completely neglects the influence of nuclear motions, despite the well-recognized notion that a proper account of vibronic effects is often mandatory to correctly interpret experimental findings. Development and validation of improved models rooted into density functional theory (DFT) and its time-dependent extension (TD-DFT) is of course instrumental for the optimal balance between reliability and favorable scaling with the number of electrons. However, the implementation of easy-to-use and effective procedures to simulate vibrationally resolved electronic spectra, and their availability to a wide community of users, is at least equally important for reliable simulations of spectral line shapes for compounds of biological and technological interest. Here, such an approach has been applied to the study of the UV/Vis spectra of chlorophyll a. The results show that properly tailored approaches are feasible for state-of-the-art computational spectroscopy studies, and allow, with affordable computational resources, vibrational and environmental effects on the spectral line shapes to be taken into account for large systems. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O
2015-08-25
Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.
The Effects of Computer Instruction on College Students' Reading Skills.
ERIC Educational Resources Information Center
Kuehner, Alison V.
1999-01-01
Reviews research concerning computer-based reading instruction for college students. Finds that most studies suggest that computers can provide motivating and efficient learning, but it is not clear whether the computer, or the instruction via computer, accounts for student gains. Notes many methodological flaws in the studies. Suggests…
Extended Glauert tip correction to include vortex rollup effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maniaci, David; Schmitz, Sven
Wind turbine loads predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. The results obtained serve as an exact correction function to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.
Care 3 phase 2 report, maintenance manual
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1982-01-01
CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that could be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.
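As a much-simplified illustration of why fault coverage matters in such models, the textbook duplex (one-spare) reliability formula with imperfect coverage can be written down directly. This is not CARE 3's Markov/semi-Markov machinery, and the failure rate, mission time, and coverage values below are assumed for illustration.

```python
import math

def duplex_reliability(lam, t, coverage):
    """Reliability of a two-unit duplex system with imperfect fault coverage c:
    either both units survive, or exactly one fails and the fault is
    detected/isolated/recovered with probability c. Units fail independently
    with constant rate lam (exponential lifetimes)."""
    r = math.exp(-lam * t)
    return r * r + 2.0 * coverage * r * (1.0 - r)

# coverage often matters far more than raw redundancy at high component reliability
for c in (0.90, 0.99, 0.999):
    print(c, duplex_reliability(lam=1e-4, t=10.0, coverage=c))
```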
Bogdanov, Nikolay A.; Bisogni, Valentina; Kraus, Roberto; ...
2016-11-21
In existing theoretical approaches to core-level excitations of transition-metal ions in solids relaxation and polarization effects due to the inner core hole are often ignored or described phenomenologically. Here, we set up an ab initio computational scheme that explicitly accounts for such physics in the calculation of x-ray absorption and resonant inelastic x-ray scattering spectra. Good agreement is found with experimental transition-metal L-edge data for the strongly correlated d^9 cuprate Li2CuO2, for which we also determine the absolute scattering intensities. The newly developed methodology opens the way for the investigation of even more complex d^n electronic structures of group VIB to VIIIB correlated oxide compounds.
A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.
Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh
2013-01-01
Digital foot scanners have been developed in recent years to provide anthropometrists with digital images of the insole together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.
Temperature dependence of electron impact ionization coefficient in bulk silicon
NASA Astrophysics Data System (ADS)
Ahmed, Mowfaq Jalil
2017-09-01
This work presents a modified procedure to compute the electron impact ionization coefficient of silicon for temperatures between 77 and 800 K and electric fields ranging from 70 to 400 kV/cm. The ionization coefficients are computed from the electron momentum distribution function obtained by solving the Boltzmann transport equation (BTE). The solution is obtained by combining a Legendre polynomial expansion with the BTE, and the resulting equations are solved by a difference-differential method using MATLAB®. Six (X) equivalent ellipsoidal, non-parabolic valleys of the silicon conduction band are taken into account. Concerning the scattering mechanisms, intravalley acoustic scattering, non-polar optical scattering, and impact ionization (II) scattering are taken into consideration. This investigation showed that the ionization coefficients decrease with increasing temperature. The overall results are in good agreement with previously reported experimental and theoretical data, predominantly at high electric fields.
A model for the accurate computation of the lateral scattering of protons in water
NASA Astrophysics Data System (ADS)
Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.
2016-02-01
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
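For orientation, the characteristic multiple-scattering angle of a proton traversing a thin water slab is often estimated with the Highland/PDG formula. This is a rough single-slab approximation, not the full Molière treatment with energy loss and nuclear tails used in the paper; the proton mass, water radiation length, and example values below are standard constants assumed for illustration.

```python
import math

M_P = 938.272      # proton rest energy [MeV]
X0_WATER = 36.08   # radiation length of water [cm]

def highland_theta0(kinetic_MeV, thickness_cm):
    """Highland/PDG estimate of the characteristic multiple-scattering angle
    (radians) of a proton after a thin water slab. Neglects energy loss along
    the path, Moliere tails, and nuclear interactions."""
    E = kinetic_MeV + M_P
    pc = math.sqrt(E * E - M_P * M_P)   # momentum times c [MeV]
    beta = pc / E
    x_over_X0 = thickness_cm / X0_WATER
    return (13.6 / (beta * pc)) * math.sqrt(x_over_X0) * (1.0 + 0.038 * math.log(x_over_X0))

# characteristic angle (radians) for a 150 MeV proton after 5 cm of water
print(highland_theta0(kinetic_MeV=150.0, thickness_cm=5.0))
```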
Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method
NASA Astrophysics Data System (ADS)
Klimczak, Marek; Cecot, Witold
2018-01-01
We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases efficiency of computation. In this paper details of the modified MsFEM are presented and a numerical test performed on a Fichera corner domain is presented in order to validate the proposed approach.
Jacobi spectral Galerkin method for elliptic Neumann problems
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.; Abd-Elhameed, W.
2009-01-01
This paper is concerned with fast spectral-Galerkin Jacobi algorithms for solving one- and two-dimensional elliptic equations with homogeneous and nonhomogeneous Neumann boundary conditions. The paper extends the algorithms proposed by Shen (SIAM J Sci Comput 15:1489-1505, 1994) and Auteri et al. (J Comput Phys 185:427-444, 2003), based on Legendre polynomials, to Jacobi polynomials with arbitrary α and β. The key to the efficiency of our algorithms is to construct appropriate basis functions with zero slope at the endpoints, which lead to systems with sparse matrices for the discrete variational formulations. The direct solution algorithm developed for the homogeneous Neumann problem in two-dimensions relies upon a tensor product process. Nonhomogeneous Neumann data are accounted for by means of a lifting. Numerical results indicating the high accuracy and effectiveness of these algorithms are presented.
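In the Legendre special case (alpha = beta = 0), one well-known choice of basis with zero slope at the endpoints is phi_k = P_k - k(k+1)/((k+2)(k+3)) P_{k+2}. The sketch below constructs these compact combinations and checks the Neumann condition numerically; the general Jacobi coefficients for arbitrary alpha and beta are derived in the paper and are not reproduced here.

```python
import numpy as np
from numpy.polynomial import legendre as L

def neumann_basis(k):
    """Compact combination phi_k = P_k - k(k+1)/((k+2)(k+3)) * P_{k+2} whose
    first derivative vanishes at x = +/-1 (Legendre case of the Jacobi
    construction described in the abstract)."""
    c = np.zeros(k + 3)
    c[k] = 1.0
    c[k + 2] = -k * (k + 1) / ((k + 2) * (k + 3))
    return c

for k in range(1, 5):
    dphi = L.legder(neumann_basis(k))         # derivative coefficients
    print(k, L.legval([-1.0, 1.0], dphi))     # ~ [0, 0] at both endpoints
```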
FORTRAN manpower account program
NASA Technical Reports Server (NTRS)
Strand, J. N.
1972-01-01
A computer program for determining manpower costs for full-time, part-time, and contractor personnel is discussed. Twelve different tables produced as computer output are described. The program is written in FORTRAN IV for the IBM 360/65 computer.
NASA Astrophysics Data System (ADS)
Nayak, Aditya B.; Price, James M.; Dai, Bin; Perkins, David; Chen, Ding Ding; Jones, Christopher M.
2015-06-01
Multivariate optical computing (MOC), an optical sensing technique for analog calculation, allows direct and robust measurement of chemical and physical properties of complex fluid samples in high-pressure/high-temperature (HP/HT) downhole environments. The core of this MOC technology is the integrated computational element (ICE), an optical element with a wavelength-dependent transmission spectrum designed to allow the detector to respond sensitively and specifically to the analytes of interest. A key differentiator of this technology is that it uses all of the information present in the broadband optical spectrum to determine the proportion of the analyte present in a complex fluid mixture. The detection methodology is photometric in nature; therefore, this technology does not require a spectrometer to measure and record a spectrum or a computer to perform calculations on the recorded optical spectrum. The integrated computational element is a thin-film optical element with a specific optical response function designed for each analyte. The optical response function is achieved by fabricating alternating layers of high-index (a-Si) and low-index (SiO2) thin films onto a transparent substrate (BK7 glass) using traditional thin-film manufacturing processes (e.g., ion-assisted e-beam vacuum deposition). Proprietary software and a proprietary process are used to control the thickness and material properties, including the optical constants of the materials during deposition, to achieve the desired optical response function. Ion-assisted deposition is useful for controlling the densification of the film, the stoichiometry, and the material optical constants, as well as for achieving high deposition growth rates and moisture-stable films. However, the ion source can induce undesirable absorption in the film and thereby modify the optical constants of the material during the ramp-up and stabilization periods of the e-gun and ion source, respectively. This paper characterizes the unwanted absorption in the a-Si thin film using advanced thin-film metrology methods, including spectroscopic ellipsometry and Fourier transform infrared (FTIR) spectroscopy. The resulting analysis identifies a fundamental mechanism contributing to this absorption and a method for minimizing and accounting for the unwanted absorption in the thin film so that the exact optical response function can be achieved.
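At its core, the photometric detection amounts to an optical dot product between the incoming spectrum and the ICE transmission function, with a simple calibration mapping the detector ratio to a concentration. The sketch below illustrates that computation on synthetic data; the wavelength range, the toy ICE curve, and the predicted_concentration calibration are hypothetical and are not taken from the paper or from any ICE design.

```python
import numpy as np

def moc_signal(wavelength_nm, intensity, ice_transmission):
    """Analog computation performed by an MOC detector: the broadband power
    reaching the photodetector is the spectrum weighted by the ICE
    transmission, i.e. an optical dot product between the measurement and a
    regression vector encoded in the thin-film stack."""
    return np.trapz(intensity * ice_transmission, wavelength_nm)

def predicted_concentration(signal, reference_signal, gain, offset):
    """Hypothetical single-point calibration mapping the normalized detector
    ratio to an analyte concentration (real ICE calibrations are more involved)."""
    return gain * (signal / reference_signal) + offset

# illustrative use with synthetic data
wl = np.linspace(1000.0, 2500.0, 600)                    # near-infrared band, nm
spectrum = np.exp(-0.5 * ((wl - 1700.0) / 120.0) ** 2)   # toy sample spectrum
ice = 0.5 + 0.4 * np.sin(wl / 90.0)                      # toy ICE transmission
print(moc_signal(wl, spectrum, ice))
```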
The adsorption of CH3 and C6H6 on corundum-type sesquioxides: The role of van der Waals interactions
NASA Astrophysics Data System (ADS)
Dabaghmanesh, Samira; Partoens, Bart; Neyts, Erik
Van der Waals (vdW) interactions play an important role in the adsorption of atoms and molecules on the surfaces of solids. This role becomes more significant whenever the interaction between the adsorbate and the surface is physisorption. Thanks to recent developments in density functional theory (DFT), we are now able to employ different vdW methods that help account for long-range vdW forces. However, the choice of the most efficient vdW functional for different materials is still an open question. In our study, we examine different vdW approaches to compute the bulk and molecular adsorption properties of M2O3 oxides (M: Cr, Fe, and Al) as well-known examples of the corundum family. For the bulk properties, we compare our results for the heat of formation, cohesive energy, lattice parameters, and bond distances, as obtained using the different vdW functionals, with the available experimental data. Next we compute the adsorption energies of the benzene molecule (as an example of physisorption) and CH3 (as an example of chemisorption) on top of the (0001) M-terminated and MO-terminated surfaces. Calculating the vdW contributions to the adsorption energies, we find that the vdW functionals play an important role not only in weak (physisorbed) adsorption but also in strong (chemisorbed) adsorption.
T-matrix approach to quark-gluon plasma
NASA Astrophysics Data System (ADS)
Liu, Shuai Y. F.; Rapp, Ralf
2018-03-01
A self-consistent thermodynamic T-matrix approach is deployed to study the microscopic properties of the quark-gluon plasma (QGP), encompassing both light- and heavy-parton degrees of freedom in a unified framework. The starting point is a relativistic effective Hamiltonian with a universal color force. The input in-medium potential is quantitatively constrained by computing the heavy-quark (HQ) free energy from the static T-matrix and fitting it to pertinent lattice-QCD (lQCD) data. The corresponding T-matrix is then applied to compute the equation of state (EoS) of the QGP in a two-particle irreducible formalism, including the full off-shell properties of the self-consistent single-parton spectral functions and their two-body interaction. In particular, the skeleton diagram functional is fully resummed to account for emerging bound and scattering states as the critical temperature is approached from above. We find that the solution satisfying three sets of lQCD data (EoS, HQ free energy, and quarkonium correlator ratios) is not unique. As limiting cases we discuss a weakly coupled solution, which features color potentials close to the free energy, relatively sharp quasiparticle spectral functions and weak hadronic resonances near Tc, and a strongly coupled solution with a strong color potential (much larger than the free energy), resulting in broad non-quasiparticle parton spectral functions and strong hadronic resonance states which dominate the EoS when approaching Tc.
The Transfer Function Model as a Tool to Study and Describe Space Weather Phenomena
NASA Technical Reports Server (NTRS)
Porter, Hayden S.; Mayr, Hans G.; Bhartia, P. K. (Technical Monitor)
2001-01-01
The Transfer Function Model (TFM) is a semi-analytical, linear model that is designed especially to describe thermospheric perturbations associated with magnetic storm and substorm activity. It is a multi-constituent model (N2, O, He, H, Ar) that accounts for wind-induced diffusion, which significantly affects not only the composition and mass density but also the temperature and wind fields. Because the TFM adopts a semi-analytic approach in which the geometry and temporal dependencies of the driving sources are removed through the use of height-integrated Green's functions, it provides physical insight into the essential properties of the processes being considered, uncluttered by the accidental complexities that arise from particular source geometries and time dependences. Extending from the ground to 700 km, the TFM eliminates spurious effects due to arbitrarily chosen boundary conditions. A database of transfer functions, computed only once, can be used to synthesize a wide range of spatial and temporal source dependencies. The response synthesis can be performed quickly in real time using only limited computing capabilities. These features make the TFM unique among global dynamical models. Given these desirable properties, a version of the TFM has been developed for personal computers (PCs) using advanced platform-independent 3D visualization capabilities. We demonstrate the model's capabilities with simulations for different auroral sources, including the response of ducted gravity-wave modes that propagate around the globe. The thermospheric response is found to depend strongly on the spatial and temporal frequency spectra of the storm. Such varied behavior is difficult to describe in statistical empirical models. To improve the capability of space weather prediction, the TFM thus could be grafted naturally onto existing statistical models using data assimilation.
Guise, Jennifer; Widdicombe, Sue; McKinlay, Andy
2007-01-01
ME (Myalgic Encephalomyelitis) or CFS (chronic fatigue syndrome) is a debilitating illness for which no cause or medical tests have been identified. Debates over its nature have generated interest from qualitative researchers. However, participants are difficult to recruit because of the nature of their condition. Therefore, this study explores the utility of the internet as a means of eliciting accounts. We analyse data from focus groups and the internet in order to ascertain the extent to which previous research findings apply to the internet domain. Interviews were conducted among 49 members of internet groups (38 chatline, 11 personal) and 7 members of two face-to-face support groups. Discourse analysis of descriptions and accounts of ME or CFS revealed similar devices and interactional concerns in both internet and face-to-face communication. Participants constructed their condition as serious, enigmatic and not psychological. These functioned to deflect problematic assumptions about ME or CFS and to manage their accountability for the illness and its effects.
Dissociating 'what' and 'how' in visual form agnosia: a computational investigation.
Vecera, S P
2002-01-01
Patients with visual form agnosia exhibit a profound impairment in shape perception (what an object is) coupled with intact visuomotor functions (how to act on an object), demonstrating a dissociation between visual perception and action. How can these patients act on objects that they cannot perceive? Although two explanations of this 'what-how' dissociation have been offered, each explanation has shortcomings. A 'pathway information' account of the 'what-how' dissociation is presented in this paper. This account hypothesizes that 'where' and 'how' tasks require less information than 'what' tasks, thereby allowing 'where/how' to remain relatively spared in the face of neurological damage. Simulations with a neural network model test the predictions of the pathway information account. Following damage to an input layer common to the 'what' and 'where/how' pathways, the model performs object identification more poorly than spatial localization. Thus, the model offers a parsimonious explanation of differential 'what-how' performance in visual form agnosia. The simulation results are discussed in terms of their implications for visual form agnosia and other neuropsychological syndromes.
Molecular robots with sensors and intelligence.
Hagiya, Masami; Konagaya, Akihiko; Kobayashi, Satoshi; Saito, Hirohide; Murata, Satoshi
2014-06-17
CONSPECTUS: What we can call a molecular robot is a set of molecular devices such as sensors, logic gates, and actuators integrated into a consistent system. The molecular robot is supposed to react autonomously to its environment by receiving molecular signals and making decisions by molecular computation. Building such a system has long been a dream of scientists; however, despite extensive efforts, systems having all three functions (sensing, computation, and actuation) have not been realized yet. This Account introduces an ongoing research project that focuses on the development of molecular robotics funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan). This 5 year project started in July 2012 and is titled "Development of Molecular Robots Equipped with Sensors and Intelligence". The major issues in the field of molecular robotics all correspond to a feedback (i.e., plan-do-see) cycle of a robotic system. More specifically, these issues are (1) developing molecular sensors capable of handling a wide array of signals, (2) developing amplification methods of signals to drive molecular computing devices, (3) accelerating molecular computing, (4) developing actuators that are controllable by molecular computers, and (5) providing bodies of molecular robots encapsulating the above molecular devices, which implement the conformational changes and locomotion of the robots. In this Account, the latest contributions to the project are reported. There are four research teams in the project that specialize on sensing, intelligence, amoeba-like actuation, and slime-like actuation, respectively. The molecular sensor team is focusing on the development of molecular sensors that can handle a variety of signals. This team is also investigating methods to amplify signals from the molecular sensors. The molecular intelligence team is developing molecular computers and is currently focusing on a new photochemical technology for accelerating DNA-based computations. They also introduce novel computational models behind various kinds of molecular computers necessary for designing such computers. The amoeba robot team aims at constructing amoeba-like robots. The team is trying to incorporate motor proteins, including kinesin and microtubules (MTs), for use as actuators implemented in a liposomal compartment as a robot body. They are also developing a methodology to link DNA-based computation and molecular motor control. The slime robot team focuses on the development of slime-like robots. The team is evaluating various gels, including DNA gel and BZ gel, for use as actuators, as well as the body material to disperse various molecular devices in it. They also try to control the gel actuators by DNA signals coming from molecular computers.
NASA Astrophysics Data System (ADS)
Tscherning, Carl Christian; Herceg, Matija
2014-05-01
The methods of Least-Squares Collocation (LSC) and the Reduced Point Mass (RPM) method both use radial basis functions for the representation of the anomalous gravity potential (T). LSC uses as many basis functions as the number of observations, while the RPM method uses as many as deemed necessary. Both methods have been evaluated, and for some tests compared, in two areas (Central Europe and the South-East Pacific). For both areas test data were generated using EGM2008. As observational data, (a) ground gravity disturbances, (b) airborne gravity disturbances, (c) GOCE-like second-order radial derivatives, and (d) GRACE along-track potential differences were available. The use of these data for the computation of values of (e) T in a grid was the target of the evaluation and comparison investigation. Because T can in principle only be computed using global data, the remove-restore procedure was used, with EGM2008 subtracted (and later added to T) up to degree 240 for datasets (a) and (b) and up to degree 36 for datasets (c) and (d). The estimated coefficient error was accounted for when using LSC and in the calculation of error estimates. The main result is that T was estimated with the errors (computed minus control data (e), from which EGM2008 to degree 240 or 36 had been subtracted) given in the following table (LSC used), in mGal:

Area Europe
                     (e)-240    (a)     (b)    (e)-36    (c)     (d)
Mean                   -0.0     0.0    -0.1     -0.1    -0.3    -1.8
Standard deviation      4.1     0.8     2.7     32.6     6.0    19.2
Max. difference        19.9    10.4    16.9     69.9    31.3    47.0
Min. difference       -16.2    -3.7   -15.5    -92.1   -27.8   -65.5

Area Pacific
                     (e)-240    (a)     (b)    (e)-36    (c)     (d)
Mean                   -0.1    -0.1    -0.1      4.6    -0.2     0.2
Standard deviation      4.8     0.2     1.9     49.1     6.7    18.6
Max. difference        22.2     1.8    13.4    115.5    26.9    26.5
Min. difference       -28.7    -3.1   -15.7   -106.4   -33.6    22.1

The results using RPM with datasets (a), (b), and (c) were comparable. The use of (d) with the RPM method is being implemented. Tests were also done computing dataset (a) from the other datasets. The results here may serve as a benchmark for other radial basis-function implementations for computing approximations to T. Improvements are certainly possible, e.g., by taking the topography and bathymetry into account.
Globus File Transfer Services | High-Performance Computing | NREL
To get a Globus account, sign up on the Globus account website. To get access to the NREL endpoint, use the Globus credentials you used to register your Globus.org account and go to the Transfer Files page. To use the CLI, you must have a Globus account with ssh access allowed on your Globus.org account.
An integrative approach for measuring semantic similarities using gene ontology.
Peng, Jiajie; Li, Hongxiang; Jiang, Qinghua; Wang, Yadong; Chen, Jin
2014-01-01
Gene Ontology (GO) provides rich information and a convenient way to study gene functional similarity, which has been successfully used in various applications. However, the existing GO-based similarity measures are limited because each considers only a subset of the information in GO. An appropriate integration of the existing measures that takes more of the information in GO into account is therefore needed. We propose a novel integrative measure called InteGO2 to automatically select appropriate seed measures and then to integrate them using a metaheuristic search method. The experiment results show that InteGO2 significantly improves the performance of gene similarity measurement in human, Arabidopsis, and yeast on both the molecular function and biological process GO categories. InteGO2 computes gene-to-gene similarities more accurately than the tested existing measures and has high robustness. The supplementary document and software are available at http://mlg.hit.edu.cn:8082/.
New technologies accelerate the exploration of non-coding RNAs in horticultural plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Degao; Mewalal, Ritesh; Hu, Rongbin
Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs.
Emergence of binocular functional properties in a monocular neural circuit
Ramdya, Pavan; Engert, Florian
2010-01-01
Sensory circuits frequently integrate converging inputs while maintaining precise functional relationships between them. For example, in mammals with stereopsis, neurons at the first stages of binocular visual processing show a close alignment of receptive-field properties for each eye. Still, basic questions about the global wiring mechanisms that enable this functional alignment remain unanswered, including whether the addition of a second retinal input to an otherwise monocular neural circuit is sufficient for the emergence of these binocular properties. We addressed this question by inducing a de novo binocular retinal projection to the larval zebrafish optic tectum and examining recipient neuronal populations using in vivo two-photon calcium imaging. Notably, neurons in rewired tecta were predominantly binocular and showed matching direction selectivity for each eye. We found that a model based on local inhibitory circuitry that computes direction selectivity using the topographic structure of both retinal inputs can account for the emergence of this binocular feature. PMID:19160507
New technologies accelerate the exploration of non-coding RNAs in horticultural plants
Liu, Degao; Mewalal, Ritesh; Hu, Rongbin; Tuskan, Gerald A; Yang, Xiaohan
2017-01-01
Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs. PMID:28698797
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks
Wang, Ping; Zhang, Lin; Li, Victor O. K.
2013-01-01
Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Lee-Rausch, E. M.
2012-01-01
Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.
A stratified acoustic model accounting for phase shifts for underwater acoustic networks.
Wang, Ping; Zhang, Lin; Li, Victor O K
2013-05-13
Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated.
Empirical assessment of debris flow risk on a regional scale in Yunnan province, southwestern China.
Liu, Xilin; Yue, Zhong Qi; Tham, Leslie George; Lee, Chack Fan
2002-08-01
Adopting the definition suggested by the United Nations, a risk model for regional debris flow assessment is presented. Risk is defined as the product of hazard and vulnerability, both of which are necessary for the evaluation. A Multiple-Factor Composite Assessment Model is developed for quantifying regional debris flow hazard by taking into account eight variables that contribute to debris flow magnitude and its frequency of occurrence. Vulnerability is a measure of the potential total losses. On a regional scale, it can be measured by fixed assets, gross domestic product, land resources, and population density, as well as the age, education, and wealth of the inhabitants. A nonlinear power-function assessment model that accounts for these indexes is developed. As a case study, the model is applied to compute the hazard, vulnerability, and risk for each prefecture of the Yunnan province in southwestern China.
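The structure of the risk model described above (risk as the product of a composite hazard and a power-function vulnerability) can be sketched as follows. The weights, exponents, and example values are hypothetical; the paper's eight hazard variables and calibrated coefficients are not reproduced here.

```python
import numpy as np

def hazard_score(factors, weights):
    """Composite hazard from normalized factors in [0, 1], weighted-averaged to [0, 1]."""
    factors, weights = np.asarray(factors, float), np.asarray(weights, float)
    return float(np.dot(factors, weights) / weights.sum())

def vulnerability_score(indexes, exponents):
    """Nonlinear power-function combination of normalized vulnerability indexes in [0, 1]."""
    indexes, exponents = np.asarray(indexes, float), np.asarray(exponents, float)
    return float(np.prod(indexes ** exponents))

def debris_flow_risk(hazard, vulnerability):
    """UN-style definition used in the paper: risk = hazard x vulnerability."""
    return hazard * vulnerability

# Hypothetical prefecture: eight normalized hazard factors and four vulnerability indexes
h = hazard_score([0.7, 0.4, 0.9, 0.3, 0.5, 0.6, 0.2, 0.8], weights=np.ones(8))
v = vulnerability_score([0.6, 0.5, 0.7, 0.4], exponents=[0.4, 0.2, 0.3, 0.1])
risk = debris_flow_risk(h, v)
```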
Water's hydrogen bonds in the hydrophobic effect: a simple model.
Xu, Huafeng; Dill, Ken A
2005-12-15
We propose a simple analytical model to account for water's hydrogen bonds in the hydrophobic effect. It is based on computing a mean-field partition function for a water molecule in the first solvation shell around a solute molecule. The model treats the orientational restrictions from hydrogen bonding, and utilizes quantities that can be obtained from bulk water simulations. We illustrate the principles in a 2-dimensional Mercedes-Benz-like model. Our model gives good predictions for the heat capacity of hydrophobic solvation, reproduces the solvation energies and entropies at different temperatures with only one fitting parameter, and accounts for the solute size dependence of the hydrophobic effect. Our model supports the view that water's hydrogen bonding propensity determines the temperature dependence of the hydrophobic effect. It explains the puzzling experimental observation that dissolving a nonpolar solute in hot water has positive entropy.
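As a rough illustration of the mean-field construction described above, the toy sketch below evaluates a partition function for a single first-shell water molecule over discretized orientations and obtains the solvation free energy and entropy by numerical differentiation. The orientational energy function and all parameters are invented for illustration; they are not the Mercedes-Benz parameters used in the paper.

```python
import numpy as np

K_B = 0.0019872   # Boltzmann constant, kcal/(mol K)

def shell_partition_function(energy_of_angle, temperature, n_angles=720):
    """Orientational average of Boltzmann weights for one first-shell water molecule."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    return np.exp(-energy_of_angle(theta) / (K_B * temperature)).mean()

def solvation_free_energy(energy_of_angle, temperature):
    return -K_B * temperature * np.log(shell_partition_function(energy_of_angle, temperature))

def solvation_entropy(energy_of_angle, temperature, dT=0.1):
    """S = -dF/dT, estimated by a central finite difference."""
    f_hi = solvation_free_energy(energy_of_angle, temperature + dT)
    f_lo = solvation_free_energy(energy_of_angle, temperature - dT)
    return -(f_hi - f_lo) / (2.0 * dT)

# Toy energy: a hydrogen bond (-2 kcal/mol) survives only in a narrow angular window
toy_energy = lambda theta: np.where(np.cos(theta) > 0.9, -2.0, 0.0)
print(solvation_free_energy(toy_energy, 300.0), solvation_entropy(toy_energy, 300.0))
```

The negative entropy produced by this toy model reflects the orientational restriction of first-shell waters, which is the qualitative point of the original analysis.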
Thermostability in rubredoxin and its relationship to mechanical rigidity
NASA Astrophysics Data System (ADS)
Rader, A. J.
2010-03-01
The source of increased stability in proteins from organisms that thrive in extreme thermal environments is not well understood. Previous experimental and theoretical studies have suggested many different features possibly responsible for such thermostability. Many of these thermostabilizing mechanisms can be accounted for in terms of structural rigidity. Thus a plausible hypothesis accounting for this remarkable stability in thermophilic enzymes states that these enzymes have enhanced conformational rigidity at temperatures below their native, functioning temperature. Experimental evidence exists to both support and contradict this supposition. We computationally investigate the relationship between thermostability and rigidity using rubredoxin as a case study. The mechanical rigidity is calculated using atomic models of homologous rubredoxin structures from the hyperthermophile Pyrococcus furiosus and mesophile Clostridium pasteurianum using the FIRST software. A global increase in structural rigidity (equivalently a decrease in flexibility) corresponds to an increase in thermostability. Locally, rigidity differences (between mesophilic and thermophilic structures) agree with differences in protection factors.
The Ventral Anterior Temporal Lobe has a Necessary Role in Exception Word Reading.
Ueno, Taiji; Meteyard, Lotte; Hoffman, Paul; Murayama, Kou
2018-06-06
An influential account of reading holds that words with exceptional spelling-to-sound correspondences (e.g., PINT) are read via activation of their lexical-semantic representations, supported by the anterior temporal lobe (ATL). This account has been inconclusive because it is based on neuropsychological evidence, in which lesion-deficit relationships are difficult to localize precisely, and functional neuroimaging data, which is spatially precise but cannot demonstrate whether the ATL activity is necessary for exception word reading. To address these issues, we used a technique with good spatial specificity-repetitive transcranial magnetic stimulation (rTMS)-to demonstrate a necessary role of ATL in exception word reading. Following rTMS to left ventral ATL, healthy Japanese adults made more regularization errors in reading Japanese exception words. We successfully simulated these results in a computational model in which exception word reading was underpinned by semantic activations. The ATL is critically and selectively involved in reading exception words.
Integrating Computers into the Accounting Curriculum Using an IBM PC Network. Final Report.
ERIC Educational Resources Information Center
Shaoul, Jean
Noting the increased use of microcomputers in commerce and the accounting profession, the Department of Accounting and Finance at the University of Manchester recognized the importance of integrating microcomputers into the accounting curriculum and requested and received a grant to develop an integrated study environment in which students would…
26 CFR 1.1502-17 - Methods of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
26 CFR § 1.1502-17 Methods of accounting. Income Taxes (Continued), Computation of Separate Taxable Income. (a) General rule. The method of accounting to be used by each member of the group shall be determined in...
Distributed information system (water fact sheet)
Harbaugh, A.W.
1986-01-01
During 1982-85, the Water Resources Division (WRD) of the U.S. Geological Survey (USGS) installed over 70 large minicomputers in offices across the country to support its mission in the science of hydrology. These computers are connected by a communications network that allows information to be shared among computers in each office. The computers and network together are known as the Distributed Information System (DIS). The computers are accessed through the use of more than 1500 terminals and minicomputers. The WRD has three fundamentally different needs for computing: data management; hydrologic analysis; and administration. Data management accounts for 50% of the computational workload of WRD because hydrologic data are collected in all 50 states, Puerto Rico, and the Pacific trust territories. Hydrologic analysis accounts for 40% of the computational workload of WRD. Cost accounting, payroll, personnel records, and planning for WRD programs occupy an estimated 10% of the computer workload. The DIS communications network is shown on a map. (Lantz-PTT)
ERIC Educational Resources Information Center
East Texas State Univ., Commerce. Occupational Curriculum Lab.
Sixteen units on office occupations are presented in this teacher's guide. The unit topics include the following: related information (e.g., preparing for job interview); accounting and computing (e.g., preparing a payroll and a balance sheet); information communications (e.g., handling appointments, composing correspondence); and stenographic,…
Nonlinear finite amplitude vibrations of sharp-edged beams in viscous fluids
NASA Astrophysics Data System (ADS)
Aureli, M.; Basaran, M. E.; Porfiri, M.
2012-03-01
In this paper, we study flexural vibrations of a cantilever beam with thin rectangular cross section submerged in a quiescent viscous fluid and undergoing oscillations whose amplitude is comparable with its width. The structure is modeled using Euler-Bernoulli beam theory and the distributed hydrodynamic loading is described by a single complex-valued hydrodynamic function which accounts for added mass and fluid damping experienced by the structure. We perform a parametric 2D computational fluid dynamics analysis of an oscillating rigid lamina, representative of a generic beam cross section, to understand the dependence of the hydrodynamic function on the governing flow parameters. We find that increasing the frequency and amplitude of the vibration elicits vortex shedding and convection phenomena which are, in turn, responsible for nonlinear hydrodynamic damping. We establish a manageable nonlinear correction to the classical hydrodynamic function developed for small amplitude vibration and we derive a computationally efficient reduced order modal model for the beam nonlinear oscillations. Numerical and theoretical results are validated by comparison with ad hoc designed experiments on tapered beams and multimodal vibrations and with data available in the literature. Findings from this work are expected to find applications in the design of slender structures of interest in marine applications, such as biomimetic propulsion systems and energy harvesting devices.
Direct SSH Gateway Access to Peregrine | High Performance Computing |
To access peregrine-ssh.nrel.gov, you must have an active NREL HPC user account (see User Accounts) and an OTP token (see One Time Password Tokens). Log in to peregrine-ssh.nrel.gov with your HPC account.
Kuhn, Gerhard; Krammes, Gary S.; Beal, Vivian J.
2007-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs Utilities, the Colorado Water Conservation Board, and the El Paso County Water Authority, began a study in 2004 with the following objectives: (1) Apply a stream-aquifer model to Monument Creek, (2) use the results of the modeling to develop a transit-loss accounting program for Monument Creek, (3) revise an existing accounting program for Fountain Creek to easily incorporate ongoing and future changes in management of return flows of reusable water, and (4) integrate the two accounting programs into a single program and develop a Web-based interface to the integrated program that incorporates simple and reliable data entry that is automated to the fullest extent possible. This report describes the results of completing objectives (2), (3), and (4) of that study. The accounting program for Monument Creek was developed first by (1) using the existing accounting program for Fountain Creek as a prototype, (2) incorporating the transit-loss results from a stream-aquifer modeling analysis of Monument Creek, and (3) developing new output reports. The capabilities of the existing accounting program for Fountain Creek then were incorporated into the program for Monument Creek and the output reports were expanded to include Fountain Creek. A Web-based interface to the new transit-loss accounting program then was developed that provided automated data entry. An integrated system of 34 nodes and 33 subreaches was created by combining the independent node and subreach systems used in the previously completed stream-aquifer modeling studies for the Monument and Fountain Creek reaches. Important operational criteria that were implemented in the new transit-loss accounting program for Monument and Fountain Creeks included the following: (1) Retain all the reusable water-management capabilities incorporated into the existing accounting program for Fountain Creek; (2) enable daily accounting and transit-loss computations for a variable number of reusable return flows discharged into Monument Creek at selected locations; (3) enable diversion of all or a part of a reusable return flow at any selected node for purposes of storage in off-stream reservoirs or other similar types of reusable water management; and (4) provide flexibility in the accounting program to change the number of return-flow entities, the locations at which the return flows discharge into Monument or Fountain Creeks, or the locations to which the return flows are delivered. The primary component of the Web-based interface is a data-entry form that displays data stored in the accounting program input file; the data-entry form allows for entry and modification of new data, which then is rewritten to the input file. When the data-entry form is displayed, up-to-date discharge data for each station are automatically computed and entered on the data-entry form. Data for native return flows, reusable return flows, reusable return flow diversions, and native diversions also are entered automatically or manually, if needed. In computing the estimated quantities of reusable return flow and the associated transit losses, the accounting program uses two sets of computations. The first set of computations is made between any two adjacent streamflow-gaging stations (termed the 'stream-segment loop'); the primary purpose of the stream-segment loop is to estimate the loss or gain in native discharge between the two adjacent streamflow-gaging stations.
The second set of computations is made between any two adjacent nodes (termed the 'subreach loop'); the actual transit-loss computations are made in the subreach loop, using the result from the stream-segment loop. The stream-segment loop is completed for a stream segment, and then the subreach loop is completed for each subreach within the segment. When the subreach loop is completed for all subreaches within a stream segment, the stream-segment loop is initiated for the next stream segment.
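The two nested sets of computations described above can be pictured with a short, schematic routing loop. This is not the USGS program: node layout, loss fractions, and delivery rules are placeholders, and the native gain/loss estimated in the real stream-segment loop is only mentioned in a comment.

```python
def route_reusable_flows(segments, return_flows):
    """Outer 'stream-segment loop' over gage-to-gage segments, inner 'subreach loop' over
    nodes, applying a transit-loss fraction to each reusable return flow still in the stream."""
    active = dict(return_flows)     # remaining reusable flow at the current node (cfs)
    delivered = {}
    for segment in segments:                        # stream-segment loop
        # (The real program first estimates the native gain or loss between the two gaging
        #  stations here and feeds it into the subreach computations.)
        for subreach in segment["subreaches"]:      # subreach loop
            for name in list(active):
                active[name] *= 1.0 - subreach["loss_fraction"]
                if name in subreach.get("deliveries", ()):
                    delivered[name] = active.pop(name)   # flow is taken out at this node
    return active, delivered

# Hypothetical example: two return flows routed through one segment with two subreaches
segments = [{"subreaches": [{"loss_fraction": 0.02},
                            {"loss_fraction": 0.01, "deliveries": ("utility_A",)}]}]
remaining, taken = route_reusable_flows(segments, {"utility_A": 5.0, "utility_B": 2.0})
```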
Time-dependence of graph theory metrics in functional connectivity analysis
Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J.; Haneef, Zulfi; Stern, John M.
2016-01-01
Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. PMID:26518632
Time-dependence of graph theory metrics in functional connectivity analysis.
Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J; Haneef, Zulfi; Stern, John M
2016-01-15
Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. Copyright © 2015 Elsevier Inc. All rights reserved.
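A minimal sliding-window sketch (using numpy and networkx, with hypothetical window, step, and edge-density settings) illustrates the kind of time-resolved graph metrics whose stationarity the two records above examine; it does not implement their Bayesian hidden Markov model or the proposed S- and N-indexes.

```python
import numpy as np
import networkx as nx

def windowed_graph_metrics(roi_timeseries, window=60, step=10, density=0.15):
    """Sliding-window correlation networks and simple graph metrics.

    roi_timeseries: array of shape (n_timepoints, n_rois), e.g., parcel-averaged fMRI signals.
    """
    n_t, n_rois = roi_timeseries.shape
    n_edges = max(1, int(density * n_rois * (n_rois - 1) / 2))
    iu = np.triu_indices(n_rois, k=1)
    results = []
    for start in range(0, n_t - window + 1, step):
        corr = np.corrcoef(roi_timeseries[start:start + window].T)
        np.fill_diagonal(corr, 0.0)
        threshold = np.sort(np.abs(corr[iu]))[-n_edges]     # keep a fixed edge density
        g = nx.from_numpy_array((np.abs(corr) >= threshold).astype(int))
        results.append({
            "clustering": nx.average_clustering(g),
            "global_efficiency": nx.global_efficiency(g),
            "mean_betweenness": float(np.mean(list(nx.betweenness_centrality(g).values()))),
        })
    return results
```

The variability of these per-window metrics across the scan is, informally, what the stationarity indexes proposed in the paper are designed to quantify.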
Computational Complexity and Human Decision-Making.
Bossaerts, Peter; Murawski, Carsten
2017-12-01
The rationality principle postulates that decision-makers always choose the best action available to them. It underlies most modern theories of decision-making. The principle does not take into account the difficulty of finding the best option. Here, we propose that computational complexity theory (CCT) provides a framework for defining and quantifying the difficulty of decisions. We review evidence showing that human decision-making is affected by computational complexity. Building on this evidence, we argue that most models of decision-making, and metacognition, are intractable from a computational perspective. To be plausible, future theories of decision-making will need to take into account both the resources required for implementing the computations implied by the theory, and the resource constraints imposed on the decision-maker by biology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computational aeroelasticity using a pressure-based solver
NASA Astrophysics Data System (ADS)
Kamakoti, Ramji
A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well-validated k-ε turbulence model with wall function treatment for the near-wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economic for unsteady flow computations. Wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer necessary information between fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experiment and previous numerical results. The computational methodology exhibited capabilities to predict both qualitative and quantitative features of aeroelasticity.
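The structural side of the coupling described above uses Newmark time integration; a generic implicit Newmark-beta stepper for M x'' + C x' + K x = F(t) is sketched below (average-acceleration parameters by default). It is a standard textbook routine, not the dissertation's coupled fluid-structure solver, and the matrices and load history are left to the caller.

```python
import numpy as np

def newmark_beta(M, C, K, F, x0, v0, dt, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta integration of M x'' + C x' + K x = F(t).

    F has shape (n_steps, n_dof); returns the displacement history (n_steps, n_dof).
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration scheme.
    """
    n_steps = F.shape[0]
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    a = np.linalg.solve(M, F[0] - C @ v - K @ x)          # consistent initial acceleration
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [x.copy()]
    for i in range(1, n_steps):
        rhs = (F[i]
               + M @ (x / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * x + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        x_new = np.linalg.solve(K_eff, rhs)
        a_new = (x_new - x) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
        history.append(x.copy())
    return np.array(history)
```

In a coupled aeroelastic computation the load vector F at each step would come from the fluid solver, and the resulting displacements would be fed back through the moving-grid module.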
On the Application of Euler Deconvolution to the Analytic Signal
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.; Pasteka, R.
2005-05-01
In recent years, papers on Euler deconvolution (ED) have used formulations that account for the unknown background field, allowing the structural index (N) to be treated as an unknown to be solved for together with the source coordinates. Among them, Hsu (2002) and Fedi and Florio (2002) independently pointed out that the use of an adequate m-order derivative of the field, instead of the field itself, allows solving for both N and the source position. For the same reason, Keating and Pilkington (2004) proposed the ED of the analytic signal. A function to be analyzed by ED must be homogeneous but also harmonic, because it must be possible to compute its vertical derivative, as is well known from potential field theory. Huang et al. (1995) demonstrated that the analytic signal is a homogeneous function, but, for instance, it is rather obvious that the magnetic field modulus (corresponding to the analytic signal of a gravity field) is not a harmonic function (e.g., Grant & West, 1965). Thus, a straightforward application of ED to the analytic signal is not possible, because a vertical derivation of this function is not correct when using standard potential field analysis tools. In this note we theoretically and empirically check what kind of errors are caused in ED by this incorrect assumption about the harmonicity of the analytic signal. We discuss results on profile and map synthetic data, and use a simple method to compute the vertical derivative of non-harmonic functions measured on a horizontal plane. Our main conclusions are: 1. To approximate a correct evaluation of the vertical derivative of a non-harmonic function, it is useful to compute it with finite differences, using upward continuation. 2. The errors on the vertical derivative computed as if the analytic signal were harmonic affect mainly the structural index estimate; these errors can mislead an interpretation even though the depth estimates are almost correct. 3. Consistent estimates of depth and S.I. are instead obtained by using a finite-difference vertical derivative of the analytic signal. 4. Analysis of a case history confirms the large error in the estimation of the structural index if the analytic signal is treated as a harmonic function.
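Conclusion 1 above (a finite-difference vertical derivative obtained through upward continuation) can be sketched for gridded data with a standard FFT continuation operator. Grid spacing, continuation height, and the sign convention (vertical derivative taken positive downward here) are assumptions of this illustration.

```python
import numpy as np

def upward_continue(field, dx, dy, dz):
    """Upward-continue a field gridded on a horizontal plane by dz (same units as dx, dy)."""
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * dz)))

def vertical_derivative_fd(field, dx, dy, dz):
    """Finite-difference vertical derivative: (f at z0 minus f continued dz upward) / dz."""
    return (field - upward_continue(field, dx, dy, dz)) / dz
```

Because the finite difference only requires values of the function at two levels, it sidesteps the harmonicity assumption built into spectral vertical-derivative operators, which is the point of the note.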
Monte Carlo calculations of lunar regolith thickness distributions.
NASA Technical Reports Server (NTRS)
Oberbeck, V. R.; Quaide, W. L.; Mahan, M.; Paulson, J.
1973-01-01
It is pointed out that none of the existing models of lunar regolith evolution take into account the relationship between regolith thickness, crater shape, and volume of debris ejected. The results of a Monte Carlo computer simulation of regolith evolution are presented. The simulation was designed to consider the full effect of the buffering regolith through calculation of the amount of debris produced by any given crater as a function of the amount of debris present at the site of the crater at the time of crater formation. The method is essentially an improved version of the Oberbeck and Quaide (1968) model.
Knowledge-based nursing diagnosis
NASA Astrophysics Data System (ADS)
Roy, Claudette; Hay, D. Robert
1991-03-01
Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnosis under the time constraints of modern nursing can benefit from computer assistance. A knowledge-based engineering approach was developed to address these problems. A number of problems extending beyond the capture of knowledge were addressed during system design to make the system practical. The issues involved in implementing a professional knowledge base in a clinical setting are discussed. System functions, structure, interfaces, the health care environment, and terminology and taxonomy are discussed. An integrated system concept from assessment through intervention and evaluation is outlined.
Standard model light-by-light scattering in SANC: Analytic and numeric evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardin, D. Yu., E-mail: bardin@nu.jinr.ru; Kalinovskaya, L. V., E-mail: kalinov@nu.jinr.ru; Uglov, E. D., E-mail: corner@nu.jinr.r
2010-11-15
The implementation of the Standard Model process γγ → γγ through a fermion and boson loop into the framework of the SANC system and additional precomputation modules used for calculation of massive box diagrams are described. The computation of this process takes into account the nonzero mass of loop particles. The covariant and helicity amplitudes for this process, some particular cases of the D0 and C0 Passarino-Veltman functions, and also numerical results of the corresponding SANC module evaluation are presented. Whenever possible, the results are compared with those existing in the literature.
Spin and orbital exchange interactions from Dynamical Mean Field Theory
NASA Astrophysics Data System (ADS)
Secchi, A.; Lichtenstein, A. I.; Katsnelson, M. I.
2016-02-01
We derive a set of equations expressing the parameters of the magnetic interactions characterizing a strongly correlated electronic system in terms of single-electron Green's functions and self-energies. This allows us to establish a mapping between the initial electronic system and a spin model including up to quadratic interactions between the effective spins, with a general interaction (exchange) tensor that accounts for anisotropic exchange, the Dzyaloshinskii-Moriya interaction, and other symmetric terms such as the dipole-dipole interaction. We present the formulas in a format that can be used for computations via Dynamical Mean Field Theory algorithms.
NASA Technical Reports Server (NTRS)
Tang, C. C. H.
1984-01-01
A preliminary study of an all-sky coverage of the EUVE mission is given. Algorithms are provided to compute the exposure of the celestial sphere under the spinning telescopes, taking into account that during part of the exposure time the telescopes are blocked by the earth. The algorithms are used to give an estimate of exposure time at different ecliptic latitudes as a function of the angle of field of view of the telescope. Sample coverage patterns are also given for a 6-month mission.
Statistical Mechanics Provides Novel Insights into Microtubule Stability and Mechanism of Shrinkage
Jain, Ishutesh; Inamdar, Mandar M.; Padinhateeri, Ranjith
2015-01-01
Microtubules are nano-machines that grow and shrink stochastically, making use of the coupling between chemical kinetics and mechanics of its constituent protofilaments (PFs). We investigate the stability and shrinkage of microtubules taking into account inter-protofilament interactions and bending interactions of intrinsically curved PFs. Computing the free energy as a function of PF tip position, we show that the competition between curvature energy, inter-PF interaction energy and entropy leads to a rich landscape with a series of minima that repeat over a length-scale determined by the intrinsic curvature. Computing Langevin dynamics of the tip through the landscape and accounting for depolymerization, we calculate the average unzippering and shrinkage velocities of GDP protofilaments and compare them with the experimentally known results. Our analysis predicts that the strength of the inter-PF interaction (Ems) has to be comparable to the strength of the curvature energy (Emb) such that Ems−Emb≈1kBT, and questions the prevalent notion that unzippering results from the domination of bending energy of curved GDP PFs. Our work demonstrates how the shape of the free energy landscape is crucial in explaining the mechanism of MT shrinkage where the unzippered PFs will fluctuate in a set of partially peeled off states and subunit dissociation will reduce the length. PMID:25692909
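The Langevin dynamics of the protofilament tip on a free-energy landscape with periodic minima, mentioned above, can be caricatured with a one-dimensional overdamped (Euler-Maruyama) integrator. The tilted-cosine landscape and all parameter values below are illustrative stand-ins, not the computed free-energy profile or fitted interaction strengths from the paper.

```python
import numpy as np

def langevin_trajectory(dF_dx, x0=0.0, n_steps=200_000, dt=1e-4, kT=1.0, gamma=1.0, seed=0):
    """Overdamped Langevin dynamics: gamma dx = -dF/dx dt + sqrt(2 gamma kT) dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    noise = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - dF_dx(x[i - 1]) * dt / gamma + noise * rng.normal()
    return x

# Toy landscape: minima repeating with the period set by the intrinsic curvature, on a tilt
period, barrier, tilt = 1.0, 3.0, -1.5     # arbitrary units
dF_dx = lambda x: -barrier * 2.0 * np.pi / period * np.sin(2.0 * np.pi * x / period) + tilt

dt = 1e-4
traj = langevin_trajectory(dF_dx, dt=dt)
mean_drift_velocity = (traj[-1] - traj[0]) / (len(traj) * dt)   # analogue of an unzippering speed
```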
Gene coexpression measures in large heterogeneous samples using count statistics.
Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan
2014-11-18
With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
GTOOLS: an Interactive Computer Program to Process Gravity Data for High-Resolution Applications
NASA Astrophysics Data System (ADS)
Battaglia, M.; Poland, M. P.; Kauahikaua, J. P.
2012-12-01
An interactive computer program, GTOOLS, has been developed to process gravity data acquired by the Scintrex CG-5 and LaCoste & Romberg EG, G and D gravity meters. The aim of GTOOLS is to provide a validated methodology for computing relative gravity values in a consistent way accounting for as many environmental factors as possible (e.g., tides, ocean loading, solar constraints, etc.), as well as instrument drift. The program has a modular architecture. Each processing step is implemented in a tool (function) that can be either run independently or within an automated task. The tools allow the user to (a) read the gravity data acquired during field surveys completed using different types of gravity meters; (b) compute Earth tides using an improved version of Longman's (1959) model; (c) compute ocean loading using the HARDISP code by Petit and Luzum (2010) and ocean loading harmonics from the TPXO7.2 ocean tide model; (d) estimate the instrument drift using linear functions as appropriate; and (e) compute the weighted least-square-adjusted gravity values and their errors. The corrections are performed up to microGal ( μGal) precision, in accordance with the specifications of high-resolution surveys. The program has the ability to incorporate calibration factors that allow for surveys done using different gravimeters to be compared. Two additional tools (functions) allow the user to (1) estimate the instrument calibration factor by processing data collected by a gravimeter on a calibration range; (2) plot gravity time-series at a chosen benchmark. The interactive procedures and the program output (jpeg plots and text files) have been designed to ease data handling and archiving, to provide useful information for future data interpretation or modeling, and facilitate comparison of gravity surveys conducted at different times. All formulas have been checked for typographical errors in the original reference. GTOOLS, developed using Matlab, is open source and machine independent. We will demonstrate program use and utility with data from multiple microgravity surveys at Kilauea volcano, Hawai'i.
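One of the simpler processing steps described above, removing a linear instrument drift estimated from repeated base-station occupations, can be illustrated in a few lines (here in Python rather than Matlab, and without the tide, ocean-loading, and weighted network-adjustment steps the program also performs). Variable names and units are assumptions of the sketch.

```python
import numpy as np

def remove_linear_drift(times_hr, readings_mgal, is_base_station):
    """Fit a linear drift to repeated base-station readings and subtract it from all readings.

    times_hr: occupation times in hours; readings_mgal: tide/loading-corrected meter readings;
    is_base_station: boolean mask marking re-occupations of the base benchmark.
    """
    times_hr = np.asarray(times_hr, float)
    readings_mgal = np.asarray(readings_mgal, float)
    mask = np.asarray(is_base_station, bool)
    drift_rate, _ = np.polyfit(times_hr[mask], readings_mgal[mask], 1)   # mGal per hour
    corrected = readings_mgal - drift_rate * (times_hr - times_hr[0])
    return corrected, drift_rate
```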
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-07
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
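For readers unfamiliar with counterpoise corrections, the familiar two-body (Boys-Bernardi) form is the arithmetic at the heart of the schemes discussed above: each fragment is evaluated in the full cluster basis so the superposition error largely cancels. The approximate site-site and Valiron-Mayer schemes for large n-body clusters proposed in the paper are not reproduced here.

```python
def counterpoise_interaction_energy(e_dimer_in_ab, e_a_in_ab, e_b_in_ab):
    """Two-body counterpoise-corrected interaction energy: all three energies are computed
    in the same (dimer) basis set, e.g., in hartree."""
    return e_dimer_in_ab - e_a_in_ab - e_b_in_ab
```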
Bodily systems and the spatial-functional structure of the human body.
Smith, Barry; Munn, Katherine; Papakin, Igor
2004-01-01
The human body is a system made of systems. The body is divided into bodily systems proper, such as the endocrine and circulatory systems, which are subdivided into many sub-systems at a variety of levels, whereby all systems and subsystems engage in massive causal interaction with each other and with their surrounding environments. Here we offer an explicit definition of bodily system and provide a framework for understanding their causal interactions. Medical sciences provide at best informal accounts of basic notions such as system, process, and function, and while such informality is acceptable in documentation created for human beings, it falls short of what is needed for computer representations. In our analysis we will accordingly provide the framework for a formal definition of bodily system and of associated notions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Liu, Yangang
2014-12-18
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude less computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
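The key ingredient of the approach described above, a spatial autocorrelation function of the cloud field, can be computed efficiently for a gridded (periodic) scene with the Wiener-Khinchin relation; the sketch below is a generic estimator, not the parameterization itself.

```python
import numpy as np

def spatial_autocorrelation(field):
    """Normalized, periodic spatial autocorrelation of a 2D field (e.g., cloud water path)."""
    anomaly = field - field.mean()
    power = np.abs(np.fft.fft2(anomaly)) ** 2            # Wiener-Khinchin: ACF = IFFT(power)
    acf = np.real(np.fft.ifft2(power))
    return np.fft.fftshift(acf / acf.flat[0])            # zero-lag value normalized to 1
```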
Koopmans-Compliant Spectral Functionals for Extended Systems
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc Linh; Colonna, Nicola; Ferretti, Andrea; Marzari, Nicola
2018-04-01
Koopmans-compliant functionals have been shown to provide accurate spectral properties for molecular systems; this accuracy is driven by the generalized linearization condition imposed on each charged excitation, i.e., on changing the occupation of any orbital in the system, while accounting for screening and relaxation from all other electrons. In this work, we discuss the theoretical formulation and the practical implementation of this formalism to the case of extended systems, where a third condition, the localization of Koopmans's orbitals, proves crucial to reach seamlessly the thermodynamic limit. We illustrate the formalism by first studying one-dimensional molecular systems of increasing length. Then, we consider the band gaps of 30 paradigmatic solid-state test cases, for which accurate experimental and computational results are available. The results are found to be comparable with the state of the art in many-body perturbation theory, notably using just a functional formulation for spectral properties and the generalized-gradient approximation for the exchange and correlation functional.
Sandia National Laboratories: Careers: Hiring Process
Visit our Careers tool to search for jobs and register for an account. Registering will enable notifications.
ERIC Educational Resources Information Center
Pondy, Dorothy, Comp.
The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…
Code of Federal Regulations, 2010 CFR
2010-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS... purpose computer software and for network software. Subsidiary records for this account shall also include..., not including software, having a life of one year or less shall be charged directly to Account 6564...
NASA Astrophysics Data System (ADS)
Ignatova, V. A.; Möller, W.; Conard, T.; Vandervorst, W.; Gijbels, R.
2005-06-01
The TRIDYN collisional computer simulation has been modified to account for emission of ionic species and molecules during sputter depth profiling, by introducing a power law dependence of the ion yield as a function of the oxygen surface concentration and by modelling the sputtering of monoxide molecules. The results are compared to experimental data obtained with dual beam TOF SIMS depth profiling of ZrO2/SiO2/Si high-k dielectric stacks with thicknesses of the SiO2 interlayer of 0.5, 1, and 1.5 nm. Reasonable agreement between the experiment and the computer simulation is obtained for most of the experimental features, demonstrating the effects of ion-induced atomic relocation, i.e., atomic mixing and recoil implantation, and preferential sputtering. The depth scale of the obtained profiles is significantly distorted by recoil implantation and the depth-dependent ionization factor. A pronounced double-peak structure in the experimental profiles related to Zr is not explained by the computer simulation, and is attributed to ion-induced bond breaking and diffusion, followed by a decoration of the interfaces by either mobile Zr or O.
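As a minimal illustration of the power-law ionization model described above, the snippet below scales a secondary-ion yield with the oxygen surface concentration. The exponent and reference values are placeholders, not the parameters used in the modified TRIDYN code.

    import numpy as np

    def ion_yield(c_oxygen, y_ref=1.0, c_ref=0.5, exponent=2.0):
        """Power-law dependence of the secondary-ion yield on the oxygen surface
        concentration: Y = y_ref * (c_O / c_ref) ** n.
        y_ref, c_ref and the exponent n are hypothetical placeholders."""
        return y_ref * (np.asarray(c_oxygen) / c_ref) ** exponent

    # Ion signal = (sputtered flux of the element) * (concentration-dependent yield).
    c_O = np.linspace(0.1, 0.6, 6)          # oxygen surface concentration profile
    signal = 1e4 * ion_yield(c_O)           # arbitrary sputtered flux of 1e4 atoms
    print(signal)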
Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes
2012-01-01
Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines. However, this requires hardware and software solutions specially configured for the purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best possible performance because of the heterogeneity and decentralized management of its resources. It is therefore necessary to develop software that takes these peculiarities into account. To this end, we developed an algorithm that optimizes the operation of the de novo assembly software ABySS in grids. We ran ABySS with and without our algorithm in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, and that it improved genome assembly time in computational grids without changing assembly quality. PMID:22461785
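The abstract does not give the algorithm itself; as a hedged sketch of the general idea, heterogeneity-aware scheduling can be approximated by splitting the read set in proportion to each node's measured speed, as below. The node speeds, the split function, and the chunk sizes are illustrative assumptions only.

    def partition_reads(num_reads, node_speeds):
        """Split a read set across grid nodes in proportion to their relative speed,
        so faster nodes receive larger chunks (illustrative only)."""
        total = sum(node_speeds)
        shares = [int(num_reads * s / total) for s in node_speeds]
        shares[-1] += num_reads - sum(shares)   # assign any rounding remainder to the last node
        return shares

    # Hypothetical heterogeneous grid: relative node speeds in arbitrary units.
    speeds = [1.0, 2.5, 0.5, 4.0]
    print(partition_reads(1_000_000, speeds))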
NASA Technical Reports Server (NTRS)
Stoll, Frederick
1993-01-01
The NLPAN computer code uses a finite-strip approach to the analysis of thin-walled prismatic composite structures such as stiffened panels. The code can model in-plane axial loading, transverse pressure loading, and constant through-the-thickness thermal loading, and can account for shape imperfections. The NLPAN code represents an attempt to extend the buckling analysis of the VIPASA computer code into the geometrically nonlinear regime. Buckling mode shapes generated using VIPASA are used in NLPAN as global functions for representing displacements in the nonlinear regime. While the NLPAN analysis is approximate in nature, it is computationally economical in comparison with finite-element analysis, and is thus suitable for use in preliminary design and design optimization. A comprehensive description of the theoretical approach of NLPAN is provided. A discussion of some operational considerations for the NLPAN code is included. NLPAN is applied to several test problems in order to demonstrate new program capabilities, and to assess the accuracy of the code in modeling various types of loading and response. User instructions for the NLPAN computer program are provided, including a detailed description of the input requirements and example input files for two stiffened-panel configurations.
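Schematically, using buckling mode shapes as global functions amounts to a Rayleigh-Ritz expansion of the nonlinear displacement field; the notation below is a generic illustration of that idea, not NLPAN's actual formulation.

    % Displacement field expanded in VIPASA buckling modes phi_i used as global
    % functions; the modal amplitudes q_i are the nonlinear unknowns.
    \[
      w(x, y) \;\approx\; w_0(x, y) \;+\; \sum_{i=1}^{N} q_i \,\phi_i(x, y),
    \]
    % where w_0 represents an initial shape imperfection and the q_i are obtained
    % by solving the geometrically nonlinear equilibrium equations.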
Recurrent network dynamics reconciles visual motion segmentation and integration.
Medathati, N V Kartheek; Rankin, James; Meso, Andrew I; Kornprobst, Pierre; Masson, Guillaume S
2017-09-12
In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint based on a linear-nonlinear feed-forward cascade does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime the network can switch from motion integration to segmentation, thus being able to compute either a single pattern motion or to superpose multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
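As a hedged illustration of how an inhibition regime can switch a ring network between integration and segmentation, the sketch below implements a generic rate-based ring model (local excitation, broad inhibition). It is not the authors' model; the connectivity profile, gain, and parameter values are assumptions for demonstration only.

    import numpy as np

    def ring_network(input_drive, w_exc=1.0, w_inh=0.5, steps=2000, dt=0.01):
        """Generic ring rate model: dr/dt = -r + [W r + input]_+ with local excitation
        and broad inhibition; stronger inhibition pushes the network from vector
        averaging / superposition toward winner-take-all behavior."""
        n = input_drive.size
        theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
        W = w_exc * np.exp(-d**2 / 0.5) - w_inh          # local excitation, global inhibition
        r = np.zeros(n)
        for _ in range(steps):
            r += dt * (-r + np.maximum(W @ r / n + input_drive, 0.0))
        return r

    # Two unequal motion inputs; increasing w_inh biases the network from
    # integration (both activity bumps coexist) toward winner-take-all segmentation.
    n = 64
    stim = np.zeros(n)
    stim[16], stim[48] = 1.0, 0.8
    for w_inh in (0.2, 2.0):
        r = ring_network(stim, w_inh=w_inh)
        print(w_inh, round(r[16], 3), round(r[48], 3))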
Climate@Home: Crowdsourcing Climate Change Research
NASA Astrophysics Data System (ADS)
Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.
2011-12-01
Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building super-computers that are capable of running advanced climate models, which help scientists understand climate change mechanisms and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only super-computers could handle such a large processing load. By orchestrating massive numbers of personal computers to perform atomized data processing tasks, investments in new super-computers, the energy consumed by super-computers, and the carbon released by super-computers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the Climate@Home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined on the server side. Climate scientists configure computer model parameters through the portal user interface. After model configuration, scientists then launch the computing task. Next, data is atomized and distributed to computing engines that are running on citizen participants' computers. Scientists receive notifications on the completion of computing tasks and examine modeling results via the visualization modules of the portal. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built for proof of concept. Three forums have been set up for different groups of users to share information on the science, technology, and educational outreach aspects of the project. A Facebook account has been set up to distribute messages via the most popular social networking platform. New threads are synchronized from the forums to Facebook. A mapping tool displays the geographic locations of the participants and the status of tasks on each client node. A group of users has been invited to test functions such as the forums, blogs, and computing resource monitoring.
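The task-atomization and dispatch workflow described above can be sketched, in a hedged and purely illustrative way, as a server that splits one model run into independent chunks and assigns them to registered volunteer engines. The Task structure, chunking scheme, and volunteer names are hypothetical, not the project's actual implementation.

    from dataclasses import dataclass
    from itertools import cycle

    @dataclass
    class Task:
        task_id: int
        parameters: dict      # model parameters configured by the scientists
        chunk: tuple          # e.g. (start_step, end_step) of the simulation

    def atomize_run(parameters, total_steps, chunk_size):
        """Split one climate-model run into independent tasks (illustrative)."""
        return [Task(i, parameters, (s, min(s + chunk_size, total_steps)))
                for i, s in enumerate(range(0, total_steps, chunk_size))]

    def dispatch(tasks, volunteers):
        """Round-robin assignment of tasks to registered volunteer computing engines."""
        return {t.task_id: v for t, v in zip(tasks, cycle(volunteers))}

    tasks = atomize_run({"co2_ppm": 420}, total_steps=1200, chunk_size=100)
    print(dispatch(tasks, ["alice-pc", "bob-laptop", "carol-desktop"]))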
Functional connectivity dynamics: modeling the switching behavior of the resting state.
Hansen, Enrique C A; Battaglia, Demian; Spiegler, Andreas; Deco, Gustavo; Jirsa, Viktor K
2015-01-15
Functional connectivity (FC) sheds light on the interactions between different brain regions. Besides basic research, it is clinically relevant for applications in Alzheimer's disease, schizophrenia, presurgical planning, epilepsy, and traumatic brain injury. Simulations of whole-brain mean-field computational models with realistic connectivity determined by tractography studies enable us to reproduce with accuracy aspects of average FC in the resting state. Most computational studies, however, did not address the prominent non-stationarity in resting state FC, which may result in large intra- and inter-subject variability and thus preclude an accurate individual predictability. Here we show that this non-stationarity reveals a rich structure, characterized by rapid transitions switching between a few discrete FC states. We also show that computational models optimized to fit time-averaged FC do not reproduce these spontaneous state transitions and, thus, are not qualitatively superior to simplified linear stochastic models, which account for the effects of structure alone. We then demonstrate that a slight enhancement of the non-linearity of the network nodes is sufficient to broaden the repertoire of possible network behaviors, leading to modes of fluctuations, reminiscent of some of the most frequently observed Resting State Networks. Because of the noise-driven exploration of this repertoire, the dynamics of FC qualitatively change now and display non-stationary switching similar to empirical resting state recordings (Functional Connectivity Dynamics (FCD)). Thus FCD bear promise to serve as a better biomarker of resting state neural activity and of its pathologic alterations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
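For readers unfamiliar with the FCD construct, a standard sliding-window estimate is sketched below: an FC matrix is computed in each window, and the FCD matrix correlates the FC patterns of every pair of windows. This is a generic recipe rather than the authors' exact pipeline; the window length, step, and synthetic BOLD data are placeholders.

    import numpy as np

    def fcd_matrix(bold, window=60, step=5):
        """Sliding-window functional connectivity dynamics (FCD): FC is estimated in
        each window, and FCD[i, j] is the correlation between the upper triangles of
        the FC matrices of windows i and j. Window length and step are illustrative."""
        n_time, n_regions = bold.shape
        iu = np.triu_indices(n_regions, k=1)
        fc_streams = []
        for start in range(0, n_time - window + 1, step):
            fc = np.corrcoef(bold[start:start + window].T)
            fc_streams.append(fc[iu])
        return np.corrcoef(np.array(fc_streams))

    # Hypothetical BOLD time series: 600 time points, 30 regions.
    rng = np.random.default_rng(1)
    bold = rng.standard_normal((600, 30))
    print(fcd_matrix(bold).shape)   # (number of windows, number of windows)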
Thermophysical properties of krypton-helium gas mixtures from ab initio pair potentials
NASA Astrophysics Data System (ADS)
Jäger, Benjamin; Bich, Eckard
2017-06-01
A new potential energy curve for the krypton-helium atom pair was developed using supermolecular ab initio computations for 34 interatomic distances. Values for the interaction energies at the complete basis set limit were obtained from calculations with the coupled-cluster method with single, double, and perturbative triple excitations and correlation consistent basis sets up to sextuple-zeta quality augmented with mid-bond functions. Higher-order coupled-cluster excitations up to the full quadruple level were accounted for in a scheme of successive correction terms. Core-core and core-valence correlation effects were included. Relativistic corrections were considered not only at the scalar relativistic level but also using full four-component Dirac-Coulomb and Dirac-Coulomb-Gaunt calculations. The fitted analytical pair potential function is characterized by a well depth of 31.42 K with an estimated standard uncertainty of 0.08 K. Statistical thermodynamics was applied to compute the krypton-helium cross second virial coefficients. The results show a very good agreement with the best experimental data. Kinetic theory calculations based on classical and quantum-mechanical approaches for the underlying collision dynamics were utilized to compute the transport properties of krypton-helium mixtures in the dilute-gas limit for a large temperature range. The results were analyzed with respect to the orders of approximation of kinetic theory and compared with experimental data. Especially the data for the binary diffusion coefficient confirm the predictive quality of the new potential. Furthermore, inconsistencies between two empirical pair potential functions for the krypton-helium system from the literature could be resolved.
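For context, the classical statistical-thermodynamics route from a pair potential V(r) to the cross second virial coefficient can be sketched as below. A Lennard-Jones placeholder potential (with the well depth quoted in the abstract but an assumed size parameter) stands in for the fitted Kr-He ab initio curve, which is not reproduced here, and quantum corrections are ignored.

    import numpy as np
    from scipy.integrate import quad

    K_B = 1.380649e-23        # Boltzmann constant, J/K
    N_A = 6.02214076e23       # Avogadro constant, 1/mol

    def second_virial(potential, temperature, r_max=5e-9):
        """Classical cross second virial coefficient in m^3/mol:
        B(T) = -2 pi N_A * integral_0^inf [exp(-V(r)/kT) - 1] r^2 dr."""
        integrand = lambda r: (np.exp(-potential(r) / (K_B * temperature)) - 1.0) * r**2
        integral, _ = quad(integrand, 1e-11, r_max, limit=200)
        return -2.0 * np.pi * N_A * integral

    # Placeholder Lennard-Jones potential (NOT the fitted Kr-He ab initio potential).
    eps = 31.42 * K_B          # well depth of 31.42 K taken from the abstract
    sigma = 3.7e-10            # size parameter in metres, assumed for illustration
    lj = lambda r: 4 * eps * ((sigma / r)**12 - (sigma / r)**6)
    print(second_virial(lj, 300.0))   # m^3/mol at 300 K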
Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
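Variable-alpha extrapolations of this kind typically fit energies to an inverse-power form in the cardinal number X of the basis set; the sketch below shows such a three-point fit. The functional form is a common convention and the sample energies are generic illustrations, not the paper's data.

    import numpy as np
    from scipy.optimize import curve_fit

    def cbs_model(X, e_cbs, a, alpha):
        """Variable-alpha extrapolation: E(X) = E_CBS + A * X**(-alpha)."""
        return e_cbs + a * X**(-alpha)

    # Hypothetical correlation energies (hartree) for triple-, quadruple-, and
    # quintuple-zeta basis sets (cardinal numbers X = 3, 4, 5).
    X = np.array([3.0, 4.0, 5.0])
    E = np.array([-0.4502, -0.4611, -0.4655])
    (e_cbs, a, alpha), _ = curve_fit(cbs_model, X, E, p0=(-0.47, 0.1, 3.0))
    print(f"E_CBS = {e_cbs:.4f} Eh, alpha = {alpha:.2f}")
    # Per the abstract's heuristic, alpha < 3 or alpha > 4.5 would flag an
    # unreliable extrapolation for this basis-set series.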
Microwave model prediction and verifications for vegetated terrain
NASA Technical Reports Server (NTRS)
Fung, A. K.
1985-01-01
To understand the scattering properties of deciduous and coniferous vegetation, scattering models were developed assuming either a disc-type leaf or a needle-type leaf. The major effort is to calculate the corresponding scattering phase functions; each function is then used in a radiative transfer formulation to compute the scattering intensity and, consequently, the scattering coefficient. The radiative transfer formulation takes into account the irregular ground surface by including the rough soil surface in the boundary condition. Thus, the scattering model accounts for volume scattering inside the vegetation layer, surface scattering from the ground, and the interaction between scattering from the soil surface and the vegetation volume. The contribution to backscattering by each of the three scattering mechanisms is illustrated along with the effects of each layer or surface parameter. The major difference between the two types of vegetation is that when the incident wavelength is comparable to the size of the leaf, a peak appears in the mid-angular region of the backscattering curve for the disc-type leaf, whereas a dip appears in the same region for the needle-type leaf.
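Schematically, the three contributions described above add to the total backscattering coefficient; the decomposition below is generic radiative-transfer bookkeeping under an assumed two-way canopy attenuation factor, not the paper's exact expressions.

    % Total backscattering coefficient as the sum of the three mechanisms discussed
    % above (generic decomposition; notation and attenuation factor assumed):
    \[
      \sigma^{0}(\theta) \;=\;
      \sigma^{0}_{\mathrm{vol}}(\theta)
      \;+\; \sigma^{0}_{\mathrm{surf}}(\theta)\, L^{2}(\theta)
      \;+\; \sigma^{0}_{\mathrm{int}}(\theta),
    \]
    % where sigma_vol is volume scattering in the vegetation layer, sigma_surf is
    % rough-soil surface scattering attenuated by the two-way canopy loss L^2, and
    % sigma_int is the surface-volume interaction term.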
A dictionary based informational genome analysis
2012-01-01
Background In the post-genomic era, several methods of computational genomics are emerging to understand how information is structured within whole genomes. The literature of the last five years includes several alignment-free methods that have arisen as alternative metrics for the dissimilarity of biological sequences. Among others, recent approaches are based on the empirical frequencies of DNA k-mers in whole genomes. Results Any set of words (factors) occurring in a genome provides a genomic dictionary. About sixty genomes were analyzed by means of informational indexes based on genomic dictionaries, where a systemic view replaces local sequence analysis. A software prototype applying the methodology outlined here carried out computations on genomic data. We computed informational indexes and built genomic dictionaries of different sizes, along with their frequency distributions. The software performed three main tasks: computation of informational indexes, storage of these indexes in a database, and index analysis and visualization. Validation was done by investigating genomes of various organisms. A systematic analysis of genomic repeats of several lengths, which is of keen interest in biology (for example, to detect over-represented functional sequences such as promoters), was discussed and suggested a method to define synthetic genetic networks. Conclusions We introduced a methodology based on dictionaries and an efficient motif-finding software application for comparative genomics. This approach could be extended along many lines of investigation, namely exported to other contexts of computational genomics, as a basis for discriminating genomic pathologies. PMID:22985068
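A minimal sketch of the dictionary construct is given below: all k-mers of a sequence with their empirical frequencies, plus one simple informational index (Shannon entropy of the frequency distribution). The toy sequence and the choice of entropy as the index are illustrative assumptions; the paper's own indexes are not reproduced here.

    from collections import Counter
    from math import log2

    def genomic_dictionary(genome, k):
        """All k-mers (words of length k) occurring in the genome, with their
        empirical frequencies -- a 'genomic dictionary' in the sense above."""
        counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
        total = sum(counts.values())
        return {word: n / total for word, n in counts.items()}

    def empirical_entropy(freqs):
        """A simple informational index: Shannon entropy (in bits) of the k-mer
        frequency distribution (illustrative; not the paper's exact indexes)."""
        return -sum(p * log2(p) for p in freqs.values())

    genome = "ACGTACGTTGCAACGT"                 # toy sequence
    d = genomic_dictionary(genome, k=3)
    print(len(d), round(empirical_entropy(d), 3))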
Probabilistic analysis of tsunami hazards
Geist, E.L.; Parsons, T.
2006-01-01
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
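As a hedged illustration of the computationally based route mentioned for data-poor sites, the sketch below builds a crude hazard curve by Monte Carlo simulation: sample tsunami sources year by year, convert each to a runup with a placeholder scaling relation, and estimate annual exceedance probabilities. The source rate, magnitude range, and toy runup model are hypothetical stand-ins, not a numerical propagation model or the authors' implementation.

    import numpy as np

    def hazard_curve(annual_rate, runup_model, thresholds, n_years=20000, seed=0):
        """Monte Carlo PTHA sketch: simulate many years of tsunami sources, compute a
        runup for each event, and estimate the annual probability of exceeding each
        runup threshold."""
        rng = np.random.default_rng(seed)
        n_events = rng.poisson(annual_rate, size=n_years)      # events per simulated year
        max_runup = np.zeros(n_years)
        for year in range(n_years):
            if n_events[year]:
                magnitudes = rng.uniform(7.0, 9.2, size=n_events[year])
                max_runup[year] = runup_model(magnitudes).max()
        return [(h, np.mean(max_runup >= h)) for h in thresholds]

    # Placeholder runup model (NOT a tsunami propagation model): a crude
    # magnitude-to-runup scaling with lognormal scatter.
    def toy_runup(mw, rng=np.random.default_rng(1)):
        return 10 ** (0.5 * (mw - 7.5)) * rng.lognormal(0.0, 0.5, size=mw.shape)

    print(hazard_curve(0.1, toy_runup, thresholds=[0.5, 1.0, 2.0, 5.0]))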